Sample records for model parameters initial

  1. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  2. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial condition is optimized, and in which both the initial condition and this additional model parameter are optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
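The abstract above describes recovering αTe and the initial condition jointly from SST data in a twin experiment. A minimal sketch of that idea, with a made-up scalar toy model standing in for the ICM and a grid search standing in for the 4D-Var adjoint descent (all dynamics, coefficients, and names here are illustrative assumptions):

```python
import math

def sst_run(alpha_te, t0, nsteps=100, dt=0.1):
    """Toy scalar SST-anomaly model standing in for the ICM: entrainment
    warming alpha_te * F(SL) against linear damping. Every coefficient and
    the form of F are illustrative assumptions."""
    T, out = t0, []
    for k in range(nsteps):
        sl = math.sin(0.2 * k)           # prescribed sea-level variation
        te = alpha_te * sl               # Te = alpha_Te * F_Te(SL), F_Te = identity here
        T += dt * (0.8 * te - 0.3 * T)   # entrainment warming vs. damping
        out.append(T)
    return out

# Twin experiment: a "truth" run generates the synthetic SST observations.
alpha_true, t0_true = 1.5, 0.2
obs = sst_run(alpha_true, t0_true)

def cost(alpha, t0):
    """4D-Var-style misfit between a candidate trajectory and the observations."""
    return sum((s - o) ** 2 for s, o in zip(sst_run(alpha, t0), obs))

# Joint recovery of (alpha_Te, initial condition) by grid search, standing in
# for the adjoint-based descent of the real 4D-Var system.
best = min(((cost(a / 10, t / 10), a / 10, t / 10)
            for a in range(0, 31) for t in range(-10, 11)),
           key=lambda c: c[0])
print(best[1], best[2])   # recovers alpha_Te = 1.5 and T0 = 0.2
```

With both unknowns optimized jointly the misfit vanishes at the true pair; optimizing the initial condition alone would leave a residual misfit whenever αTe is wrong, which is the motivation the abstract describes.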

  3. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
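The abstract names its two measures without giving formulas; one plausible reading, with assumed details, is sketched below: a frequency-weighted average for the initial parameter value, and the extremum-to-endpoint gap of a fitted curve for stability.

```python
from collections import Counter

def prob_weighted_average(samples, ndigits=2):
    """Initial parameter value as a probability-weighted average: each distinct
    (rounded) reading is weighted by its empirical relative frequency. With
    these weights the estimate equals the mean of the binned readings."""
    binned = [round(x, ndigits) for x in samples]
    freq = Counter(binned)
    n = len(binned)
    return sum(value * count / n for value, count in freq.items())

def stability(curve):
    """Stability of the parameter information, taken here as the largest gap
    between an extremum of the fitted curve and its end points
    (smaller value = steadier parameter)."""
    lo, hi = min(curve), max(curve)
    return max(hi - curve[0], curve[0] - lo, hi - curve[-1], curve[-1] - lo)

resistance = [10.0, 10.1, 10.1, 10.2]     # hypothetical contact-resistance readings
print(prob_weighted_average(resistance))
print(stability([1.0, 1.5, 0.8, 1.0]))
```

Both quantities would then feed the small-sample lifetime model the abstract describes; the weighting and the gap definition here are assumptions, not the paper's exact equations.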

  4. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  5. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    USGS Publications Warehouse

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise in these simulations due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as a rate of bathymetric change less than the rate of sea-level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change.
The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time, of the order of years, is required to initialize models; otherwise the solution will contain bathymetric change that is due not to environmental forcings but rather to improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
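A zero-dimensional control volume of the kind described, with Partheniades-style erosion and Krone-style deposition under a semidiurnal tide, can be sketched as follows; every numerical value is illustrative rather than taken from the chapter:

```python
import math

DRY_DENSITY = 400.0  # kg/m3, assumed bed dry density

def run_bed(tau_ce=0.2, u_max=1.0, years=1.0):
    """Zero-D cohesive sediment control volume under a semidiurnal tide:
    Partheniades erosion above tau_ce, Krone deposition below tau_cd.
    The bed is treated as unlimited, so the return value is a NET bed-level
    change in metres and may be negative. All parameters are illustrative."""
    rho_cd = 1000.0 * 0.0025          # quadratic drag: tau = rho * Cd * u^2
    h, c = 10.0, 0.05                 # depth (m), suspended conc. (kg/m3)
    m_e, w_s, tau_cd = 5e-5, 5e-4, 0.15
    omega = 2.0 * math.pi / 44712.0   # semidiurnal (M2) frequency
    bed_mass, dt = 0.0, 600.0         # cumulative bed mass (kg/m2), 10-min step
    for k in range(int(years * 365.25 * 86400 / dt)):
        u = u_max * abs(math.sin(omega * k * dt))
        tau = rho_cd * u * u
        ero = m_e * max(0.0, tau / tau_ce - 1.0)      # erosion flux (kg/m2/s)
        dep = w_s * c * max(0.0, 1.0 - tau / tau_cd)  # deposition flux (kg/m2/s)
        bed_mass += (dep - ero) * dt
        c += (ero - dep) * dt / h                     # mass-conserving exchange
    return bed_mass / DRY_DENSITY

base = run_bed(tau_ce=0.2)
perturbed = run_bed(tau_ce=0.2 * 1.1)   # 10% perturbation favouring deposition
print(perturbed - base)                 # > 0: the perturbed run deposits more
```

Raising the critical shear stress for erosion by 10% favours deposition, mirroring the perturbation experiments the abstract reports for the zero-dimensional model.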

  6. SURF Model Calibration Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-03-10

    SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is that there is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
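The heart of fitting initiation parameters to Pop plot data is the near-linearity of run distance versus input pressure in log-log space. A minimal sketch with synthetic, illustrative numbers (not SURF's actual calibration machinery):

```python
import math

# Synthetic Pop-plot data (illustrative, not measured values):
# shock input pressure P (GPa) vs run distance to detonation x (mm).
data = [(3.0, 12.0), (5.0, 4.5), (8.0, 1.8), (12.0, 0.85)]

def fit_pop_plot(points):
    """Least-squares fit of log10(x) = a + b * log10(P); on a Pop plot the
    initiation data fall close to this straight line."""
    lp = [math.log10(p) for p, _ in points]
    lx = [math.log10(x) for _, x in points]
    n = len(points)
    mp, mx = sum(lp) / n, sum(lx) / n
    b = sum((u - mp) * (v - mx) for u, v in zip(lp, lx)) / \
        sum((u - mp) ** 2 for u in lp)
    a = mx - b * mp
    return a, b

a, b = fit_pop_plot(data)
print(round(a, 2), round(b, 2))   # slope b < 0: higher pressure, shorter run distance
```

In the methodology described, the analogue of `data` would come from 1-D simulations, and the initiation parameters would be adjusted until the numerical Pop plot line matches the experimental one.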

  7. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  8. Effect of Surface Tension Anisotropy and Welding Parameters on Initial Instability Dynamics During Solidification: A Phase-Field Study

    NASA Astrophysics Data System (ADS)

    Yu, Fengyi; Wei, Yanhong

    2018-05-01

    The effects of surface tension anisotropy and welding parameters on initial instability dynamics during gas tungsten arc welding of an Al-alloy are investigated by a quantitative phase-field model. The results show that the surface tension anisotropy and welding parameters affect the initial instability dynamics in different ways during welding. The surface tension anisotropy does not influence the solute diffusion process but does affect the stability of the solid/liquid interface during solidification. The welding parameters affect the initial instability dynamics by varying the growth rate and thermal gradient. The incubation time decreases, and the initial wavelength remains stable as the welding speed increases. When welding power increases, the incubation time increases and the initial wavelength slightly increases. Experiments were performed for the same set of welding parameters used in modeling, and the results of the experiments and simulations were in good agreement.

  9. Determination of remodeling parameters for a strain-adaptive finite element model of the distal ulna.

    PubMed

    Neuert, Mark A C; Dunning, Cynthia E

    2013-09-01

    Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
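A strain-adaptive density update with a homeostatic set point K and a dead-zone threshold s, in the spirit of the model described (the load law, rate constant, and bounds below are assumptions for illustration):

```python
def remodel(rho0, stimulus, K=0.004, s=0.5, rate=50.0, dt=0.1, steps=200,
            rho_min=0.01, rho_max=1.8):
    """Strain-adaptive update with a 'lazy zone': density rho changes only when
    the strain-energy stimulus per unit mass, U/rho, deviates from the
    homeostatic level K by more than the threshold fraction s.
    All numerical values are illustrative."""
    rho = rho0
    for _ in range(steps):
        U = stimulus(rho)                # strain energy density at current rho
        psi = U / rho                    # stimulus per unit mass
        if psi > (1 + s) * K:            # overloaded -> deposit bone
            rho += rate * (psi - (1 + s) * K) * dt
        elif psi < (1 - s) * K:          # underloaded (stress shielded) -> resorb
            rho += rate * (psi - (1 - s) * K) * dt
        rho = min(max(rho, rho_min), rho_max)
    return rho

# Constant-load toy: U ~ sigma^2 / (2E) with stiffness E rising with density
# (assumed power law), so the stimulus falls as density grows.
stim = lambda rho: 1e-4 / rho ** 2
print(round(remodel(0.5, stim), 3))
```

Starting from a homogeneous density, the update drives rho until the stimulus re-enters the dead zone [(1-s)K, (1+s)K], which is the steady state the abstract's refinement procedure tunes K and s against.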

  10. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles.
The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific, initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds. The statistical analyses of watershed-specific, initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. 
The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve number.
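The two-parameter watershed-loss model described above is straightforward to express in code; the units and the example storm below are illustrative:

```python
def excess_rainfall(rain, dt, ia, cl):
    """Initial-abstraction (ia, inches), constant-loss (cl, inches/hour) model:
    rainfall first fills the initial abstraction; afterwards, intensity above
    the constant loss rate becomes runoff-producing excess."""
    stored, excess = 0.0, []
    for r in rain:                        # r: rainfall intensity (in/hr)
        depth = r * dt
        absorb = min(depth, ia - stored)  # fill remaining initial abstraction
        stored += absorb
        remaining = depth - absorb
        loss = min(remaining, cl * dt)    # constant-rate loss
        excess.append((remaining - loss) / dt)
    return excess

# One storm: 0.5 in/hr for 4 hours; Ia = 1.0 in, constant loss 0.2 in/hr.
hyeto = [0.5, 0.5, 0.5, 0.5]
print(excess_rainfall(hyeto, 1.0, 1.0, 0.2))   # -> [0.0, 0.0, 0.3, 0.3]
```

The first two hours fill the 1-inch initial abstraction and produce no excess; thereafter, intensity above the 0.2 in/hr constant loss becomes excess rainfall, exactly the conceptualization in the report. The watershed-specific values of the two parameters are what the report's optimization and regressions estimate.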

  11. Calibration of infiltration parameters on hydrological tank model using runoff coefficient of rational method

    NASA Astrophysics Data System (ADS)

    Suryoputro, Nugroho; Suhardjono; Soetopo, Widandi; Suhartanto, Ery

    2017-09-01

    In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of natural components, and 2) entering initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining realistic initial values takes experience and user knowledge of the model, which is a problem for novice users. This paper presents another approach to estimating the infiltration parameters in the tank model: the parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter value is simply described as the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
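The proposed first guess, the share of rainfall that does not appear as runoff, reduces to a one-line computation (a sketch of the paper's approach, not its calibrated values):

```python
def initial_infiltration_coefficient(runoff_coefficient):
    """First-guess infiltration coefficient for the top tank outlet:
    the fraction of rainfall that does not become direct runoff,
    i.e. 1 - C, where C is the rational-method runoff coefficient."""
    if not 0.0 <= runoff_coefficient <= 1.0:
        raise ValueError("rational-method C must lie in [0, 1]")
    return 1.0 - runoff_coefficient

# Hypothetical C = 0.45 for the catchment gives an initial guess of 0.55,
# which would then be refined in the usual tank-model calibration.
print(initial_infiltration_coefficient(0.45))
```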

  12. Algebraic method for parameter identification of circuit models for batteries under non-zero initial condition

    NASA Astrophysics Data System (ADS)

    Devarakonda, Lalitha; Hu, Tingshu

    2014-12-01

    This paper presents an algebraic method for parameter identification of Thevenin's equivalent circuit models for batteries under non-zero initial condition. In traditional methods, it was assumed that all capacitor voltages have zero initial conditions at the beginning of each charging/discharging test. This would require a long rest time between two tests, leading to very lengthy tests for a charging/discharging cycle. In this paper, we propose an algebraic method which can extract the circuit parameters together with initial conditions. This would theoretically reduce the rest time to 0 and substantially accelerate the testing cycles.
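A small sketch of the idea with a first-order Thevenin branch: for a fixed candidate time constant, the relaxation voltage is linear in its remaining coefficients, so they, including the non-zero initial RC voltage, follow algebraically from least squares. The synthetic data and values are illustrative, and the paper's own derivation may differ:

```python
import math

# Synthetic terminal-voltage relaxation after a current step, generated from a
# first-order Thevenin model with a NON-zero initial RC-branch voltage
# (illustrative values): v(t) = A + B * exp(-t / tau).
tau_true, A_true, B_true = 30.0, 3.6, 0.25
ts = [i * 2.0 for i in range(50)]
vs = [A_true + B_true * math.exp(-t / tau_true) for t in ts]

def fit_relaxation(ts, vs, tau_grid):
    """For each candidate time constant, v(t) = A + B*exp(-t/tau) is linear in
    (A, B); solve that 2x2 least-squares problem in closed form and keep the
    best tau. The recovered B encodes the initial RC-branch voltage, so no
    rest-to-zero assumption between tests is needed."""
    best = None
    for tau in tau_grid:
        e = [math.exp(-t / tau) for t in ts]
        n, se, see = len(ts), sum(e), sum(x * x for x in e)
        sv, sev = sum(vs), sum(x * v for x, v in zip(e, vs))
        B = (n * sev - se * sv) / (n * see - se * se)
        A = (sv - B * se) / n
        resid = sum((A + B * x - v) ** 2 for x, v in zip(e, vs))
        if best is None or resid < best[0]:
            best = (resid, tau, A, B)
    return best[1:]

tau, A, B = fit_relaxation(ts, vs, [10 + k for k in range(41)])  # tau in 10..50 s
print(tau, round(A, 3), round(B, 3))   # recovers 30 3.6 0.25
```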

  13. Human Resource Scheduling in Performing a Sequence of Discrete Responses

    DTIC Science & Technology

    2009-02-28

    each is a graph comparing simulated results of each respective model with data from Experiment 3b. As described below the parameters of the model...initiated in parallel with ongoing Central operations on another. To fix model parameters we estimated the range of times to perform the sum of the...standard deviation for each parameter was set to 50% of mean value. Initial simulations found no meaningful differences between setting the standard

  14. An adaptive control scheme for a flexible manipulator

    NASA Technical Reports Server (NTRS)

    Yang, T. C.; Yang, J. C. S.; Kudva, P.

    1987-01-01

    The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.

  15. Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity

    NASA Astrophysics Data System (ADS)

    Li, Dunzhu; Gurnis, Michael; Stadler, Georg

    2017-04-01

    We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.

  16. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters at sites with measurement data such as NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle, and the average diurnal NEE course (error reduction by a factor of 1.6); ii) parameters estimated from seasonal NEE data outperformed parameters estimated from yearly data; iii) those seasonal parameters were also often significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated.
However, simulation results also indicate that the estimated parameters may mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  17. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential, and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
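What a first-order sensitivity index measures can be illustrated with a brute-force grid estimate on a toy two-input model (a stand-in for the efficient Sobol'/FAST estimators applied to SAC-SMA, with made-up weights):

```python
import itertools

def sobol_first_order(model, d, n=32):
    """Grid-based estimate of first-order Sobol' indices for a model with d
    independent U(0,1) inputs: S_i = Var(E[Y | X_i]) / Var(Y). A full
    factorial grid is used, so this is feasible only for tiny d; it is an
    illustration, not the efficient Sobol'/FAST machinery."""
    grid = [(k + 0.5) / n for k in range(n)]          # cell midpoints
    pts = list(itertools.product(grid, repeat=d))
    ys = [model(*p) for p in pts]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    indices = []
    for i in range(d):
        cond = {}
        for p, y in zip(pts, ys):                     # group by the value of X_i
            cond.setdefault(p[i], []).append(y)
        ce = [sum(v) / len(v) for v in cond.values()] # E[Y | X_i = x]
        v_i = sum((m - mean) ** 2 for m in ce) / len(ce)
        indices.append(v_i / var)
    return indices

# Toy response dominated by its first input (weights are made up):
s1, s2 = sobol_first_order(lambda x1, x2: 4.0 * x1 + x2, 2)
print(round(s1, 3), round(s2, 3))   # ~16/17 and ~1/17: X1 dominates
```

For this additive model the indices are 16/17 and 1/17 analytically, so a ranking by first-order index correctly flags the first input as most influential, which is the kind of agreement the study checks between Sobol' and FAST.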

  18. A Three-Parameter Model for Predicting Fatigue Life of Ductile Metals Under Constant Amplitude Multiaxial Loading

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Li, Jing; Zhang, Zhong-ping

    2013-04-01

    In this article, a fatigue damage parameter is proposed to assess the multiaxial fatigue lives of ductile metals based on the critical plane concept: Fatigue crack initiation is controlled by the maximum shear strain, and the other important effect in the fatigue damage process is the normal strain and stress. This fatigue damage parameter introduces a stress-correlated factor, which describes the degree of the non-proportional cyclic hardening. Besides, a three-parameter multiaxial fatigue criterion is used to correlate the fatigue lifetime of metallic materials with the proposed damage parameter. Under the uniaxial loading, this three-parameter model reduces to the recently developed Zhang's model for predicting the uniaxial fatigue crack initiation life. The accuracy and reliability of this three-parameter model are checked against the experimental data found in literature through testing six different ductile metals under various strain paths with zero/non-zero mean stress.

  19. Knowledge transmission model with differing initial transmission and retransmission process

    NASA Astrophysics Data System (ADS)

    Wang, Haiying; Wang, Jun; Small, Michael

    2018-10-01

    Knowledge transmission is a cyclic dynamic diffusion process. The rate of acceptance of knowledge differs depending on whether or not the recipient has previously held the knowledge. In this paper, the knowledge transmission process is divided into an initial and a retransmission procedure, each with its own transmission and self-learning parameters. Based on an epidemic spreading model, we propose a naive-evangelical-agnostic (VEA) knowledge transmission model and derive mean-field equations to describe the dynamics of knowledge transmission in homogeneous networks. Theoretical analysis identifies a criterion for the persistence of knowledge, i.e., the reproduction number R0 depends on the smaller of the effective parameters of the initial and retransmission processes. Moreover, the final size of evangelical individuals is related only to the retransmission-process parameters. Numerical simulations validate the theoretical analysis. Furthermore, the simulations indicate that increasing the initial transmission parameters, including the first-transmission and self-learning rates of naive individuals, can efficiently accelerate the velocity of knowledge transmission but has no effect on the final size of evangelical individuals. In contrast, the retransmission parameters, including the retransmission and self-learning rates of agnostic individuals, have a significant effect on the rate of knowledge transmission, i.e., the larger these parameters, the greater the final density of evangelical individuals.
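
    The mean-field dynamics of such a cyclic transmission model can be sketched with a simple forward-Euler integration. The compartment structure and every rate constant below are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

# Hedged sketch of a naive-evangelical-agnostic (VEA) mean-field model.
# Assumed transitions: naive individuals (V) acquire knowledge by contact
# (beta1) or self-learning (lam1); evangelical spreaders (E) fade to
# agnostic (A) at rate delta; agnostics re-acquire it by contact (beta2)
# or self-learning (lam2). All rate values are made up for illustration.
beta1, lam1 = 0.30, 0.02   # initial-transmission parameters
beta2, lam2 = 0.10, 0.01   # retransmission parameters
delta = 0.20               # forgetting rate

V, E, A = 0.99, 0.01, 0.0  # initial densities, V + E + A = 1
dt, steps = 0.01, 200_000
for _ in range(steps):
    dV = -beta1 * V * E - lam1 * V
    dE = beta1 * V * E + lam1 * V + beta2 * A * E + lam2 * A - delta * E
    dA = delta * E - beta2 * A * E - lam2 * A
    V += dt * dV
    E += dt * dE
    A += dt * dA

print(V, E, A)  # densities stay normalized; V decays to ~0
```

Under this assumed structure, the long-time density of evangelical individuals is set entirely by the retransmission rates (beta2, lam2, delta), consistent with the qualitative conclusion of the abstract.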

  20. Nucleosynthesis of Iron-Peak Elements in Type-Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Leung, Shing-Chi; Nomoto, Ken'ichi

    The observed features of typical Type Ia supernovae are well modeled as the explosions of carbon-oxygen white dwarfs both near the Chandrasekhar mass and below it. However, observations in the last decade have shown that Type Ia supernovae exhibit a wide diversity, which implies that models covering a wider range of parameters are necessary. Based on the hydrodynamics code we developed, we carry out a parameter study of Chandrasekhar-mass models for Type Ia supernovae. We conduct a series of two-dimensional hydrodynamics simulations of the explosion phase using the turbulent flame model with deflagration-to-detonation transition (DDT). To reconstruct the nucleosynthesis history, we use a tracer-particle scheme. We examine the role of model parameters by examining their influence on the final nucleosynthesis products. The parameters include the initial density, metallicity, initial flame structure, detonation criteria, and so on. We show that the observed chemical evolution of galaxies can help constrain these model parameters.

  1. Migration kinetics of four photo-initiators from paper food packaging to solid food simulants.

    PubMed

    Cai, Huimei; Ji, Shuilin; Zhang, Juzhou; Tao, Gushuai; Peng, Chuanyi; Hou, Ruyan; Zhang, Liang; Sun, Yue; Wan, Xiaochun

    2017-09-01

    The migration behaviour of four photo-initiators (BP, EHA, MBP and Irgacure 907) was studied by 'printing' them onto four different food-packaging materials (Kraft paper, white cardboard, polyethylene (PE)-coated paper and composite paper) and tracking their movement into the food simulant Tenax-TA (porous polymer 2,6-diphenyl furan resin). The results indicated that the migration of the photo-initiators was related to the molecular weight and log Ko/w of each photo-initiator. At different temperatures, the migration rates of the photo-initiators differed among papers of different thicknesses. The amount of each photo-initiator found in the food was closely related to the food matrix. The Weibull model was used to predict the migration into the food simulants by calculating the parameters τ and β and determining their relationship with temperature and paper thickness. The established Weibull model was then used to predict the migration of each photo-initiator into different foods. The two-parameter Weibull model fitted the observed behaviour, with some deviation from the actual migration amounts.
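
    The two-parameter Weibull migration form is simple to fit by least squares. In the sketch below, the functional form M(t)/M_eq = 1 − exp(−(t/τ)^β) follows the standard Weibull migration model, but the τ, β, and time values are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-parameter Weibull migration kinetics: fraction migrated vs. time.
# tau sets the time scale, beta the curve shape; both values here are
# illustrative, not taken from the paper.
def weibull_migration(t, tau, beta):
    return 1.0 - np.exp(-(t / tau) ** beta)

t = np.linspace(0.5, 48.0, 30)                    # contact time, hours
true_tau, true_beta = 12.0, 0.8
frac = weibull_migration(t, true_tau, true_beta)  # noiseless synthetic data

(tau_hat, beta_hat), _ = curve_fit(weibull_migration, t, frac, p0=(5.0, 1.0))
print(tau_hat, beta_hat)
```

In a real study, τ and β would then be regressed against temperature and paper thickness to extrapolate migration to other conditions, as the abstract describes.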

  2. An Open Singularity-Free Cosmological Model with Inflation

    NASA Astrophysics Data System (ADS)

    Karaca, Koray; Bayin, Selçuk

    In the light of recent observations which point to an open universe (Ω0 < 1), we construct an open singularity-free cosmological model by reconsidering a model originally constructed for a closed universe. Our model starts from a nonsingular state called prematter, governed by an inflationary equation of state P = (γp − 1)ρ, where γp (≈ 10⁻³) is a small positive parameter representing the initial vacuum dominance of the universe. Unlike in the closed models, the universe cannot be initially static and hence starts with an initial expansion rate represented by the initial value of the Hubble constant H(0). Therefore, our model is a two-parameter universe model (γp, H(0)). Comparing the predictions of this model for the present properties of the universe with recent observational results, we argue that the model constructed in this work can serve as a realistic universe model.

  3. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is the change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation in the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time found by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations made early in time relative to the passage of the solute front.
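
    A concentration sensitivity of this kind can be checked numerically against an analytical derivative. The sketch below uses an illustrative 1-D advective-dispersive pulse with first-order decay (not the specific layered models of the paper), for which ∂C/∂λ = −tC holds exactly:

```python
import numpy as np

# Sensitivity dC/d(lambda) for a 1-D advective-dispersive Gaussian pulse
# with first-order decay, by central finite differences vs. the exact
# analytical derivative. All parameter values are illustrative.
def conc(x, t, v=1.0, D=0.5, lam=0.1, M=1.0):
    gauss = np.exp(-(x - v * t) ** 2 / (4.0 * D * t))
    return (M / np.sqrt(4.0 * np.pi * D * t)) * gauss * np.exp(-lam * t)

x, t, lam = 8.0, 10.0, 0.1
h = 1e-6
fd_sens = (conc(x, t, lam=lam + h) - conc(x, t, lam=lam - h)) / (2.0 * h)
exact_sens = -t * conc(x, t, lam=lam)   # dC/d(lambda) = -t * C analytically
print(fd_sens, exact_sens)
```

Note how the sensitivity grows in magnitude with t, consistent with statement (3) above: late-time observations carry the most information about the decay parameter.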

  4. The Effect of Initial Cell Concentration on Xylose Fermentation by Pichia stipitis

    NASA Astrophysics Data System (ADS)

    Agbogbo, Frank K.; Coward-Kelly, Guillermo; Torry-Smith, Mads; Wenger, Kevin; Jeffries, Thomas W.

    Xylose was fermented using Pichia stipitis CBS 6054 at different initial cell concentrations. A high initial cell concentration increased the rate of xylose utilization, the rate of ethanol formation, and the ethanol yield. The highest ethanol concentration of 41.0 g/L and a yield of 0.38 g/g were obtained using an initial cell concentration of 6.5 g/L. Even though more xylitol was produced when the initial cell concentrations were high, cell density had no effect on the final ethanol yield. A two-parameter mathematical model was used to predict the cell population dynamics at the different initial cell concentrations. The model parameters a and b correlate with the initial cell concentrations used, with an R² of 0.99.

  5. An AI-based approach to structural damage identification by modal analysis

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hanagud, S.

    1990-01-01

    Flexible-structure damage is presently addressed by a combined model- and parameter-identification approach which employs the AI methodologies of classification, heuristic search, and object-oriented model knowledge representation. The conditions for model-space search convergence to the best model are discussed in terms of search-tree organization and initial model parameter error. In the illustrative example of a truss structure presented, the use of both model and parameter identification is shown to lead to smaller parameter corrections than would be required by parameter identification alone.

  6. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis, by tracking system parameters concurrently with state estimates, is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is then presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented for generating parameter samples for the parallel filter models. The first, selected grid-based stratification (SGBS), divides the parameter space into equally spaced strata. The second uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimension grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set, other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated in linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems.
    These sampling and resampling techniques allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving-average window to filter out noise. The system can be tuned to the desired performance goals by adjusting settings such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, and sample delay.
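
    The Latin Hypercube Sampling step can be sketched compactly: each dimension is divided into n equal strata and each stratum receives exactly one sample, so the sample count need not grow with the parameter dimension. The bounds below are arbitrary placeholders:

```python
import numpy as np

# Minimal Latin Hypercube Sampling sketch in the spirit of GRAPE's
# parameter-space sampling: one sample per stratum in every dimension.
def latin_hypercube(n_samples, bounds, rng):
    dim = len(bounds)
    # One point per stratum per dimension, jittered within the stratum.
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):                 # decorrelate the dimensions
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(7)
samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0), (100.0, 200.0)], rng)
# Each dimension has exactly one sample in each of its 10 equal strata.
print(np.sort(np.floor(10 * samples[:, 0])))
```

Each of the 10 rows could then seed one parallel filter model; a resample simply calls the function again around the current parameter estimate with tightened or shifted bounds.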

  7. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models

    PubMed Central

    Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.

    2007-01-01

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921

  8. An analytical-numerical approach for parameter determination of a five-parameter single-diode model of photovoltaic cells and modules

    NASA Astrophysics Data System (ADS)

    Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart

    2016-04-01

    Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically. On the other hand, a numerical solution of such equations needs a suitable initial guess to converge. This paper presents a new set of approximate analytical solutions for the parameters of the five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point that guarantees the convergence of the numerical analysis. The proposed technique needs only a few data points from the PV current-voltage characteristics, i.e. the open-circuit voltage Voc, the short-circuit current Isc, and the maximum-power-point current and voltage (Im, Vm), making it a fast and low-cost parameter determination technique. The accuracy of the presented theoretical I-V curves is verified against experimental data.
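
    Once the five parameters are in hand, the implicit single-diode equation can be solved for the current at any voltage with Newton's method, as sketched below; the parameter values here are typical textbook numbers, not ones extracted from a real module:

```python
import numpy as np

# Sketch of solving the implicit five-parameter single-diode equation
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# for I at a given V with Newton's method. Illustrative cell values:
Iph, I0, Rs, Rsh, n, Vt = 5.0, 1e-9, 0.01, 300.0, 1.3, 0.02585

def current(V, I_init=0.0, iters=50):
    I = I_init
    for _ in range(iters):
        e = np.exp((V + I * Rs) / (n * Vt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I   # residual
        df = -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0       # df/dI
        I -= f / df
    return I

Isc = current(0.0)          # short-circuit current, close to Iph
print(Isc)
```

This is exactly why a good analytical initial point matters: for poorly chosen starting values the exponential term can make the Newton iteration diverge, which is the problem the paper's approximate solutions address.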

  9. Highway Fuel Consumption Computer Model (Version 1)

    DOT National Transportation Integrated Search

    1974-04-01

    A highway fuel consumption computer model is given. The model allows the computation of fuel consumption of a highway vehicle class as a function of time. The model is of the initial value (in this case initial inventory) and lumped parameter type. P...

  10. Photothermal waves for two temperature with a semiconducting medium under using a dual-phase-lag model and hydrostatic initial stress

    NASA Astrophysics Data System (ADS)

    Lotfy, Kh.

    2017-07-01

    The dual-phase-lag (DPL) model with two different time translations and the Lord-Shulman (LS) theory with one relaxation time are applied to study the effect of hydrostatic initial stress on a medium under the influence of the two-temperature parameter (a new model is introduced using two-temperature theory) and photothermal theory. We solve for the thermal loading at the free surface of a semi-infinite semiconducting medium with coupled plasma waves, including the effect of mechanical force during the photothermal process. The exact expressions for the considered variables are obtained using normal mode analysis, and the two-temperature coefficient ratios are obtained analytically. Numerical results for the field quantities are given in the physical domain and illustrated graphically under the effects of several parameters. Comparisons are made between the results of the two models with and without the two-temperature parameter, and for two different values of the hydrostatic initial stress. A comparison is also carried out between the considered variables as calculated from generalized thermoelasticity based on the DPL model and on the LS theory, in the absence and presence of the thermoelastic and thermoelectric coupling parameters.

  11. Estimation of Community Land Model parameters for an improved assessment of net carbon fluxes at European sites

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2017-03-01

    The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites about 600 km from the original sites. Latent variables (multipliers) were used to treat explicitly the uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and, in contrast to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality of fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.

  12. Effects of developmental variability on the dynamics and self-organization of cell populations

    NASA Astrophysics Data System (ADS)

    Prabhakara, Kaumudi H.; Gholami, Azam; Zykov, Vladimir S.; Bodenschatz, Eberhard

    2017-11-01

    We report experimental and theoretical results for spatiotemporal pattern formation in cell populations, where the parameters vary in space and time due to mechanisms intrinsic to the system, namely Dictyostelium discoideum (D.d.) in the starvation phase. We find that different patterns are formed when the populations are initialized at different developmental stages, or when populations at different initial developmental stages are mixed. The experimentally observed patterns can be understood with a modified Kessler-Levine model that takes into account the initial spatial heterogeneity of the cell populations and a developmental path introduced by us, i.e. the time dependence of the various biochemical parameters. The dynamics of the parameters agree with known biochemical studies. Most importantly, the modified model reproduces not only our results, but also the observations of an independent experiment published earlier. This shows that pattern formation can be used to understand and quantify the temporal evolution of the system parameters.

  13. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true-belief and false-belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why the task-specific parameters for initial gaze directions differ from those for choice predictions yet reflect second-order perspective taking.

  14. A preliminary study of crack initiation and growth at stress concentration sites

    NASA Technical Reports Server (NTRS)

    Dawicke, D. S.; Gallagher, J. P.; Hartman, G. A.; Rajendran, A. M.

    1982-01-01

    Crack initiation and propagation models for notches are examined. The Dowling crack initiation model and the El Haddad et al. crack propagation model were chosen for additional study. Existing data were used to make a preliminary evaluation of the crack propagation model. The results indicate that, for the crack sizes in the test, the elastic parameter K gave good correlation of the crack growth rate data. Additional testing, directed specifically toward the problem of small cracks initiating and propagating from notches, is necessary for a full evaluation of these initiation and propagation models.

  15. Application of positive-real functions in hyperstable discrete model-reference adaptive system design.

    NASA Technical Reports Server (NTRS)

    Karmarkar, J. S.

    1972-01-01

    Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.

  16. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
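
    The hybrid strategy, a global evolutionary search seeding a truncated-Newton refinement, can be sketched on a small multimodal test function. The GA below is deliberately crude, and the objective (Himmelblau's function) is illustrative, not the groundwater inverse problem:

```python
import numpy as np
from scipy.optimize import minimize

# Crude GA-style global search supplies the starting point for a
# truncated-Newton (TNC) local refinement, mirroring the hybrid idea.
def himmelblau(p):
    x, y = p
    return (x**2 + y - 11.0) ** 2 + (x + y**2 - 7.0) ** 2

rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (60, 2))          # initial random population
for _ in range(40):                            # selection + mutation loop
    fitness = np.array([himmelblau(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]    # keep the best third
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.3, (40, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmin([himmelblau(p) for p in pop])]
result = minimize(himmelblau, best, method="TNC")  # truncated-Newton refinement
print(result.x, result.fun)
```

The GA is slow but hard to trap in a local basin; the truncated-Newton step then converges quickly once started inside the right basin, which is the division of labor the paper exploits.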

  17. Statistics of initial density perturbations in heavy ion collisions and their fluid dynamic response

    NASA Astrophysics Data System (ADS)

    Floerchinger, Stefan; Wiedemann, Urs Achim

    2014-08-01

    An interesting opportunity to determine thermodynamic and transport properties in more detail is to identify generic statistical properties of initial density perturbations. Here we study event-by-event fluctuations in terms of correlation functions for two models that can be solved analytically. The first assumes Gaussian fluctuations around a distribution that is fixed by the collision geometry but leads to non-Gaussian features after averaging over the reaction plane orientation at non-zero impact parameter. In this context, we derive a three-parameter extension of the commonly used Bessel-Gaussian event-by-event distribution of harmonic flow coefficients. Secondly, we study a model of N independent point sources for which connected n-point correlation functions of initial perturbations scale like 1/N^(n−1). This scaling is violated for non-central collisions in a way that can be characterized by its impact parameter dependence. We discuss to what extent these are generic properties that can be expected to hold for any model of initial conditions, and how this can improve the fluid dynamical analysis of heavy ion collisions.
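
    The 1/N^(n−1) scaling can be illustrated numerically for the n = 2 case: for N independent sources, the connected fluctuations of the normalized density in a fixed bin scale like 1/N. The event counts and bin fraction below are arbitrary illustration choices:

```python
import numpy as np

# For N independent point sources, the event-by-event variance of the
# normalized count in a fixed bin is p(1-p)/N, i.e. it scales like 1/N
# (the n = 2 case of the 1/N^(n-1) scaling of connected correlators).
rng = np.random.default_rng(3)

def bin_fraction_variance(n_sources, events=20000, bin_frac=0.1):
    # Fraction of each event's sources landing in a fixed bin.
    hits = rng.binomial(n_sources, bin_frac, events) / n_sources
    return hits.var()

v_small = bin_fraction_variance(50)
v_large = bin_fraction_variance(200)
print(v_small / v_large)   # expected ratio ~ 200/50 = 4
```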

  18. Comment on "Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods" [J. Hydrol., 546, 437-449, 10.1016/j.jhydrol.2017.01.025]

    NASA Astrophysics Data System (ADS)

    Barati, Reza

    2017-07-01

    Perumal et al. (2017) compared the performances of the variable parameter McCarthy-Muskingum (VPMM) model of Perumal and Price (2013) and the nonlinear Muskingum (NLM) model of Gill (1978) using hypothetical inflow hydrographs in an artificial channel. As input, the first model needs the initial condition, the upstream boundary condition, Manning's roughness coefficient, the length of the routing reach, the cross-sections of the river reach, and the bed slope, while the latter requires the initial condition, the upstream boundary condition, and the hydrologic parameters (three parameters that can be calibrated using flood hydrographs of the upstream and downstream sections). The VPMM model was examined with available Manning's roughness values, whereas the NLM model was tested in both calibration and validation steps. As a final conclusion, Perumal et al. (2017) claimed that the NLM model should be retired from the literature on the Muskingum model. While the authors' intention is laudable, this comment examines some important issues in the subject matter of the original study.

  19. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.

  20. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help to understand the cause and characteristics of the parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as the strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted under different geochemical conditions to estimate these parameters. The experiments with low initial concentrations of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because those experiments cannot distinguish the relative adsorption affinities of the strong and weak sites. Therefore, experiments with high initial concentrations of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined.
    The experiments with high initial concentrations of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentrations help modelers turn a curse into a blessing. The impacts of data on uncertainty quantification and reduction are quantified using probability density functions of the model parameters obtained from Markov chain Monte Carlo simulation with the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
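
    The posterior sampling idea can be sketched with a plain Metropolis sampler (a much simpler relative of the DREAM algorithm). The inference problem below, the mean of Gaussian observations under a flat prior, is illustrative only, not the U(VI) adsorption model:

```python
import numpy as np

# Minimal Metropolis sampler showing how data constrain a posterior:
# we infer the mean of Gaussian observations with known unit variance
# under a flat prior. All numbers are illustrative.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 50)          # synthetic observations

def log_post(mu):
    # Flat prior, so log-posterior = log-likelihood up to a constant.
    return -0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.5)     # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop                        # accept
    chain.append(mu)

posterior = np.array(chain[5000:])       # discard burn-in
print(posterior.mean(), posterior.std())
```

With informative data the posterior standard deviation shrinks toward 1/sqrt(n_obs); a poorly constraining dataset leaves it wide, which is the "curse" the abstract describes.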

  1. Quantum Discord Preservation for Two Quantum-Correlated Qubits in Two Independent Reservoirs

    NASA Astrophysics Data System (ADS)

    Xu, Lan

    2018-03-01

    We investigate the dynamics of quantum discord using an exactly solvable model in which two qubits are coupled to independent thermal environments. Quantum discord is employed as a quantifier of non-classical correlations. By studying the quantum discord of a class of initial states, we find that the discord remains preserved for a finite time. The effects of the temperature, the initial-state parameter, the system-reservoir coupling constant and the temperature difference between the two independent reservoirs are also investigated. We find that the quantum nature is lost faster at high temperature; however, its lifetime can be extended by choosing a smaller system-reservoir coupling constant, a suitably larger initial-state parameter and a larger temperature difference.

  2. Comparative Analyses of Creep Models of a Solid Propellant

    NASA Astrophysics Data System (ADS)

    Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.

    2018-05-01

    Creep experiments on samples of a solid propellant under five different stresses were carried out at 293.15 K and 323.15 K. To describe the creep properties of this propellant, five viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid and exponential models. The model parameters are obtained for each stress level by nonlinear least-squares fitting, which allows the creep properties to be analyzed. The study shows that the four-parameter solid model best expresses the creep behaviour of the propellant samples. In contrast, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, while the modified four-parameter models are found to agree well with the acceleration stage of the creep process.
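As a sketch of the nonlinear least-squares fitting step, the snippet below fits a four-parameter fluid (Burgers-type) creep law to synthetic data with SciPy's `curve_fit`. The parameter values, stress level and noise are illustrative assumptions, not the propellant data from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Burgers (four-parameter fluid) creep strain under constant stress sigma0:
# instantaneous elasticity, viscous flow, and a delayed (Kelvin) element.
sigma0 = 1.0

def creep(t, E1, eta1, E2, eta2):
    return sigma0 * (1.0 / E1 + t / eta1 + (1.0 - np.exp(-E2 * t / eta2)) / E2)

t = np.linspace(0.0, 100.0, 200)
true = (5.0, 500.0, 2.0, 20.0)                 # assumed "true" parameters
rng = np.random.default_rng(1)
eps = creep(t, *true) + rng.normal(0.0, 1e-3, t.size)   # noisy strain data

# Nonlinear least-squares fit from a rough initial guess.
popt, _ = curve_fit(creep, t, eps, p0=(1.0, 100.0, 1.0, 10.0))
print(popt)
```

The same pattern applies to the three-parameter variants by dropping the corresponding term from the model function.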

  3. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE PAGES

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...

    2017-01-24

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  5. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of the fracture processes of semi-circular asphalt mixture specimens subjected to three-point bending. Calibrating the parameters of asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled as a quasi-continuum, with computational parameters obtained by averaging data for the components composing the material, i.e. asphalt, aggregate and air voids. The model directly captures the random nature of the material parameters and of the aggregate distribution in the specimens. Initial results of the analysis are presented.

  6. Space shuttle SRM plume expansion sensitivity analysis. [flow characteristics of exhaust gases from solid propellant rocket engines

    NASA Technical Reports Server (NTRS)

    Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented, and conclusions are drawn regarding the sensitivity of the initial plume expansion angle to each parameter investigated. The operating conditions varied parametrically were chamber pressure, nozzle inlet angle, nozzle throat radius-of-curvature ratio and propellant particle loading. The empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. The sensitivity of the initial plume expansion angle to the gas thermochemistry model and the local drag coefficient model assumptions was also determined.

  7. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model is proposed in which a particle swarm optimization algorithm solves the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence is set in turn as the initial condition, to determine which alternative yields the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the inlet and outlet of the Guanting reservoir (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy and that the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
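The forecasting machinery above builds on the basic GM(1,1) grey model, of which the NGBM(1,1) is a nonlinear (Bernoulli-exponent) extension. A minimal GM(1,1) sketch is shown below; the Bernoulli exponent, the PSO tuning and the alternative initial conditions studied in the paper are omitted, and the input series is invented for illustration:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a basic GM(1,1) grey model and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # first-order accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    # Least squares for the grey equation x0[k] = -a*z1[k] + b.
    A = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(A, x0[1:], rcond=None)
    # Time response: x1_hat(k) = (x0[0] - b/a) e^{-a k} + b/a, then difference.
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)
    return x0_hat[-steps:]

series = [2.0, 2.2, 2.42, 2.662, 2.9282]      # 10% growth per step (synthetic)
print(gm11_forecast(series))                  # next value near 3.22
```

For geometric data like this the fit is nearly exact; the NGBM(1,1) adds a power term on the background value to handle series that deviate from pure exponential growth.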

  8. A Modelling Study for Predicting Life of Downhole Tubes Considering Service Environmental Parameters and Stress

    PubMed Central

    Zhao, Tianliang; Liu, Zhiyong; Du, Cuiwei; Hu, Jianpeng; Li, Xiaogang

    2016-01-01

    A model was developed to predict the life of downhole tubes or casings, synthetically considering the effect of the service factors that influence the corrosion rate. Based on the corrosion mechanisms and corrosion processes of downhole tubes discussed here, a mathematical model was established. For downhole tubes, the influencing factors are the environmental parameters and the stress, which vary with service duration. The stress and the environmental parameters, including water content, partial pressures of H2S and CO2, pH value, total pressure and temperature, were considered to be time-dependent. Based on the model, the life-span of an L80 downhole tube in the Halfaya oilfield in Iraq was predicted. The results show that the life-span of the L80 downhole tube in Halfaya is 247 months (approximately 20 years) under an initial stress of 0.1 yield strength and 641 months (approximately 53 years) under no initial stress, indicating that an initial stress of 0.1 yield strength reduces the life-span by more than half. PMID:28773872

  9. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, the model parameters for initial gaze direction also indicated belief tracking. We discuss why the task-specific parameters for initial gaze direction differ from those for choice predictions yet reflect second-order perspective taking. PMID:27853440

  10. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating to uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (here, its modal parameters). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference (calibration) state can be used to improve the model prediction accuracy at a different structural state, e.g., for a damaged structure. The effect of prediction error bias on the uncertainty of the predicted values is also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate the parameters of the initial FE model as well as of the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from the vibration tests. Moreover, it is shown that including a prediction error bias in the updating process, instead of the commonly used zero-mean error functions, can significantly reduce the prediction uncertainties.

  11. Computational Modeling and Analysis of Insulin Induced Eukaryotic Translation Initiation

    PubMed Central

    Lequieu, Joshua; Chakrabarti, Anirikh; Nayak, Satyaprakash; Varner, Jeffrey D.

    2011-01-01

    Insulin, the primary hormone regulating the level of glucose in the bloodstream, modulates a variety of cellular and enzymatic processes in normal and diseased cells. Insulin signals are processed by a complex network of biochemical interactions which ultimately induce gene expression programs or other processes such as translation initiation. Surprisingly, despite the wealth of literature on insulin signaling, the relative importance of the components linking insulin with translation initiation remains unclear. We addressed this question by developing and interrogating a family of mathematical models of insulin-induced translation initiation. The insulin network was modeled using mass-action kinetics within an ordinary differential equation (ODE) framework. A family of model parameters was estimated, starting from an initial best-fit parameter set, using 24 experimental data sets taken from the literature. The residuals between the model simulations and each of the experimental constraints were simultaneously minimized using multiobjective optimization. Interrogation of the model population, using sensitivity and robustness analysis, identified an insulin-dependent switch that controlled translation initiation. Our analysis suggested that without insulin, a balance between the pro-initiation activity of the GTP-binding protein Rheb and the anti-initiation activity of PTEN controlled basal initiation. On the other hand, in the presence of insulin a combination of PI3K and Rheb activity controlled inducible initiation, where PI3K was critical only in the presence of insulin. Other well-known regulatory mechanisms governing insulin action, for example IRS-1 negative feedback, modulated the relative importance of PI3K and Rheb but did not fundamentally change the signal flow. PMID:22102801
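The mass-action ODE framework mentioned above can be sketched with a tiny hypothetical two-step cascade, S -> A -> B, integrated with SciPy. This is a toy stand-in for one branch of a signaling network, not the actual insulin model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order mass-action rate constants (illustrative values).
k1, k2 = 0.5, 0.2

def rhs(t, y):
    # dS/dt = -k1*S ; dA/dt = k1*S - k2*A ; dB/dt = k2*A
    S, A, B = y
    return [-k1 * S, k1 * S - k2 * A, k2 * A]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
S, A, B = sol.y[:, -1]
print(S + A + B)   # total mass is conserved along the trajectory
```

Parameter estimation then amounts to minimizing the residual between such trajectories and experimental time courses, which is where the multiobjective optimization of the paper comes in.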

  12. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult because the solutions can vary with the assumed initial velocity model. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source, to reduce errors from the ray path and the hypocenter depth. Artificial, error-free data were generated by computer using two-point ray tracing in a true model in which the velocity structure and hypocentral parameters were known, so the accuracy of the calculated results could be determined simply by comparing calculated and actual values. We examined the accuracy of the method for several cases by changing the numbers and thicknesses of the true and modeled layers. The computational results show that the method determines nearly exact hypocentral parameters without depending on the initial velocity model. Furthermore, accurate and nearly unique hypocentral parameters were obtained even when the number and thicknesses of the modeled layers differed from those of the true model. This method can therefore be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown, and it also provides basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
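The GA search for a source location can be illustrated with a minimal real-coded genetic algorithm on a toy problem: locating a 2-D epicenter in a constant-velocity medium from station arrival times. The geometry, velocity and GA settings below are invented for illustration and bear no relation to HYPO-71:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: homogeneous medium, constant velocity, 2-D epicenter.
v = 5.0                                            # km/s (assumed)
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])
source_true = np.array([12.0, 27.0])
t_obs = np.linalg.norm(stations - source_true, axis=1) / v

def misfit(xy):
    # Sum of squared travel-time residuals.
    t = np.linalg.norm(stations - xy, axis=1) / v
    return np.sum((t - t_obs) ** 2)

# Minimal real-coded GA: elitism, tournament selection, blend crossover, mutation.
pop = rng.uniform(0.0, 40.0, size=(60, 2))
for _ in range(80):
    fit = np.array([misfit(p) for p in pop])
    new = [pop[np.argmin(fit)]]                    # keep the best (elitism)
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if fit[i] < fit[j] else pop[j]  # tournament parent 1
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if fit[i] < fit[j] else pop[j]  # tournament parent 2
        w = rng.uniform(-0.25, 1.25)
        child = w * a + (1 - w) * b                # blend crossover
        child += rng.normal(0.0, 0.5, 2)           # mutation
        new.append(np.clip(child, 0.0, 40.0))
    pop = np.array(new)

best = pop[np.argmin([misfit(p) for p in pop])]
print(best)    # near source_true
```

Because the GA only evaluates the forward problem, swapping this straight-ray misfit for a two-point ray tracer (as in the paper) changes nothing in the search loop itself.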

  13. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of their greater specificity than DTI in relating the dMRI signal to the underlying cellular microstructure. A large range of these diffusion microstructure models has been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy for estimating its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges for the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run-time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step using simpler models from an earlier step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes, and makes available, standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and for combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Parameter identification of thermophilic anaerobic degradation of valerate.

    PubMed

    Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini

    2003-01-01

    The mathematical model considered for the decomposition of valerate contains three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial biomass concentrations. From a structural identifiability study, we concluded that simultaneous batch experiments with different initial conditions are necessary to estimate these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was performed by optimizing the sum of the multiple determination coefficients over all measured state variables and all experiments simultaneously. The estimated kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, confidence intervals, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, whose identifiability would require additional experiments. Kinetic parameter estimation methods are also discussed in this article.

  15. Initialization of a fractional order identification algorithm applied for Lithium-ion battery modeling in time domain

    NASA Astrophysics Data System (ADS)

    Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry

    2018-06-01

    This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery, with the diffusion phenomenon represented by a fractional-order method. The battery model is thus reformulated as a transfer function that can be identified through the Levenberg-Marquardt algorithm; proper initialization ensures the algorithm's convergence to the physical parameters. An initialization method is proposed that takes into account previously acquired information about the static and dynamic behaviour of the system. The method is validated on noisy voltage responses, and the precision of the final identification results is evaluated using a Monte-Carlo method.
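The Levenberg-Marquardt identification step can be sketched on a simplified integer-order Randles circuit (series resistance R0 plus a parallel R1-C1 branch, with the fractional diffusion term omitted), fit to a synthetic constant-current voltage response. All parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Voltage response of a simplified Randles circuit to a constant current I.
I = 1.0

def v_model(p, t):
    R0, R1, C1 = p
    return I * (R0 + R1 * (1.0 - np.exp(-t / (R1 * C1))))

t = np.linspace(0.0, 50.0, 200)
p_true = (0.05, 0.1, 100.0)                     # assumed "true" R0, R1, C1
rng = np.random.default_rng(3)
v_obs = v_model(p_true, t) + rng.normal(0.0, 1e-4, t.size)   # noisy data

def residuals(p):
    return v_model(p, t) - v_obs

# Levenberg-Marquardt from an informed initial guess; a poor guess
# (the paper's concern) can stall convergence or find a local minimum.
fit = least_squares(residuals, x0=[0.02, 0.2, 50.0], method='lm')
print(fit.x)
```

The paper's initialization method essentially supplies a good `x0` from the measured static and dynamic behaviour before this refinement step.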

  16. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    NASA Astrophysics Data System (ADS)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

    Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainty, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty is the rainfall forcing, which convective-scale meteorological ensemble prediction systems can handle for forecasting purposes. Other sources, however, are related to the hydrological modelling component of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and the initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step is to identify the parameters that most strongly influence FF simulations, assuming perfect precipitation. A sensitivity study is carried out, first in a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture make it possible to design an ensemble-based version of ISBA-TOP. The first results of this system for some real events are presented. The direct perspective of this work is to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system, yielding a complete HEPS for FF forecasting.

  17. Volcanic Ash Data Assimilation System for Atmospheric Transport Model

    NASA Astrophysics Data System (ADS)

    Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.

    2017-12-01

    The Japan Meteorological Agency (JMA) has two operations for volcanic ash forecasts, the Volcanic Ash Fall Forecast (VAFF) and the Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models that include advection, turbulent diffusion, gravitational settling and (wet/dry) deposition. The initial distribution of volcanic ash in the models is the most important yet most uncertain factor. In operations, the model of Suzuki (1983), with its many empirical assumptions, is adopted for the initial distribution, which adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations in order to improve the initial distribution for the atmospheric transport models. The system is based on the three-dimensional variational (3D-Var) data assimilation method. The analysis variables are the ash concentration and the size distribution parameters, which are mutually independent. Radar observations are expected to provide three-dimensional quantities such as the ash concentration and the parameters of the ash particle size distribution, while satellite observations provide two-dimensional parameters of ash clouds such as mass loading, cloud-top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction and apply it in the volcanic ash data assimilation system.
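The 3D-Var method mentioned above minimizes a cost function balancing background and observation misfits; for linear observation operators and Gaussian errors the minimizer has a closed form, sketched below on a tiny invented state vector (unrelated to the JMA system):

```python
import numpy as np

# Minimal 3D-Var analysis step. The cost function
#   J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx)
# is minimized in closed form by x_a = xb + K (y - H xb),
# with gain K = B H^T (H B H^T + R)^-1.
xb = np.array([1.0, 2.0, 3.0])                 # background state
B = np.diag([0.5, 0.5, 0.5])                   # background error covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])                # observe components 1 and 3
R = np.diag([0.1, 0.1])                        # observation error covariance
y = np.array([1.4, 2.6])                       # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa)    # approx [1.333, 2.0, 2.667]
```

Note how the unobserved second component is unchanged here because B is diagonal; in a real system, covariances in B spread observation information to neighbouring grid points.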

  18. The Canadian Hydrological Model (CHM): A multi-scale, variable-complexity hydrological model for cold regions

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort and increased uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolution where spatial variability is low and fine resolution where required. Model uncertainty is reduced by lessening the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach to unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open-source libraries and high-performance computing paradigms to provide a framework for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring-melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements while preserving the spatial heterogeneity of the predictions.

  19. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China: settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Several sensitivity experiments verify that the spatial and temporal variation tendencies of the estimated model parameters are robust and not affected by the model settings. The pattern of the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained by the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach their local maximum values near low-water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields suggest ways to improve the parameterization in cohesive sediment transport models.

  20. Sensitivity of geological, geochemical and hydrologic parameters in complex reactive transport systems for in-situ uranium bioremediation

    NASA Astrophysics Data System (ADS)

    Yang, G.; Maher, K.; Caers, J.

    2015-12-01

Groundwater contamination associated with remediated uranium mill tailings is a challenging environmental problem, particularly within the Colorado River Basin. To examine the effectiveness of in-situ bioremediation of U(VI), acetate injection has been proposed and tested at the Rifle pilot site. There have been several geologic modeling and simulated contaminant transport investigations to evaluate the potential outcomes of the process and identify crucial factors for successful uranium reduction. Ultimately, findings from these studies would contribute to accurate predictions of the efficacy of uranium reduction. However, all these previous studies have considered limited model complexities, either because of the concern that data are too sparse to resolve such complex systems or because some parameters are assumed to be less important. Such simplified initial modeling, however, limits the predictive power of the model. Moreover, previous studies have not yet focused on the spatial heterogeneity of the various modeling components and its impact on the spatial distribution of the immobilized uranium (U(IV)). In this study, we examine the impact of uncertainty in 21 parameters on model responses by means of the recently developed distance-based global sensitivity analysis (DGSA), which reveals the main effects and interactions of parameters of various types. The 21 parameters include, for example, the spatial variability of the initial uranium concentration, the mean hydraulic conductivity, and the variogram structure of hydraulic conductivity. DGSA allows for studying multivariate model responses based on spatial and non-spatial model parameters. When calculating the distances between model responses, in addition to the overall uranium reduction efficacy, we also considered the spatial profiles of the immobilized uranium concentration as a target response. Results show that the mean hydraulic conductivity and the mineral reaction rate are the two most sensitive parameters with regard to overall uranium reduction. In terms of the spatial distribution of immobilized uranium, however, the initial uranium concentration and the spatial uncertainty in hydraulic conductivity also become important. These analyses serve as a first step toward predictive modeling of complex uranium transport and reaction systems.
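The DGSA workflow described above (classify model runs by the distance between their responses, then compare the class-conditional parameter distributions against the prior) can be sketched in a few lines. This is an illustrative toy, not the authors' code: a scalar response and quantile binning stand in for a full response distance matrix with clustering, and the function name is invented.

```python
import numpy as np

def dgsa_sensitivity(params, responses, n_classes=2):
    """Toy distance-based global sensitivity analysis (DGSA).

    params:    (n_runs, n_params) sampled parameter values
    responses: (n_runs,) scalar model responses (a full DGSA would use
               distances between multivariate/spatial responses)
    Returns one score per parameter: the largest mean |CDF difference|
    between any class-conditional parameter distribution and the prior.
    """
    # Classify runs by response; quantile binning stands in for
    # clustering on a response distance matrix.
    edges = np.quantile(responses, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, responses, side="right") - 1,
                     0, n_classes - 1)
    n_params = params.shape[1]
    grid = np.linspace(params.min(axis=0), params.max(axis=0), 50)  # (50, n_params)
    scores = np.zeros(n_params)
    for j in range(n_params):
        g = grid[:, j][:, None]                                   # (50, 1)
        prior_cdf = np.mean(params[:, j][None, :] <= g, axis=1)   # prior CDF on grid
        for c in range(n_classes):
            sel = params[labels == c, j]
            class_cdf = np.mean(sel[None, :] <= g, axis=1)        # class CDF on grid
            scores[j] = max(scores[j], np.abs(class_cdf - prior_cdf).mean())
    return scores
```

A parameter whose conditional distributions barely move across response classes (score near zero) is insensitive, which is exactly the signal used to rank the 21 parameters above.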

  1. Evaluating Uncertainty of Runoff Simulation using SWAT model of the Feilaixia Watershed in China Based on the GLUE Method

    NASA Astrophysics Data System (ADS)

    Chen, X.; Huang, G.

    2017-12-01

In recent years, distributed hydrological models have been widely used in storm water management, water resources protection, and related applications, so evaluating model uncertainty reasonably and efficiently has become a topic of considerable interest. In this paper, the Soil and Water Assessment Tool (SWAT) model is constructed for the study area of China's Feilaixia watershed, and the uncertainty of the runoff simulation is analyzed in depth with the GLUE method. Focusing on the initial parameter range required by GLUE, the influence of different initial parameter ranges on model uncertainty is studied. Two sets of parameter ranges are chosen as objects of study: the first (range 1) is recommended by SWAT-CUP, and the second (range 2) is calibrated by SUFI-2. The results showed that, for the same number of simulations (10,000), the overall uncertainty obtained with range 2 is less than that with range 1. Specifically, the number of "behavioral" parameter sets is 10,000 for range 2 and 4,448 for range 1. In the calibration and the validation, the ratio of P-factor to R-factor is 1.387 and 1.391 for range 1, and 1.405 and 1.462 for range 2, respectively. In addition, the simulation results for range 2 are better, with NS and R2 slightly higher than for range 1. It can therefore be concluded that using the parameter range calibrated by SUFI-2 as the initial parameter range for GLUE is an effective way to capture and evaluate simulation uncertainty.
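The core GLUE loop (sample parameter sets from the initial range, score each simulation with a likelihood measure, keep only the "behavioral" sets above a threshold, and weight their predictions) can be sketched as follows. This is a minimal illustration, not SWAT-CUP code; the function name and the choice of Nash-Sutcliffe efficiency as the likelihood are assumptions.

```python
import numpy as np

def glue(simulate, observed, prior_samples, threshold=0.5):
    """Minimal GLUE sketch.

    simulate:      maps a parameter vector to a simulated series
    observed:      (n_times,) observed series
    prior_samples: (n_sets, n_params) parameter sets from the initial range
    Keeps 'behavioral' sets with Nash-Sutcliffe efficiency (NS) above
    the threshold and returns their NS-weighted mean prediction.
    """
    obs_var = np.sum((observed - observed.mean()) ** 2)
    sims, ns = [], []
    for theta in prior_samples:
        q = simulate(theta)
        sims.append(q)
        ns.append(1.0 - np.sum((observed - q) ** 2) / obs_var)
    sims, ns = np.array(sims), np.array(ns)
    behavioral = ns > threshold
    if not behavioral.any():
        raise ValueError("no behavioral parameter sets; widen the range")
    w = ns[behavioral] / ns[behavioral].sum()        # likelihood weights
    mean_pred = (w[:, None] * sims[behavioral]).sum(axis=0)
    return behavioral, mean_pred
```

The initial parameter range enters through `prior_samples`, which is exactly the lever the study above varies: a narrower, pre-calibrated range concentrates the samples and changes how many sets clear the behavioral threshold.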

  2. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter, and the composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We apply this local sensitivity coefficient method to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs serving as initial conditions is investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and for guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
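The LSC idea (percent change in an output per percent perturbation of one input, with composite sensitivity obtained by summing individual coefficients) can be illustrated with a one-at-a-time finite-difference sketch. The function below is an illustration under that reading of the definition, not the paper's implementation, and its name is invented.

```python
import numpy as np

def local_sensitivity_coefficients(model, x0, rel_step=0.01):
    """One-at-a-time local sensitivity coefficients.

    Perturb each input by a fixed fraction (rel_step) and record the
    fractional response of the output, i.e. percent change in output
    per percent change in input. The composite sensitivity of a subset
    of inputs is then the sum of the individual |LSC| values.
    """
    y0 = model(x0)
    lsc = np.zeros(len(x0))
    for i in range(len(x0)):
        x = x0.astype(float).copy()
        x[i] *= (1.0 + rel_step)                    # +1% perturbation
        lsc[i] = ((model(x) - y0) / y0) / rel_step  # % out per % in
    return lsc
```

Because only one perturbed run per input is needed, a single set of simulations ranks all 348 inputs, which is the cost advantage the abstract emphasizes.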

  3. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  4. FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems

    NASA Astrophysics Data System (ADS)

    Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.

    2016-12-01

Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional-calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models of anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions for pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs from pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous-injection laboratory experiments using natural organic matter and 2) pulse-injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
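FracFit itself fits fractional-model solutions by WNLS; as a minimal stand-in, the sketch below shows why weighting matters for power-law tails, fitting c(t) ≈ A·t^(−α) by weighted least squares in log space (where this particular model becomes linear), with weights that emphasize late-time samples. The function name and the tail-heavy weighting choice are illustrative assumptions, not FracFit's API.

```python
import numpy as np

def fit_power_law_tail(t, c, weights):
    """Weighted least-squares fit of a power-law tail c(t) ~ A * t**(-alpha).

    Linearized in log space: log c = log A - alpha * log t. Tail-heavy
    weights keep the small late-time concentrations from being swamped
    by the early-time peak, mimicking the role of optimal BTC weights.
    Returns (A, alpha).
    """
    X = np.column_stack([np.ones_like(t), np.log(t)])
    sw = np.sqrt(weights)                       # scale rows by sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * np.log(c), rcond=None)
    return np.exp(beta[0]), -beta[1]
```

A genuinely nonlinear model (e.g. a full sFADE solution) would need an iterative WNLS solver, but the weighting principle, upweighting the tail so the power-law exponent is well constrained, is the same.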

  5. Individual Differences in a Positional Learning Task across the Adult Lifespan

    ERIC Educational Resources Information Center

    Rast, Philippe; Zimprich, Daniel

    2010-01-01

    This study aimed at modeling individual and average non-linear trajectories of positional learning using a structured latent growth curve approach. The model is based on an exponential function which encompasses three parameters: Initial performance, learning rate, and asymptotic performance. These learning parameters were compared in a positional…

  6. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose input parameters that are accurate and preserve the physics being simulated. To effectively simulate real-world processes, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat-flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal-conductivity input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover 6 initially unknown thermal-conductivity parameters to within 2% of their known values. 
Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.

  7. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
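The two extra steps, sensitivity screening and initial-value selection, can be sketched with a one-at-a-time sweep over each parameter's range; the downhill simplex stage would then start from the returned values. This is an illustrative reading of the method, not the authors' code, and all names are invented.

```python
import numpy as np

def screen_and_initialize(metric, defaults, bounds, n_grid=5):
    """Sketch of steps 1-2 of a 'three-step' tuning workflow.

    1) One-at-a-time screening: how much the evaluation metric moves
       when each parameter sweeps its range, others held at defaults.
    2) For each parameter, the grid value with the best (lowest)
       metric becomes its initial value for the downhill simplex step.
    Returns (parameter indices sorted most-sensitive-first,
             suggested initial values).
    """
    n = len(defaults)
    spread = np.zeros(n)
    best_vals = np.array(defaults, dtype=float)
    for i in range(n):
        vals = np.linspace(bounds[i][0], bounds[i][1], n_grid)
        scores = []
        for v in vals:
            p = np.array(defaults, dtype=float)
            p[i] = v                           # perturb one parameter
            scores.append(metric(p))
        scores = np.array(scores)
        spread[i] = scores.max() - scores.min()  # sensitivity proxy
        best_vals[i] = vals[np.argmin(scores)]   # best initial value
    order = np.argsort(spread)[::-1]             # most sensitive first
    return order, best_vals
```

Restricting the simplex search to the top of `order`, starting from `best_vals`, is what cuts the computational cost: fewer tuned parameters and a better starting point both shrink the number of expensive GCM evaluations.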

  8. Assessment of initial soil moisture conditions for event-based rainfall-runoff modelling

    NASA Astrophysics Data System (ADS)

    Tramblay, Yves; Bouvier, Christophe; Martin, Claude; Didon-Lescot, Jean-François; Todorovik, Dragana; Domergue, Jean-Marc

    2010-06-01

Flash floods are among the most destructive natural hazards that occur in the Mediterranean region. Rainfall-runoff models can be very useful for flash flood forecasting and prediction. Event-based models are very popular for operational purposes, but there is a need to reduce the uncertainties related to the estimation of initial moisture conditions prior to a flood event. This paper compares several soil moisture indicators: local Time Domain Reflectometry (TDR) measurements of soil moisture, modelled soil moisture from the Interaction-Sol-Biosphère-Atmosphère (ISBA) component of the SIM model (Météo-France), antecedent precipitation, and base flow. A modelling approach based on the Soil Conservation Service-Curve Number method (SCS-CN) is used to simulate the flood events in a small headwater catchment in the Cevennes region (France). The model involves two parameters: one for the runoff production, S, and one for the routing component, K. The S parameter can be interpreted as the maximal water retention capacity, and acts as the initial condition of the model, depending on the antecedent moisture conditions. The model was calibrated on a 20-flood sample and led to a median Nash value of 0.9. The local TDR measurements in the deepest layers of soil (80-140 cm) were found to be the best predictors for the S parameter. TDR measurements averaged over the whole soil profile, outputs of the SIM model, and the logarithm of base flow also proved to be good predictors, whereas antecedent precipitation was found to be less efficient. The good correlations observed between the TDR predictors and the calibrated S values indicate that monitoring soil moisture could help set the initial conditions for simplified event-based models in small basins.
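The production component named above follows the standard SCS-CN relation, in which event runoff depends on rainfall depth and the retention parameter S that encodes the antecedent moisture state. A minimal sketch, assuming the usual initial abstraction Ia = 0.2·S (the function name is mine):

```python
def scs_cn_runoff(p_mm, s_mm, ia_ratio=0.2):
    """SCS Curve Number direct runoff depth (mm).

    Q = (P - Ia)^2 / (P - Ia + S), with initial abstraction
    Ia = ia_ratio * S. S (mm) is the maximal retention capacity,
    the event-based initial condition discussed above; no runoff
    is produced until rainfall exceeds Ia.
    """
    ia = ia_ratio * s_mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s_mm)
```

Lowering S (wetter antecedent conditions) increases simulated runoff for the same storm, which is why the soil-moisture indicators above are useful predictors for setting the model's initial condition.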

  9. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  10. Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.

    2014-06-01

We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, which were collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius, and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick and high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values of the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates for the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters. 
The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible by the science community at NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
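The four-parameter trapezoidal model described above (epoch, depth, duration, ingress time) is simple enough to sketch directly. The evaluation below illustrates the model shape only; it is not pipeline code, the function name is mine, and it assumes a nonzero ingress time.

```python
import numpy as np

def trapezoid_transit(t, epoch, depth, duration, ingress):
    """Trapezoidal transit light-curve model with the four parameters
    named above: central transit time (epoch), fractional depth, total
    duration, and ingress/egress time (all times in the same units).
    Returns relative flux (1.0 out of transit). Assumes ingress > 0."""
    dt = np.abs(t - epoch)
    half = duration / 2.0
    flux = np.ones_like(t)
    in_flat = dt <= half - ingress                   # flat bottom
    in_ramp = (dt > half - ingress) & (dt < half)    # ingress/egress ramps
    flux[in_flat] = 1.0 - depth
    flux[in_ramp] = 1.0 - depth * (half - dt[in_ramp]) / ingress
    return flux
```

In the pipeline, repeated Levenberg-Marquardt fits of this shape against the light curve yield the minimum chi-square parameters, which are then converted into initial values for the five-parameter physical transit model.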

  11. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.

  12. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  13. Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi

    2017-11-01

In this paper we design the following two-step scheme to estimate the model parameter ω0 of a quantum system: first, we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second, we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008.

  14. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  15. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  16. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
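The random-walk parameter model mentioned above can be illustrated in scalar form: the parameter filter propagates the parameter with additive process noise and corrects it from each measurement. The sketch below keeps only this parameter-filter half of the dual filter, for a hypothetical time-varying gain b_k in y_k = b_k·u_k + noise; it is not the paper's implementation, and all names are invented.

```python
import numpy as np

def random_walk_parameter_filter(u, y, q=1e-4, r=0.1, b0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk parameter.

    Model: b_k = b_{k-1} + w_k (process variance q, the random walk),
           y_k = b_k * u_k + v_k (measurement variance r).
    Returns the filtered estimate of b at every step. In a dual EKF,
    such a parameter filter runs alongside a state filter; here the
    'state' is trivial so only the parameter update remains.
    """
    b, p = b0, p0
    est = []
    for uk, yk in zip(u, y):
        p = p + q                       # random-walk prediction
        s = uk * p * uk + r             # innovation variance
        k = p * uk / s                  # Kalman gain
        b = b + k * (yk - b * uk)       # measurement update
        p = (1.0 - k * uk) * p
        est.append(b)
    return np.array(est)
```

The ratio q/r plays the tuning role discussed above: a larger process variance q lets the estimate track faster parameter variations at the cost of noisier estimates, which is the trade-off behind the covariance-tuning guidelines.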

  17. A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry

    NASA Technical Reports Server (NTRS)

    Davis, Curt H.

    1992-01-01

    An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.

  18. Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model

    NASA Astrophysics Data System (ADS)

    Tjiputra, J.; Winguth, A.; Polzin, D.

    2004-12-01

The misfit between a numerical ocean model and observations can be reduced using data assimilation, by optimizing the model parameter values with an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity, or gradient, of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the degree of non-linearity of the forward model. The ITE successfully recovered most of the perturbed parameters to their original values and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in the cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. phytoplankton, zooplankton, nutrients, particulate organic carbon, and dissolved organic carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.

  19. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can differ, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term-varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  20. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e. SNOW17) and rainfall-runoff model (i.e. SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to verify potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC and DREAM (Differential Evolution Adaptive Metropolis)) as well as for a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e. Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e. reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameter sets and the DA framework, as well as an assessment of the impact of the initial conditions used for streamflow forecasts for the NFARB.

  1. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft-dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.

  2. Effects of random initial conditions on the dynamical scaling behaviors of a fixed-energy Manna sandpile model in one dimension

    NASA Astrophysics Data System (ADS)

    Kwon, Sungchul; Kim, Jin Min

    2015-01-01

    For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ, random initial distributions of particles lead to a domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of the order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
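The fixed-energy Manna dynamics summarized above can be sketched in a few lines (a minimal 1D implementation with parallel updates and periodic boundaries; the lattice size, density, and the quoted critical density ρc ≈ 0.89 are illustrative assumptions taken from the literature, not results of this record):

```python
import numpy as np

rng = np.random.default_rng(0)

def manna_step(z, rng):
    """One parallel update of the 1D fixed-energy Manna model on a ring:
    every active site (z >= 2) topples, sending two particles, each to
    an independently chosen nearest neighbour."""
    L = len(z)
    active = np.flatnonzero(z >= 2)
    z = z.copy()
    for i in active:
        z[i] -= 2
        for _ in range(2):
            z[(i + rng.choice((-1, 1))) % L] += 1
    return z, len(active) / L

# Random initial condition: N particles dropped on random sites, so the
# total density rho = N/L is conserved by construction (fixed energy).
L, rho = 200, 1.0              # supercritical: rho above rho_c (about 0.89)
N = int(rho * L)
z = np.bincount(rng.integers(0, L, size=N), minlength=L)

rho_a = 0.0                    # order parameter: density of active sites
for _ in range(500):
    z, rho_a = manna_step(z, rng)

print(int(z.sum()))            # 200: the particle number is conserved
```

Tracking `rho_a` over time from many random initial drops is how the decay of the order parameter described in the abstract would be measured.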

  3. Optimization of Acid Black 172 decolorization by electrocoagulation using response surface methodology

    PubMed Central

    2012-01-01

    This paper utilizes a statistical approach, response surface methodology, to determine the optimum conditions for Acid Black 172 dye removal from aqueous solution by electrocoagulation. The experimental parameters investigated were initial pH: 4–10; initial dye concentration: 0–600 mg/L; applied current: 0.5–3.5 A; and reaction time: 3–15 min. These parameters were varied at five levels according to the central composite design to evaluate their effects on decolorization through analysis of variance. The high R² value of 94.48% indicates strong agreement between the experimental and predicted values and shows that the second-order regression model is acceptable for Acid Black 172 dye removal efficiency. It was also found that some interactions and squared terms influenced the electrocoagulation performance as well as the selected parameters. An optimum dye removal efficiency of 90.4% was observed experimentally at an initial pH of 7, initial dye concentration of 300 mg/L, applied current of 2 A and reaction time of 9.16 min, which is close to the model-predicted result (90%). PMID:23369574
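The second-order regression at the heart of response surface methodology can be sketched as an ordinary least-squares fit (synthetic two-factor data standing in for the actual pH/concentration/current/time measurements, which are not reproduced here):

```python
import numpy as np

# Second-order response-surface fit by ordinary least squares (synthetic
# two-factor data in coded units, NOT the Acid Black 172 measurements):
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 30)          # e.g. coded initial pH
x2 = rng.uniform(-1, 1, 30)          # e.g. coded applied current
y = 90 - 5 * x1**2 - 3 * x2**2 + 2 * x1 * x2 + rng.normal(0, 0.1, 30)

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta                     # R^2 measures model adequacy, as in RSM
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(r2 > 0.95)                     # True: the quadratic surface fits well
```

In a real central composite design the factor levels would come from the design matrix (five coded levels per factor) rather than random sampling, but the regression step is the same.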

  4. Driven neutron star collapse: Type I critical phenomena and the initial black hole mass distribution

    NASA Astrophysics Data System (ADS)

    Noble, Scott C.; Choptuik, Matthew W.

    2016-01-01

    We study the general relativistic collapse of neutron star (NS) models in spherical symmetry. Our initially stable models are driven to collapse by the addition of one of two things: an initially ingoing velocity profile, or a shell of minimally coupled, massless scalar field that falls onto the star. Tolman-Oppenheimer-Volkoff (TOV) solutions with an initially isentropic, gamma-law equation of state serve as our NS models. The initial values of the velocity profile's amplitude and the star's central density span a parameter space which we have surveyed extensively and which we find provides a rich picture of the possible end states of NS collapse. This parameter space survey elucidates the boundary between Type I and Type II critical behavior in perfect fluids which coincides, on the subcritical side, with the boundary between dispersed and bound end states. For our particular model, initial velocity amplitudes greater than 0.3 c are needed to probe the regime where arbitrarily small black holes can form. In addition, we investigate Type I behavior in our system by varying the initial amplitude of the initially imploding scalar field. In this case we find that the Type I critical solutions resemble TOV solutions on the 1-mode unstable branch of equilibrium solutions, and that the critical solutions' frequencies agree well with the fundamental mode frequencies of the unstable equilibria. Additionally, the critical solution's scaling exponent is shown to be well approximated by a linear function of the initial star's central density.

  5. Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim

    2018-03-01

    A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on a modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn & εp), the initial State-Of-Charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin & Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also adequately describes the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.

  6. Application of the Aquifer Impact Model to support decisions at a CO2 sequestration site: Modeling and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Diana Holford; Locke II, Randall A.; Keating, Elizabeth

    The National Risk Assessment Partnership (NRAP) has developed a suite of tools to assess and manage risk at CO2 sequestration sites (1). The NRAP tool suite includes the Aquifer Impact Model (AIM), based on reduced order models developed using site-specific data from two aquifers (alluvium and carbonate). The models accept aquifer parameters as a range of variable inputs so they may have broader applicability. Guidelines have been developed for determining the aquifer types for which the ROMs should be applicable. This paper considers the applicability of the aquifer models in AIM to predicting the impact of CO2 or brine leakage were it to occur at the Illinois Basin Decatur Project (IBDP). Based on the results of the sensitivity analysis, the hydraulic parameters and leakage source term magnitude are more sensitive than clay fraction or cation exchange capacity. Sand permeability was the only hydraulic parameter measured at the IBDP site. More information on the other hydraulic parameters, such as sand fraction and sand/clay correlation lengths, could reduce uncertainty in risk estimates. Some non-adjustable parameters, such as the initial pH and TDS and the pH no-impact threshold, are significantly different for the ROM than for the observations at the IBDP site. The reduced order model could be made more useful to a wider range of sites if the initial conditions and no-impact threshold values were adjustable parameters.

  7. Controlled Release Drug Delivery via Polymeric Microspheres: A Neat Application of the Spherical Diffusion Equation

    ERIC Educational Resources Information Center

    Ormerod, C. S.; Nelson, M.

    2017-01-01

    Various applied mathematics undergraduate skills are demonstrated via an adaptation of Crank's axisymmetric spherical diffusion model. By the introduction of a one-parameter Heaviside initial condition, the pharmaceutically problematic initial mass flux is attenuated. Quantities germane to the pharmaceutical industry are examined and the model is…

  8. Data assimilation method based on the constraints of confidence region

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
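A single perturbed-observation EnKF analysis step with multiplicative covariance inflation, the quantity the proposed EnCR method estimates adaptively, can be sketched as follows (toy three-variable state; here the inflation factor is simply prescribed rather than estimated):

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(ens, y_obs, H, R, inflation=1.2):
    """Perturbed-observation EnKF update of an (n_state, n_ens) ensemble,
    with multiplicative inflation of the forecast spread applied first."""
    mean = ens.mean(axis=1, keepdims=True)
    ens = mean + np.sqrt(inflation) * (ens - mean)   # inflate the spread
    A = ens - ens.mean(axis=1, keepdims=True)
    n_ens = ens.shape[1]
    P = A @ A.T / (n_ens - 1)                        # forecast error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T       # perturbed observations
    return ens + K @ (y_pert - H @ ens)

# Toy three-variable state; only the first component is observed.
truth = np.array([1.0, -0.5, 2.0])
ens = truth[:, None] + rng.normal(0, 1.0, (3, 50))   # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
y_obs = H @ truth + rng.normal(0, np.sqrt(0.1), 1)

ens_a = enkf_analysis(ens, y_obs, H, R)
print(np.var(ens_a[0]) < np.var(ens[0]))   # True: analysis tightens the
                                           # observed component
```

Inflating by too little invites filter divergence, inflating by too much wastes the observations; EnCR's contribution is choosing this factor from a confidence region rather than fixing it by hand.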

  9. A Comparison of the One-, the Modified Three-, and the Three-Parameter Item Response Theory Models in the Test Development Item Selection Process.

    ERIC Educational Resources Information Center

    Eignor, Daniel R.; Douglass, James B.

    This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…

  10. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    NASA Astrophysics Data System (ADS)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring continuity of wireless communication services and keeping transitions smooth on mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the addition and removal of a Base Station from the active set is determined by initiation triggers, one of which is based on received signal strength. In this paper we observe the influence of the parameters of large-scale radio propagation models on the performance of mobile communications. The parameters observed for characterizing the performance of the specified mobile system are Drop Call, Radio Link Degradation Rate and Average Size of Active Set (AS). The simulation results show that increasing the altitude of the Base Station (BS) and Mobile Station (MS) antennas improves the received signal power level, thereby improving Radio Link quality, increasing the average size of the Active Set and reducing the average Drop Call rate. It was also found that Hata's propagation model contributed significantly more to improvements in system performance parameters than Okumura's and Lee's propagation models.

  11. Toward an improvement over Kerner-Klenov-Wolf three-phase cellular automaton model.

    PubMed

    Jiang, Rui; Wu, Qing-Song

    2005-12-01

    The Kerner-Klenov-Wolf (KKW) three-phase cellular automaton model produces a nonrealistic velocity of the upstream front of the widening synchronized flow pattern, which separates synchronized flow downstream from free flow upstream. This paper presents an improved model that combines the initial KKW model with a modified Nagel-Schreckenberg (MNS) model. In the improved KKW model, a parameter is introduced to determine whether a vehicle moves according to the MNS model or the initial KKW model. The improved KKW model not only reproduces the empirical observations as the initial KKW model does, but also overcomes the nonrealistic velocity problem. The mechanism of the improvement is discussed.
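The Nagel-Schreckenberg rules that the improved model builds on can be sketched directly (a minimal standard NaSch implementation, not the modified-NS variant or the three-phase KKW rules of the paper; lattice size, density, and slowdown probability are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def nasch_step(pos, vel, L, v_max=5, p_slow=0.3):
    """One parallel Nagel-Schreckenberg update on a ring of length L."""
    n = len(pos)
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (pos[(np.arange(n) + 1) % n] - pos - 1) % L   # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                     # 1. acceleration
    vel = np.minimum(vel, gaps)                          # 2. safety braking
    slow = rng.random(n) < p_slow                        # 3. random slowdown
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)
    return (pos + vel) % L, vel                          # 4. movement

L, n_cars = 100, 30
pos = np.sort(rng.choice(L, n_cars, replace=False))      # distinct cells
vel = np.zeros(n_cars, dtype=int)

for _ in range(200):
    pos, vel = nasch_step(pos, vel, L)

flow = vel.mean() * n_cars / L          # mean flux, vehicles per cell per step
print(len(np.unique(pos)) == n_cars)    # True: braking prevents collisions
```

The KKW-style three-phase variants replace step 2 with a synchronization-distance rule; the parameter introduced by the paper decides, per vehicle, which rule set applies.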

  12. Fracture characterization by hybrid enumerative search and Gauss-Newton least-squares inversion methods

    NASA Astrophysics Data System (ADS)

    Alkharji, Mohammed N.

    Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting model for the parameters is a key factor in the reliability of the inversion; most methods assume that the starting parameters are close to the solution to avoid inaccurate local-minimum solutions, but prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid enumerative and Gauss-Newton method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups: the first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first-group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model whose parameters yield the smallest least-squares residual corresponds to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties.
The results showed that the hybrid algorithm successfully predicted the fracture parameterization, geometry, and fluid content within the modeled reservoir. The method was also applied to an elastic tensor extracted from the Weyburn field in Saskatchewan, Canada. The solution suggested no fractures but only a VTI system caused by shale layering in the targeted reservoir; this interpretation is supported by other Weyburn field data.
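The hybrid enumerative/Gauss-Newton strategy can be illustrated with a generic two-parameter forward model (a toy exponential, not the T-Matrix or Linear Slip fracture models): enumerate the parameter with no prior information over its admissible range, refine the second parameter by Gauss-Newton for each trial value, and keep the pair with the smallest residual.

```python
import numpy as np

# Toy hybrid search: enumerate parameter `a` (no prior information) over
# a predefined range; for each trial, solve for parameter `b` (prior
# guess available) by 1-D Gauss-Newton; keep the pair with the smallest
# least-squares residual. The forward model stands in for the
# elastic-tensor computation.

def forward(a, b, t):
    return a * np.exp(-b * t)

def gauss_newton_b(a, b0, t, obs, n_iter=20):
    """Refine b for fixed a by Gauss-Newton on the residual r = f - obs."""
    b = b0
    for _ in range(n_iter):
        r = forward(a, b, t) - obs
        J = -a * t * np.exp(-b * t)            # dr/db
        b -= np.sum(J * r) / np.sum(J * J)     # Gauss-Newton step
        b = max(b, 1e-6)                       # keep b in the physical range
    return b

t = np.linspace(0.0, 4.0, 40)
a_true, b_true = 2.5, 0.8
obs = forward(a_true, b_true, t)               # noise-free twin data

best = None
for a in np.linspace(0.5, 5.0, 10):            # enumerative search over a
    b = gauss_newton_b(a, b0=1.0, t=t, obs=obs)
    res = np.sum((forward(a, b, t) - obs) ** 2)
    if best is None or res < best[0]:
        best = (res, a, b)

print(round(best[1], 2), round(best[2], 2))    # recovers (2.5, 0.8)
```

The enumeration removes the need for a good starting guess for `a`, which is exactly the situation the abstract describes for the fracture parameters with no prior information.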

  13. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and reduced to nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least-squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates geomagnetic sensor error better than the two-step estimation method.
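A much-simplified version of such a calibration, hard-iron offset plus per-axis scale only, fitted by linear least squares without the HHT pre-processing or the full nine-parameter Newton/total-least-squares iteration of the paper, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic truth: unit field directions, distorted by a diagonal
# soft-iron scale and a hard-iron offset (all values illustrative).
n = 500
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)    # points on the unit sphere
scale = np.array([1.2, 0.9, 1.1])
offset = np.array([0.3, -0.2, 0.1])
m = v * scale + offset                           # distorted measurements

# The distorted readings lie on an axis-aligned ellipsoid
#   a x^2 + b y^2 + c z^2 + d x + e y + f z = 1,
# whose six coefficients follow from linear least squares.
x, y, z = m.T
D = np.column_stack([x**2, y**2, z**2, x, y, z])
coef, *_ = np.linalg.lstsq(D, np.ones(n), rcond=None)
a, b, c, d, e, f = coef

offset_est = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
print(np.round(offset_est, 2))   # recovers the hard-iron bias (0.3, -0.2, 0.1)
```

The full nine-parameter model of the paper additionally handles cross-axis misalignment, which makes the problem nonlinear and motivates the iterative Newton/least-squares treatment.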

  14. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  15. GENERAL: Entanglement sudden death induced by the Dzialoshinskii-Moriya interaction

    NASA Astrophysics Data System (ADS)

    Zeng, Hong-Fang; Shao, Bin; Yang, Lin-Guang; Li, Jian; Zou, Jian

    2009-08-01

    In this paper, we study the entanglement dynamics of two-spin Heisenberg XYZ model with the Dzialoshinskii-Moriya (DM) interaction. The system is initially prepared in the Werner state. The effects of purity of the initial state and DM coupling parameter on the evolution of entanglement are investigated. The necessary and sufficient condition for the appearance of the entanglement sudden death (ESD) phenomenon has been deduced. The result shows that the ESD always occurs if the initial state is sufficiently impure for the given coupling parameter or the DM interaction is sufficiently strong for the given initial state. Moreover, the critical values of them are calculated.

  16. 3D glasma initial state for relativistic heavy ion collisions

    DOE PAGES

    Schenke, Björn; Schlichting, Sören

    2016-10-13

    We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.

  17. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation (OSS) experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be due to reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
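The simultaneous state and parameter estimation idea can be sketched by state augmentation on a toy scalar AR(1) model (a stand-in for the supercell storm; a perturbed-observation EnKF is used here rather than the paper's EnSRF, and all noise levels are illustrative):

```python
import numpy as np

# Sketch of simultaneous state and parameter estimation by state
# augmentation: the parameter `a` is appended to the state vector and is
# updated through its ensemble cross-covariance with the observed
# variable. Toy AR(1) model, NOT the storm-scale system of the paper.

rng = np.random.default_rng(5)

a_true, q, r = 0.9, 0.5, 0.1        # true parameter, process/obs noise (std)
n_ens, n_steps = 100, 300

x_true = 0.0
x_ens = rng.normal(0.0, 1.0, n_ens)
a_ens = rng.normal(0.5, 0.2, n_ens)  # biased initial guess for the parameter

for _ in range(n_steps):
    # Truth and observation
    x_true = a_true * x_true + rng.normal(0, q)
    y = x_true + rng.normal(0, r)
    # Forecast: each member propagates with its own parameter value
    x_ens = a_ens * x_ens + rng.normal(0, q, n_ens)
    # Analysis on the augmented vector (x, a); only x is observed
    dx = x_ens - x_ens.mean()
    da = a_ens - a_ens.mean()
    p_xx = dx @ dx / (n_ens - 1)
    p_ax = da @ dx / (n_ens - 1)
    innov = y + rng.normal(0, r, n_ens) - x_ens   # perturbed observations
    x_ens = x_ens + p_xx / (p_xx + r**2) * innov
    a_ens = a_ens + p_ax / (p_xx + r**2) * innov
    # Small additive noise keeps the parameter ensemble from collapsing
    a_ens = a_ens + rng.normal(0, 0.02, n_ens)

print(round(a_ens.mean(), 2))        # approaches the true value 0.9
```

The collapse-prevention noise plays the same role as the parameter perturbation strategies discussed in the abstract: without spread, the cross-covariance vanishes and the parameter freezes at a biased value.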

  18. SURFplus Model Calibration for PBX 9502

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-12-06

    The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full-density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  19. Total Ionizing Dose Influence on the Single Event Effect Sensitivity in Samsung 8Gb NAND Flash Memories

    NASA Astrophysics Data System (ADS)

    Edmonds, Larry D.; Irom, Farokh; Allen, Gregory R.

    2017-08-01

    A recent model provides risk estimates for the deprogramming of initially programmed floating gates via prompt charge loss produced by an ionizing radiation environment. The environment can be a mixture of electrons, protons, and heavy ions. The model requires several input parameters. This paper extends the model to include TID effects in the control circuitry by including one additional parameter. Parameters intended to produce conservative risk estimates for the Samsung 8 Gb SLC NAND flash memory are given, subject to some qualifications.

  20. Calculation of the Initial Magnetic Field for Mercury's Magnetosphere Hybrid Model

    NASA Astrophysics Data System (ADS)

    Alexeev, Igor; Parunakian, David; Dyadechkin, Sergey; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku

    2018-03-01

    Several types of numerical models are used to analyze the interactions of the solar wind flow with Mercury's magnetosphere, including kinetic models that determine magnetic and electric fields based on the spatial distribution of charges and currents, magnetohydrodynamic models that describe plasma as a conductive liquid, and hybrid models that describe ions kinetically in collisionless mode and represent electrons as a massless neutralizing liquid. The structure of the resulting solutions is determined not only by the chosen set of equations that govern the behavior of the plasma, but also by the initial and boundary conditions; i.e., their effects are not limited to the amount of computational work required to achieve a quasi-stationary solution. In this work, we propose using the magnetic field computed by the paraboloid model of Mercury's magnetosphere as the initial condition for subsequent hybrid modeling. The results of the model have been compared to measurements performed by the MESSENGER spacecraft during a single crossing of the magnetosheath and the magnetosphere. The selected orbit lies in the terminator plane, which allows us to observe two crossings of the bow shock and the magnetopause. In our calculations, we have defined the initial parameters of the global magnetospheric current systems in a way that minimizes the deviation of the paraboloid magnetic field from the experimental data along the MESSENGER trajectory. We have shown that the optimal initial field parameters include partial penetration of the interplanetary magnetic field into the magnetosphere with a penetration coefficient of 0.2.

  1. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided that are necessary for remedying the non-identifiability and for unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. 
The derivation of the method is straightforward, and thus the algorithm can be easily implemented in a software package.
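The core of the method, detecting linear dependence among the columns of the output sensitivity matrix, can be illustrated numerically (a toy one-compartment model with two rate constants of which only the sum is identifiable; not one of the paper's examples):

```python
import numpy as np

# Toy model (not from the paper): dx/dt = -(k1 + k2) * x, y = x.
# Only the sum k1 + k2 is identifiable, so the two columns of the
# output sensitivity matrix are linearly dependent.

def output(p, t, x0=1.0):
    k1, k2 = p
    return x0 * np.exp(-(k1 + k2) * t)

def sensitivity_matrix(p, t, h=1e-6):
    """Finite-difference output sensitivities, one column per parameter."""
    cols = []
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = h
        cols.append((output(p + dp, t) - output(p - dp, t)) / (2 * h))
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 50)
S = sensitivity_matrix(np.array([0.3, 0.5]), t)

sv = np.linalg.svd(S, compute_uv=False)
rank = int(np.sum(sv > 1e-6 * sv[0]))
print(rank)   # 1: the columns are linearly dependent, so k1 and k2 are
              # correlated and only their sum can be estimated
```

In the paper's setting the dependent column combinations are found analytically rather than by finite differences, which is what yields the explicit identifiable parameter combinations.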

  2. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  3. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchheit, Thomas E.; Wilcox, Ian Zachary; Sandoval, Andrew J

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating optimum datasets for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.

  4. No Future in the Past? The role of initial topography on landform evolution model predictions

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Coulthard, T. J.; Lowry, J.

    2014-12-01

    Our understanding of earth surface processes is based on long-term empirical understanding, short-term field measurements, and numerical models. In particular, numerical landscape evolution models (LEMs) have been developed with the capability to capture a range of surface (erosion and deposition) and tectonic processes, as well as near-surface or critical-zone processes (i.e. pedogenesis). These models have a range of applications, from understanding surface and whole-of-landscape dynamics through to more applied situations such as degraded site rehabilitation. LEMs are now at the stage of development where, if calibrated, they can provide some level of reliability. However, these models are largely calibrated with parameters determined from present surface conditions, which are the product of much longer-term geology-soil-climate-vegetation interactions. Here, we assess the effect of the initial landscape dimensions and associated error, as well as parameterisation, for a potential post-mining landform design. The results demonstrate that subtle changes in the initial DEM surface, as well as in parameterisation, can have a large impact on landscape behaviour, erosion depth and sediment discharge. For example, the predicted sediment output from LEMs is shown to be highly variable even with very subtle changes in initial surface conditions. This has two important implications: decadal-time-scale field data is needed to (a) better parameterise the models and (b) evaluate their predictions. We first question how a LEM using parameters derived from field plots can be employed to examine long-term landscape evolution. Second, the potential range of outcomes is examined based on estimated temporal parameter change, and third, the need for more detailed and rigorous field data for calibration and validation of these models is discussed.
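
    The sensitivity of predicted sediment output to subtle initial-surface changes can be illustrated with a deliberately minimal, hypothetical one-dimensional linear-diffusion hillslope model; this is not any of the LEMs discussed above, and real LEMs are nonlinear and typically amplify such differences far more strongly. All grid sizes, coefficients and perturbation amplitudes are invented.

```python
import random

def evolve(elev, kappa=0.2, steps=400):
    """Explicit linear-diffusion hillslope evolution on a 1-D profile.
    The right-hand node is a fixed base level; returns the cumulative
    sediment volume discharged through it (illustrative units)."""
    h = list(elev)
    discharged = 0.0
    for _ in range(steps):
        new = h[:]
        for i in range(1, len(h) - 1):
            new[i] = h[i] + kappa * (h[i - 1] - 2.0 * h[i] + h[i + 1])
        discharged += kappa * (h[-2] - h[-1])  # flux into the base-level node
        new[-1] = 0.0                          # base level held fixed
        h = new
    return discharged, h

# A flat-topped ridge draining to base level on the right
base = [1.0] * 11 + [1.0 - 0.1 * i for i in range(1, 11)]
random.seed(1)
noisy = [z + random.uniform(-0.005, 0.005) for z in base]  # subtle "DEM error"

out_base, _ = evolve(base)
out_noisy, _ = evolve(noisy)
```

    Even in this linear toy, a sub-centimetre-scale perturbation of the initial surface measurably changes the discharged sediment total; in nonlinear LEMs with thresholds and channelisation, the divergence is far larger.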

  5. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
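
    sCSKF itself is beyond a short sketch, but the underlying idea of joint state-parameter estimation can be illustrated with a textbook augmented-state extended Kalman filter on a scalar system: the unknown decay parameter is appended to the state and estimated from the same observations. No smoothing or covariance compression is attempted, and all numbers are invented.

```python
import random

def ekf_joint(ys, a0=0.5, x0=0.0, q=1e-6, r=0.01):
    """EKF jointly estimating the state x and the unknown parameter a of
    x[k+1] = a*x[k], from noisy observations y[k] = x[k] + v.
    Augmented state z = [x, a]; Jacobian F = [[a, x], [0, 1]], H = [1, 0]."""
    x, a = x0, a0
    p11, p12, p22 = 1.0, 0.0, 1.0       # 2x2 covariance, initially uncertain
    for y in ys:
        # predict: z' = [a*x, a];  P' = F P F^T + Q
        xp = a * x
        f11, f12 = a, x
        n11 = f11 * (f11 * p11 + f12 * p12) + f12 * (f11 * p12 + f12 * p22) + q
        n12 = f11 * p12 + f12 * p22
        n22 = p22 + q
        p11, p12, p22 = n11, n12, n22
        # update with scalar innovation
        s = p11 + r
        k1, k2 = p11 / s, p12 / s
        innov = y - xp
        x = xp + k1 * innov
        a = a + k2 * innov
        p11, p12, p22 = (1 - k1) * p11, (1 - k1) * p12, p22 - k2 * p12
    return x, a

# Simulate x[k+1] = 0.9*x[k] observed with noise, then recover a ~ 0.9
random.seed(0)
x_true, ys = 5.0, []
for _ in range(150):
    ys.append(x_true + random.gauss(0.0, 0.1))
    x_true *= 0.9
x_hat, a_hat = ekf_joint(ys)
```

    The cross-covariance term p12, built up through the Jacobian entry F[0][1] = x, is what lets observations of the state correct the parameter.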

  6. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested in a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor physical bounds of the model parameters. The proposed inversion scheme can easily be used to process various types of EM measurements without major changes to the scheme.
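
    As a toy analogue of such an inversion, the sketch below fits two Cole-Cole parameters to synthetic complex-permittivity data with a bounded multi-start random search: the clamping step loosely stands in for honoring physical bounds, and the restarts for escaping local minima. This is not the authors' bounded Levenberg algorithm, and all frequencies, bounds and parameter values are assumed for illustration.

```python
import math, random

def cole_cole(freq, eps_inf, eps_s, tau, alpha):
    """Cole-Cole complex permittivity at frequency freq (Hz)."""
    jw = 2j * math.pi * freq
    return eps_inf + (eps_s - eps_inf) / (1.0 + (jw * tau) ** (1.0 - alpha))

FREQS = [10.0 ** k for k in range(7)]   # 1 Hz .. 1 MHz
EPS_INF, ALPHA = 4.0, 0.3               # held fixed in this toy fit

def misfit(params, data):
    eps_s, log_tau = params
    return sum(abs(cole_cole(f, EPS_INF, eps_s, 10.0 ** log_tau, ALPHA) - d) ** 2
               for f, d in zip(FREQS, data))

def invert(data, bounds, restarts=20, iters=300, seed=2):
    """Bounded multi-start random search: candidates outside the physical
    bounds are clamped back in; restarts guard against local minima."""
    rng = random.Random(seed)
    best_p, best_f = None, float("inf")
    for _ in range(restarts):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = misfit(p, data)
        step = [(hi - lo) * 0.25 for lo, hi in bounds]
        for _ in range(iters):
            q = [min(max(pi + rng.gauss(0.0, s), lo), hi)
                 for pi, s, (lo, hi) in zip(p, step, bounds)]
            fq = misfit(q, data)
            if fq < f:
                p, f = q, fq
            else:
                step = [0.97 * s for s in step]  # shrink on failure
        if f < best_f:
            best_p, best_f = p, f
    return best_p, best_f

data = [cole_cole(f, EPS_INF, 30.0, 1e-3, ALPHA) for f in FREQS]
(eps_s_hat, log_tau_hat), final_misfit = invert(data, [(1.0, 100.0), (-6.0, 0.0)])
```

    Searching in log10(tau) rather than tau mirrors the paper's tolerance of initializations spread over orders of magnitude.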

  7. Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2012-01-01

    This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering-dominated atmosphere model to surface reflectance data from the Multi-angle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solutions of neighboring pixels.
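
    For reference, the Henyey-Greenstein phase function used in such single-scattering aerosol models has a simple closed form; the function itself is standard, while the numerical normalization and asymmetry checks below are my own illustration.

```python
import math

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function p(mu), mu = cos(scattering angle),
    normalized so that its integral over all solid angles is 1."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * mu) ** 1.5)

# Numerical check: integral over the sphere is 1, and the asymmetry
# parameter <cos(theta)> equals g (midpoint rule over mu in [-1, 1]).
g, n = 0.7, 200000
norm, mean_mu = 0.0, 0.0
for i in range(n):
    mu = -1.0 + 2.0 * (i + 0.5) / n
    p = henyey_greenstein(mu, g) * (2.0 / n)
    norm += p
    mean_mu += mu * p
norm *= 2.0 * math.pi          # azimuthal integration
mean_mu *= 2.0 * math.pi
```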

  8. Dynamical patterns and regime shifts in the nonlinear model of soil microorganisms growth

    NASA Astrophysics Data System (ADS)

    Zaitseva, Maria; Vladimirov, Artem; Winter, Anna-Marie; Vasilyeva, Nadezda

    2017-04-01

    A dynamical model of soil microorganism growth and turnover is formulated as a system of nonlinear partial differential equations of reaction-diffusion type. We consider spatial distributions of the concentrations of several substrates and microorganisms. Biochemical reactions are modelled by chemical kinetic equations. Transport is modelled by simple linear diffusion for all chemical substances, while for microorganisms we use different transport functions; e.g. some of them can actively move along the gradient of substrate concentration, while others cannot move. We solve our model in two dimensions, starting from a uniform state with small initial perturbations, for various parameters, and find the parameter range where small initial perturbations grow and evolve. We search for bifurcation points and critical regime shifts in our model and analyze the time-space profiles and phase portraits of solutions approaching critical regime shifts in the system, exploring the possibility of detecting such shifts in advance. This work is supported by NordForsk, project #81513.

  9. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Treesearch

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures that estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  10. Estimating parameters of a forest ecosystem C model with measurements of stocks and fluxes as joint constraints

    Treesearch

    Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes

    2010-01-01

    We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...

  11. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
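
    The abstract's point that linear coefficients can be solved for directly while only the nonlinear parameters are adjusted iteratively remains a standard trick (today usually called variable projection). A minimal modern sketch, with an assumed one-exponential model and made-up data: for each trial rate k the coefficient c has a closed-form least-squares solution, so the iteration runs over k alone.

```python
import math

def fit_decay(ts, ys, k_lo=0.1, k_hi=3.0, iters=60):
    """Fit y = c*exp(-k*t). For fixed k the best linear coefficient c follows
    from the normal equations, so only k is iterated; a golden-section search
    minimizes the residual sum of squares over k."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0

    def rss(k):
        basis = [math.exp(-k * t) for t in ts]
        c = sum(b * y for b, y in zip(basis, ys)) / sum(b * b for b in basis)
        return sum((y - c * b) ** 2 for b, y in zip(basis, ys)), c

    a, b = k_lo, k_hi
    for _ in range(iters):
        x1 = b - phi * (b - a)
        x2 = a + phi * (b - a)
        if rss(x1)[0] < rss(x2)[0]:
            b = x2
        else:
            a = x1
    k = (a + b) / 2.0
    return k, rss(k)[1]

ts = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(-0.8 * t) for t in ts]   # noise-free synthetic data
k_hat, c_hat = fit_decay(ts, ys)
```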

  12. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. 
The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
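
    The stratified sampling at the heart of the LHS step is easy to sketch. This is a generic illustration, not the study's code, and the two parameter ranges are invented: each parameter's range is split into as many equal strata as there are samples, each stratum is used exactly once, and the strata are paired across parameters in shuffled order.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample over the given (lo, hi) bounds: one point is
    drawn uniformly inside each of n_samples equal strata per parameter,
    with stratum order shuffled independently per parameter."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (hi - lo) / n_samples
        cols.append([lo + (s + rng.random()) * width for s in strata])
    return [list(row) for row in zip(*cols)]

samples = latin_hypercube(10, [(0.0, 1.0), (100.0, 200.0)])
```

    This is why LHS outperforms plain Monte Carlo for a fixed budget: no stratum of any parameter's range goes unsampled.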

  13. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

    The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. These codes' porous-media models are equilibrium models, which assume the solid matrix and the fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; the 1-D regenerator models used in Stirling design employ non-equilibrium formulations and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.

  15. Description of the National Hydrologic Model for use with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.

    2018-01-08

    This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or to computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.

  16. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from the classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.

  17. Sensitivities of Modeled Tropical Cyclones to Surface Friction and the Coriolis Parameter

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode; Tao, Wei-Kuo; Lau, William K. M. (Technical Monitor)

    2002-01-01

    In this investigation the sensitivities of a 2-D tropical cyclone (TC) model to the surface frictional coefficient and the Coriolis parameter are studied and their implications discussed. The model used is an axisymmetric version of the latest Goddard cloud ensemble model. The model has stretched vertical grids with 33 levels, varying from 30 m near the bottom to 1140 m near the top. The vertical domain is about 21 km. The horizontal domain covers a radius of 962 km (770 grids) with a grid size of 1.25 km. The time step is 10 seconds. An open lateral boundary condition is used. The sea surface temperature is specified at 29 °C. Unless specified otherwise, the Coriolis parameter is set at its value at 15 deg N. Newtonian cooling is used with a time scale of 12 hours. The reference vertical temperature profile used in the Newtonian cooling is that of Jordan. The Newtonian cooling models not only the effect of radiative processes but also the effect of processes with scales larger than that of the TC. Our experiments showed that if the Newtonian cooling is replaced by a radiation package, the simulated TC is much weaker. The initial condition has a temperature uniform in the radial direction, with a vertical profile that of Jordan. The initial winds are a weak Rankine vortex in the tangential winds superimposed on a resting atmosphere. The initial sea level pressure is set at 1015 hPa everywhere. Since there is no surface pressure perturbation, the initial condition is not in gradient balance. This initial condition is enough to lead to cyclogenesis, but the initial stage (say, the first 24 hrs) is not considered to resemble anything observed. The control experiment reaches quasi-equilibrium after about 10 days, with an eye wall extending from 15 to 25 km radius, reasonable compared with observations. The maximum surface wind of more than 70 m/s is located at about 18 km radius. The minimum sea level pressure on day 10 is about 886 hPa. Thus the overall simulation is considered successful and the model is considered adequate for our investigation.

  18. The WS transform for the Kuramoto model with distributed amplitudes, phase lag and time delay

    NASA Astrophysics Data System (ADS)

    Lohe, M. A.

    2017-12-01

    We apply the Watanabe-Strogatz (WS) transform to a generalized Kuramoto model with distributed parameters describing the amplitude of oscillation, phase lag, and time delay at each node of the system. The model has global coupling and identical frequencies, but allows for repulsive interactions at arbitrary nodes leading to conformist-contrarian phenomena together with variable amplitude and time-delay effects. We show how to determine the initial values of the WS system for any initial conditions for the Kuramoto system, and investigate the asymptotic behaviour of the WS variables. For the case of zero time delay the possible asymptotic configurations are determined by the sign of a single parameter μ which measures whether or not the attractive nodes dominate the repulsive nodes. If μ>0 the system completely synchronizes from general initial conditions, whereas if μ<0 one of two types of phase-locked synchronization occurs, depending on the initial values, while for μ=0 periodic solutions can occur. For the case of arbitrary non-uniform time delays we derive a stability condition for completely synchronized solutions.
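
    A minimal numerical sketch of the underlying mean-field Kuramoto dynamics with identical frequencies (omitting the amplitude, phase-lag and delay terms of the paper's model, so this is only a much-reduced cousin of it) shows an attractive population locking up while a purely repulsive one stays incoherent. The population size, coupling values and integration settings are invented.

```python
import cmath, math, random

def final_order(phases, couplings, dt=0.05, steps=4000):
    """Euler-integrate d(theta_i)/dt = K_i * r * sin(psi - theta_i), where
    r*exp(1j*psi) is the instantaneous mean field; returns the final r."""
    th = list(phases)
    n = len(th)
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in th) / n
        r, psi = abs(z), cmath.phase(z)
        th = [t + dt * k * r * math.sin(psi - t) for t, k in zip(th, couplings)]
    return abs(sum(cmath.exp(1j * t) for t in th) / n)

rng = random.Random(3)
phases = [rng.uniform(-math.pi, math.pi) for _ in range(40)]
r_sync = final_order(phases, [1.0] * 40)    # all attractive: synchronizes
r_rep = final_order(phases, [-1.0] * 40)    # all repulsive: stays incoherent
```

    The order parameter r plays the role of the paper's synchronization measure: r near 1 corresponds to the completely synchronized regime, r near 0 to incoherence.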

  19. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    PubMed

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters via the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and solution temperature were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, current density 6.0 mA/cm(2), initial boron concentration 100 mg/L and solution temperature 293 K. The current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, since high temperature decreased the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte increased the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically; experimentally obtained values were fitted with values predicted from the empirical model as follows: [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  20. Measurement-based perturbation theory and differential equation parameter estimation with applications to satellite gravimetry

    NASA Astrophysics Data System (ADS)

    Xu, Peiliang

    2018-06-01

    The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such gravitational products have found the widest possible multidisciplinary applications in the Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models.
Since the solutions are global uniformly convergent, theoretically speaking, they are able to extract smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into the nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the global uniformly convergent solutions to the Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
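
    The paper's central point, that unknown initial conditions should be estimated jointly with the equation parameters rather than zeroed out, can be illustrated on the simplest tractable case. For dx/dt = -k*x the solution x(t) = x0*exp(-k*t) lets both the parameter k and the initial condition x0 be recovered by log-linear regression on tracked samples. The satellite problem itself is vastly harder; everything below is an invented toy.

```python
import math, random

def estimate_ode(ts, xs):
    """Log-linear least squares for dx/dt = -k*x: recovers both the equation
    parameter k and the initial condition x0 from sampled trajectory data,
    since ln x(t) = ln x0 - k*t."""
    ys = [math.log(x) for x in xs]
    n = len(ts)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    slope = (n * sty - st * sy) / (n * stt - st * st)
    return -slope, math.exp((sy - slope * st) / n)

# Noisy "tracking" of a trajectory with k = 0.6 and unknown x0 = 2.5
rng = random.Random(7)
ts = [0.1 * i for i in range(30)]
xs = [2.5 * math.exp(-0.6 * t + rng.gauss(0.0, 0.01)) for t in ts]
k_hat, x0_hat = estimate_ode(ts, xs)
```

    Fixing x0 in advance (the analogue of zero initial values for the partial derivatives) would bias the estimate of k; treating both as unknowns lets the data determine them together.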

  1. Love-type wave propagation in a pre-stressed viscoelastic medium influenced by smooth moving punch

    NASA Astrophysics Data System (ADS)

    Singh, A. K.; Parween, Z.; Chatterjee, M.; Chattopadhyay, A.

    2015-04-01

    In the present paper, a mathematical model is developed to study the effect of a smooth moving semi-infinite punch on the propagation of a Love-type wave in an initially stressed viscoelastic strip. The dynamic stress concentration due to the punch for a force of constant intensity has been obtained in closed form. A method based on the Wiener-Hopf technique, as indicated by Matczynski, has been employed. The study manifests the significant effect of various parameters, viz. the speed of the moving punch relative to the Love-type wave speed, horizontal compressive/tensile initial stress, vertical compressive/tensile initial stress, the frequency parameter, and the viscoelastic parameter, on the dynamic stress concentration due to the semi-infinite punch. Moreover, some important peculiarities have been traced out and depicted by means of graphs.

  2. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  3. Reciprocal Sliding Friction Model for an Electro-Deposited Coating and Its Parameter Estimation Using Markov Chain Monte Carlo Method

    PubMed Central

    Kim, Kyungmok; Lee, Jaewook

    2016-01-01

    This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using ball-on-flat plate test apparatus are performed to determine an evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. It is identified that estimated friction coefficients obtained by MCMC approach are in good agreement with measured ones. PMID:28773359
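
    The MCMC step can be sketched with a plain Metropolis random walk. The friction model below is a deliberately simplified linear stand-in, mu(t) = mu0 + b*t, not the paper's Kachanov-type damage law, and the noise level, proposal widths and data are all invented.

```python
import math, random

def log_likelihood(params, ts, ys, sigma=0.01):
    """Gaussian log-likelihood for the toy friction model mu(t) = mu0 + b*t."""
    mu0, b = params
    return -sum((y - (mu0 + b * t)) ** 2 for t, y in zip(ts, ys)) / (2.0 * sigma ** 2)

def metropolis(ts, ys, start, prop=(0.01, 0.003), steps=20000, seed=4):
    """Random-walk Metropolis sampling of the posterior (flat prior)."""
    rng = random.Random(seed)
    cur = list(start)
    cur_ll = log_likelihood(cur, ts, ys)
    chain = []
    for _ in range(steps):
        cand = [c + rng.gauss(0.0, p) for c, p in zip(cur, prop)]
        cand_ll = log_likelihood(cand, ts, ys)
        if rng.random() < math.exp(min(0.0, cand_ll - cur_ll)):  # accept/reject
            cur, cur_ll = cand, cand_ll
        chain.append(tuple(cur))
    return chain

# Synthetic friction measurements with mu0 = 0.12, b = 0.02
rng = random.Random(5)
ts = [0.1 * i for i in range(50)]
ys = [0.12 + 0.02 * t + rng.gauss(0.0, 0.01) for t in ts]
chain = metropolis(ts, ys, start=[0.2, 0.0])
kept = chain[len(chain) // 2:]                 # discard burn-in
mu0_hat = sum(p[0] for p in kept) / len(kept)
b_hat = sum(p[1] for p in kept) / len(kept)
```

    Posterior means estimated this way agree with the generating parameters, mirroring the paper's agreement between MCMC-estimated and measured friction coefficients; the retained chain also yields credible intervals for free.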

  4. Modeling Efficient Serial Visual Search

    DTIC Science & Technology

    2012-08-01

parafovea size) to explore the parameter space associated with serial search efficiency. Visual search as a paradigm has been studied meticulously for...continues (Over, Hooge, Vlaskamp, & Erkelens, 2007). Over et al. (2007) found that participants initially attended to general properties of the search environ...the efficiency of human serial visual search. There were three parameters that were manipulated in the modeling of the visual search process in this

  5. Numerical study of unsteady Williamson fluid flow and heat transfer in the presence of MHD through a permeable stretching surface

    NASA Astrophysics Data System (ADS)

    Bibi, Madiha; Khalil-Ur-Rehman; Malik, M. Y.; Tahir, M.

    2018-04-01

In the present article, unsteady flow field characteristics of the Williamson fluid model are explored. Nanosized particles are suspended in the flow regime, which interacts with a magnetic field. The fluid flow is induced by a stretching permeable surface. The flow model is governed by coupled partial differential equations, which are converted into ordinary differential equations posed as an initial value problem; the shooting method is then used to obtain a numerical solution. The mathematical modeling yields the following physical parameters: the Weissenberg number, the Prandtl number, the unsteadiness parameter, the magnetic parameter, the mass transfer parameter, the Lewis number, the thermophoresis parameter and the Brownian motion parameter. It is found that the Williamson fluid velocity, temperature and nanoparticle concentration are decreasing functions of the unsteadiness parameter.
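The shooting idea used here, reducing a boundary value problem to an initial value problem with an unknown initial slope, can be illustrated on a far simpler equation than the Williamson model. The BVP below is purely illustrative (y'' = -y with y(0) = 0, y(π/2) = 1, whose exact solution y = sin x gives y'(0) = 1):

```python
import numpy as np

def integrate(s, n=1000):
    """RK4 integration of the IVP y(0)=0, y'(0)=s; returns y at x = pi/2."""
    h = (np.pi / 2) / n
    y = np.array([0.0, s])                 # state: [y, y']
    f = lambda u: np.array([u[1], -u[0]])  # y'' = -y as a first-order system
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0]

# Bisection on the shooting parameter s = y'(0) until the far boundary
# condition y(pi/2) = 1 is satisfied.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)
print(slope)  # exact answer is y'(0) = 1 (solution y = sin x)
```

The same structure applies to the similarity-reduced nanofluid equations: guess the unknown initial derivatives, integrate the IVP, and iterate on the guesses until the far-field boundary conditions are met.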

  6. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended-capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.

  7. System identification for modeling for control of flexible structures

    NASA Technical Reports Server (NTRS)

    Mettler, Edward; Milman, Mark

    1986-01-01

The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes are decreased. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.

  8. Climate modeling for Yamal territory using supercomputer atmospheric circulation model ECHAM5-wiso

    NASA Astrophysics Data System (ADS)

    Denisova, N. Y.; Gribanov, K. G.; Werner, M.; Zakharov, V. I.

    2015-11-01

The subject of this study is the dependence of monthly mean regional averages of model atmospheric parameters on how far in the past the initial and boundary conditions are set. We used the atmospheric general circulation model ECHAM5-wiso to simulate monthly mean regional averages of climate parameters for the Yamal region over pre-modeling periods of different length. The time interval was varied from several months to 12 years. We present the dependence of modeled monthly mean regional averages of surface temperature, 2 m air temperature and humidity for December 2000 on the duration of pre-modeling. Comparison of these results with reanalysis data showed that the best agreement with the true parameters is reached when the duration of pre-modeling is approximately 10 years.

  9. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) this time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not need the objective function in analytical form: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
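Derivative-free methods of the kind described above only require point evaluations of the objective. A minimal compass (pattern) search illustrates the idea; the quadratic objective here is a hypothetical stand-in for the convergence time, which in the real problem is obtained only by simulating the spacecraft:

```python
import numpy as np

def objective(p):
    # hypothetical smooth surrogate with minimum at (3.0, -1.5)
    return (p[0] - 3.0)**2 + 2.0 * (p[1] + 1.5)**2

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Poll +/- step along each coordinate; shrink the stencil on failure."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                cand = x.copy()
                cand[i] += s
                fc = f(cand)
                if fc < fx:            # accept first improving poll point
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5                # no poll point improved: refine
            if step < tol:
                break
    return x, fx

x_best, f_best = compass_search(objective, [0.0, 0.0])
print(x_best, f_best)
```

On this surrogate the search converges to (3.0, -1.5); in the paper's setting each `f` evaluation would be one closed-loop attitude simulation, which is why evaluation counts, not gradients, dominate the cost.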

  10. Complete set of homogeneous isotropic analytic solutions in scalar-tensor cosmology with radiation and curvature

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-10-01

    We study a model of a scalar field minimally coupled to gravity, with a specific potential energy for the scalar field, and include curvature and radiation as two additional parameters. Our goal is to obtain analytically the complete set of configurations of a homogeneous and isotropic universe as a function of time. This leads to a geodesically complete description of the Universe, including the passage through the cosmological singularities, at the classical level. We give all the solutions analytically without any restrictions on the parameter space of the model or initial values of the fields. We find that for generic solutions the Universe goes through a singular (zero-size) bounce by entering a period of antigravity at each big crunch and exiting from it at the following big bang. This happens cyclically again and again without violating the null-energy condition. There is a special subset of geodesically complete nongeneric solutions which perform zero-size bounces without ever entering the antigravity regime in all cycles. For these, initial values of the fields are synchronized and quantized but the parameters of the model are not restricted. There is also a subset of spatial curvature-induced solutions that have finite-size bounces in the gravity regime and never enter the antigravity phase. These exist only within a small continuous domain of parameter space without fine-tuning the initial conditions. To obtain these results, we identified 25 regions of a 6-parameter space in which the complete set of analytic solutions are explicitly obtained.

  11. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and is modeled as an Euler-Bernoulli beam with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter models for predicting the performance of the energy harvesting system. We demonstrate that, in the case of parametric excitation only, the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect this threshold.

  13. Detonation initiation in a model of explosive: Comparative atomistic and hydrodynamics simulations

    NASA Astrophysics Data System (ADS)

    Murzov, S. A.; Sergeev, O. V.; Dyachkov, S. A.; Egorova, M. S.; Parshikov, A. N.; Zhakhovsky, V. V.

    2016-11-01

Here we extend consistent simulations to reactive materials, using the AB model explosive as an example. The kinetic model of chemical reactions observed in molecular dynamics (MD) simulations of a self-sustained detonation wave can be used in hydrodynamic simulations of detonation initiation. Kinetic coefficients are obtained by minimizing the difference between species profiles calculated from the kinetic model and those observed in MD simulations of isochoric thermal decomposition, using the downhill simplex method combined with a random walk in the multidimensional space of kinetic model fitting parameters.

  14. Instabilities in a nonstationary model of self-gravitating disks. III. The phenomenon of lopsidedness and a comparison of perturbation modes

    NASA Astrophysics Data System (ADS)

    Mirtadjieva, K. T.; Nuritdinov, S. N.; Ruzibaev, J. K.; Khalid, Muhammad

    2011-06-01

This is an examination of the gravitational instability of the major large-scale perturbation modes for a fixed value of the azimuthal wave number m = 1 in nonlinearly nonstationary disk models with isotropic and anisotropic velocity diagrams, for the purpose of explaining the displacement of the nucleus away from the geometric center (lopsidedness) in spiral galaxies. Nonstationary analogs of the dispersion relations for these perturbation modes are obtained. Critical diagrams of the initial virial ratio are constructed from the rotation parameters for the models in each case. A comparative analysis is made of the instability growth rates for the major horizontal perturbation modes in terms of two models, and it is found that, on average, the instability growth rate for the m = 1 mode with a radial wave number N = 3 almost always has a clear advantage relative to the other modes. An analysis of these results shows that if the initial total kinetic energy in an isotropic model is no more than 12.4% of the initial potential energy, then, regardless of the value of the rotation parameter Ω, an instability of the radial motions always occurs and causes the nucleus to shift away from the geometrical center. This instability is aperiodic when Ω = 0 and is oscillatory when Ω ≠ 0. For the anisotropic model, this kind of structure involving the nucleus develops when the initial total kinetic energy in the model is no more than 30.6% of the initial potential energy.

  15. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    PubMed Central

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727

  16. Numerical Parameter Optimization of the Ignition and Growth Model for HMX Based Plastic Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence

    2017-06-01

We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations, but calibrating the model is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock times of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC. LLNL-ABS-724898.
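A minimal differential evolution loop (the classic rand/1/bin scheme) conveys the optimization strategy. Here a cheap analytic function (entirely hypothetical) stands in for the expensive ALE3D shock time-of-arrival mismatch:

```python
import numpy as np

rng = np.random.default_rng(1)

def toa_mismatch(p):
    # hypothetical stand-in objective with minimum at p = (0.5, 2.0)
    return (p[0] - 0.5)**2 + (p[1] - 2.0)**2

bounds = np.array([[0.0, 1.0], [0.0, 4.0]])   # parameter box constraints
npop, F, CR = 20, 0.7, 0.9                    # population, mutation, crossover
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(npop, 2))
cost = np.array([toa_mismatch(p) for p in pop])

for _ in range(200):
    for i in range(npop):
        # rand/1 mutation from three distinct other members
        a, b, c = pop[rng.choice([j for j in range(npop) if j != i],
                                 3, replace=False)]
        mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
        cross = rng.random(2) < CR
        cross[rng.integers(2)] = True          # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        tc = toa_mismatch(trial)
        if tc <= cost[i]:                       # greedy selection
            pop[i], cost[i] = trial, tc

best = pop[np.argmin(cost)]
print(best)
```

In the calibration described above, each `toa_mismatch` call would launch a full hydrodynamic simulation, so the population size and generation count trade accuracy against wall-clock cost.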

  17. Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale

    NASA Astrophysics Data System (ADS)

    Hakala, K. A.; Hay, L.; Markstrom, S. L.

    2014-12-01

    The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.

  18. The role of different sampling methods in improving biological activity prediction using deep belief network.

    PubMed

    Ghasemi, Fahimeh; Fassihi, Afshin; Pérez-Sánchez, Horacio; Mehri Dehnavi, Alireza

    2017-02-05

Thousands of molecules and descriptors are available to a medicinal chemist thanks to technological advancements in different branches of chemistry. This fact, as well as the correlation between descriptors, has raised new problems in quantitative structure-activity relationship studies. Proper parameter initialization in statistical modeling has emerged as another challenge in recent years, since random selection of parameters leads to poor performance of deep neural networks (DNNs). In this research, a deep belief network (DBN) was applied to initialize DNNs. A DBN is composed of stacks of restricted Boltzmann machines, an energy-based method that requires computing the log-likelihood gradient over all samples. Three different sampling approaches were suggested to estimate this gradient. The impact of DBN initialization based on each of these sampling approaches was evaluated on the DNN architecture for predicting the biological activity of all fifteen Kaggle targets, which contain more than 70k molecules. As in other fields of research, the outputs of these models demonstrated significant superiority over those of DNNs with random parameters. © 2016 Wiley Periodicals, Inc.
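One widely used sampling approach to the RBM log-likelihood gradient (not necessarily one of the three the paper proposes) is contrastive divergence with a single Gibbs step (CD-1). A toy binary dataset stands in for the molecular descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid = 6, 3
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: two repeating binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.1
for epoch in range(300):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_hid)                    # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_vis)                  # one Gibbs step back
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_hid)                    # negative phase
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n          # CD-1 gradient estimate
    b_vis += lr * (v0 - v1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

# Reconstruction error should drop well below the untrained baseline (~0.25).
recon = sigmoid(sigmoid(data @ W + b_hid) @ W.T + b_vis)
err = np.mean((data - recon)**2)
print(err)
```

In DBN pretraining, the weights learned this way for each RBM layer become the initial weights of the corresponding DNN layer, replacing the random initialization the paper identifies as a weakness.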

  19. Attitude determination of a high altitude balloon system. Part 2: Development of the parameter determination process

    NASA Technical Reports Server (NTRS)

    Nigro, N. J.; Elkouh, A. F.

    1975-01-01

The attitude of the balloon system is determined as a function of time if: (a) a method for simulating the motion of the system is available, and (b) the initial state is known. The initial state is obtained by fitting the system motion (as measured by sensors) to the corresponding output predicted by the mathematical model. In the case of the LACATE experiment the sensors consisted of three orthogonally oriented rate gyros and a magnetometer, all mounted on the research platform. The initial state was obtained by fitting the angular velocity components measured with the gyros to the corresponding values obtained from the solution of the math model. A block diagram illustrating the attitude determination process employed for the LACATE experiment is shown. The process consists of three essential parts: a procedure for simulating the balloon system, an instrumentation system for measuring the output, and a parameter estimation process for systematically and efficiently solving for the initial state. Results are presented and discussed.

  20. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.

  1. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It was found that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for the parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
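Both sensitivity measures can be approximated by finite differences around nominal parameter values. The sketch below uses a hypothetical first-order decay model in place of the BioWin AS model (parameter names and values are illustrative only):

```python
import numpy as np

def model(theta, t):
    # hypothetical stand-in model: first-order decay y(t) = y0 * exp(-k t)
    k, y0 = theta
    return y0 * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 50)
theta0 = np.array([0.3, 5.0])     # nominal parameter values (illustrative)
y_nom = model(theta0, t)

def normalized_sensitivity(j, rel=1e-4):
    """S_ij ~ (dy_i/dp_j) * p_j / y_i, by a forward difference in p_j."""
    dp = rel * theta0[j]
    th = theta0.copy()
    th[j] += dp
    return (model(th, t) - y_nom) / dp * theta0[j] / y_nom

results = {}
for j, name in enumerate(["k", "y0"]):
    S = normalized_sensitivity(j)
    # mean square sensitivity measure: RMS of S over all outputs
    results[name] = np.sqrt(np.mean(S**2))
print(results)
```

For this decay model the measures have closed forms (S = -k·t for k, S = 1 for y0), so the finite-difference values can be checked directly; ranking parameters by the RMS measure is what singles out the "influential" subset.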

  2. A Four-parameter Budyko Equation for Mean Annual Water Balance

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Wang, D.

    2016-12-01

In this study, a four-parameter Budyko equation for long-term water balance at the watershed scale is derived based on the proportionality relationships of the two-stage partitioning of precipitation. The four-parameter Budyko equation provides a practical solution that balances model simplicity against representation of the dominant hydrologic processes. Under the four-parameter Budyko framework, the key hydrologic processes related to the lower bound of the Budyko curve are determined; that is, the lower bound corresponds to the situation in which surface runoff and the initial evaporation not competing with base flow generation are both zero. The derived model is applied to 166 MOPEX watersheds in the United States, and the dominant controlling factors on each parameter are determined. Then, four statistical models are proposed to predict the four model parameters based on the dominant controlling factors, e.g., saturated hydraulic conductivity, fraction of sand, time period between two storms, watershed slope, and Normalized Difference Vegetation Index. This study shows a potential application of the four-parameter Budyko equation to constrain land-surface parameterizations in ungauged watersheds or general circulation models.

  3. EVOLUTIONARY MODELS OF SUPER-EARTHS AND MINI-NEPTUNES INCORPORATING COOLING AND MASS LOSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howe, Alex R.; Burrows, Adam, E-mail: arhowe@astro.princeton.edu, E-mail: burrows@astro.princeton.edu

We construct models of the structural evolution of super-Earth- and mini-Neptune-type exoplanets with H2–He envelopes, incorporating radiative cooling and XUV-driven mass loss. We conduct a parameter study of these models, focusing on initial mass, radius, and envelope mass fractions, as well as orbital distance, metallicity, and the specific prescription for mass loss. From these calculations, we investigate how the observed masses and radii of exoplanets today relate to the distribution of their initial conditions. Orbital distance and the initial envelope mass fraction are the most important factors determining planetary evolution, particularly radius evolution. Initial mass also becomes important below a “turnoff mass,” which varies with orbital distance, with mass–radius curves being approximately flat for higher masses. Initial radius is the least important parameter we study, with very little difference between the hot start and cold start limits after an age of 100 Myr. Model sets with no mass loss fail to produce results consistent with observations, but a plausible range of mass-loss scenarios is allowed. In addition, we present scenarios for the formation of the Kepler-11 planets. Our best fit to observations of Kepler-11b and Kepler-11c involves formation beyond the snow line, after which they moved inward, circularized, and underwent a reduced degree of mass loss.

  4. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ_NEE) across the ensemble members, from ~2-3 g C m⁻² yr⁻¹ (with uncertain parameters) to ~45 g C m⁻² yr⁻¹ (C3 grass) and ~75 g C m⁻² yr⁻¹ (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ_NEE ~4.0-13.5 g C m⁻² yr⁻¹ with perturbed parameters, meteorological forcings and initial states). We conclude that LAI and NEE uncertainty with CLM is clearly underestimated if uncertain meteorological forcings and initial states are not taken into account.

  5. State and parameter estimation of the heat shock response system using Kalman and particle filters.

    PubMed

    Liu, Xin; Niranjan, Mahesan

    2012-06-01

Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when their parameters are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) the extended Kalman filter (EKF), (ii) the unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock

  6. Parameters and kinetics of olive mill wastewater dephenolization by immobilized Rhodotorula glutinis cells.

    PubMed

    Bozkoyunlu, Gaye; Takaç, Serpil

    2014-01-01

Olive mill wastewater (OMW) with a total phenol (TP) concentration in the range of 300-1200 mg/L was treated with alginate-immobilized Rhodotorula glutinis cells in a batch system. The effects of pellet properties (diameter, alginate concentration and cell loading) and operational parameters (initial TP concentration, agitation rate and reusability of pellets) on the dephenolization of OMW were studied. Up to 87% dephenolization was obtained after 120 h of biodegradation. The number of times the pellets could be reused increased with the addition of calcium ions to the biodegradation medium. The overall effectiveness factors calculated for different conditions showed that diffusional limitations arising from pellet size and pellet composition could be neglected. Mass transfer limitations appeared to be more effective at high substrate concentrations and low agitation rates. The parameters of the logistic model for the growth kinetics of R. glutinis in OMW were estimated at different initial phenol concentrations by curve-fitting the experimental data with the model.
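Fitting a logistic growth model to concentration-time data can be sketched with a coarse grid search over the growth rate and carrying capacity. The data below are synthetic, with the initial value assumed known, not the R. glutinis measurements:

```python
import numpy as np

def logistic(t, x0, mu, xmax):
    # analytic solution of the logistic ODE dx/dt = mu * x * (1 - x/xmax)
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

# Synthetic "observations": hypothetical biomass curve plus noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 120, 25)                       # hours
x_obs = logistic(t, 0.2, 0.08, 6.0) + rng.normal(0, 0.05, t.size)

# Coarse grid search over (mu, xmax), with x0 = 0.2 assumed known.
best, best_sse = None, np.inf
for mu in np.linspace(0.02, 0.2, 60):
    for xmax in np.linspace(3.0, 9.0, 60):
        sse = np.sum((x_obs - logistic(t, 0.2, mu, xmax))**2)
        if sse < best_sse:
            best, best_sse = (mu, xmax), sse

mu_hat, xmax_hat = best
print(mu_hat, xmax_hat)
```

A grid search is crude but transparent; in practice a nonlinear least-squares routine would refine these estimates, and repeating the fit at each initial phenol concentration yields the concentration dependence of the kinetic parameters.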

  7. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. Falsely matched pairs are eliminated by a novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
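
    The idea of turning matched feature pairs into an initial deformation guess can be sketched with a least-squares fit of first-order (affine) deformation parameters. This is a hypothetical simplification of FB-IG, which fits a nonuniform RKHS deformation field rather than a single affine map.

```python
import numpy as np

def affine_initial_guess(src, dst):
    """Least-squares first-order deformation (u, du/dx, du/dy, v, dv/dx, dv/dy)
    from matched feature coordinates (src -> dst), each of shape (N, 2)."""
    x, y = src[:, 0], src[:, 1]
    ones = np.ones_like(x)
    A = np.zeros((2 * len(x), 6))
    A[0::2, 0:3] = np.column_stack([ones, x, y])   # x-displacement equations
    A[1::2, 3:6] = np.column_stack([ones, x, y])   # y-displacement equations
    b = (dst - src).ravel()                        # (dx0, dy0, dx1, dy1, ...)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Synthetic check: recover a known first-order deformation exactly.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(50, 2))
p_true = np.array([0.5, 0.01, -0.02, -0.3, 0.005, 0.015])
dst = src + np.column_stack([
    p_true[0] + p_true[1] * src[:, 0] + p_true[2] * src[:, 1],
    p_true[3] + p_true[4] * src[:, 0] + p_true[5] * src[:, 1],
])
p_hat = affine_initial_guess(src, dst)
```

The recovered vector seeds the subpixel searching stage, which then refines each POI individually.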

  8. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality- and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  9. Use of an anaerobic sequencing batch reactor for parameter estimation in modelling of anaerobic digestion.

    PubMed

    Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E

    2004-01-01

    The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the published values are almost unknown. Additionally, the standard platforms for parameter identification, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for parameter estimation, as they are dynamic by nature and their repeatable cycles make it possible to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k(m)) and half-saturation concentration (K(S)). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). By interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters and three cycles for the ethanol parameters. The parameters found performed well in the short term and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are the heavy computational requirements for multiple cycles and the difficulty of establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
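
    A minimal sketch of estimating the two Monod parameters from batch-cycle substrate data, assuming a constant biomass concentration (a deliberate simplification of the full ADM1 structure) and noiseless synthetic observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def substrate(t, km, ks, s0=2.0, x=1.0, dt=0.01):
    """Euler-integrated Monod uptake dS/dt = -km * X * S / (ks + S),
    with biomass X held constant for illustration."""
    out, s, tcur = [], s0, 0.0
    for tk in t:
        while tcur < tk - 1e-12:
            s = max(s - dt * km * x * s / (ks + s), 0.0)
            tcur += dt
        out.append(s)
    return np.array(out)

t_obs = np.linspace(0.0, 4.0, 20)                # sampling times within a cycle
s_obs = substrate(t_obs, km=1.5, ks=0.5)         # synthetic acetate data
(km_hat, ks_hat), _ = curve_fit(substrate, t_obs, s_obs, p0=[1.0, 1.0])
```

The strong correlation between k(m) and K(S) noted in the abstract shows up here as an elongated valley in the least-squares surface; the fit still converges because the data are noiseless.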

  10. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract the optical properties and top-layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure that addresses the two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimates. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed procedure achieved high estimation accuracy and a 95% reduction in computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. The strategies used in the proposed procedure could benefit both the modeling and the experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
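
    The two-step idea, a coarse lookup-table search that seeds a local iterative fit, can be sketched with a toy forward model; the model and parameter ranges below are illustrative, not the paper's two-layered tissue model.

```python
import numpy as np
from scipy.optimize import minimize

lam = np.linspace(0.5, 1.0, 50)                 # wavelength grid (a.u.)

def forward(mu_a, mu_s):
    """Toy diffuse-reflectance forward model (illustrative only)."""
    return mu_s / (mu_a + mu_s) * np.exp(-2.0 * mu_a * lam)

# Step 1: the nearest entry of a coarse lookup table supplies the initial guess.
grid = [(a, s) for a in np.linspace(0.1, 2.0, 20)
               for s in np.linspace(5.0, 20.0, 20)]
target = forward(0.7, 12.0)                     # "measured" spectrum
init = min(grid, key=lambda g: np.sum((forward(*g) - target) ** 2))

# Step 2: iterative least-squares fitting refines the lookup-table guess.
res = minimize(lambda p: np.sum((forward(*p) - target) ** 2),
               init, method="Nelder-Mead")
mu_a_hat, mu_s_hat = res.x
```

Starting the local fit from the lookup-table guess is what protects it from the local minima that a cold start can fall into.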

  11. Models for estimating photosynthesis parameters from in situ production profiles

    NASA Astrophysics Data System (ADS)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. 
    The paper complements previous work on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work, we argue that the choice of primary production model should reflect the available data, and that these models should be data-driven with regard to parameter estimation.
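
    As a concrete example of a photosynthesis irradiance function, the widely used saturating exponential form P(I) = P_m(1 - exp(-αI/P_m)) can be fitted to production data to recover the initial slope α and assimilation number P_m. The data below are synthetic, not Station ALOHA measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_curve(I, alpha, pm):
    """Saturating exponential photosynthesis-irradiance function:
    initial slope alpha, assimilation number pm."""
    return pm * (1.0 - np.exp(-alpha * I / pm))

I = np.linspace(0.0, 1500.0, 30)                  # irradiance (a.u.)
rng = np.random.default_rng(7)
p_obs = pi_curve(I, 0.05, 8.0) + 0.1 * rng.normal(size=I.size)
(alpha_hat, pm_hat), _ = curve_fit(pi_curve, I, p_obs, p0=[0.02, 5.0])
```

Fitting the same data with a different photosynthesis irradiance function would generally return different parameter values, which is the parameter-exchange effect examined in the abstract.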

  12. Numerical scheme approximating solution and parameters in a beam equation

    NASA Astrophysics Data System (ADS)

    Ferdinand, Robert R.

    2003-12-01

    We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate the model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse-method procedure that involves the minimization of a least-squares cost functional. Numerical results are presented and future work is discussed.

  13. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades, making better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameters' highest density intervals (HDIs), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not, and outline continued experiments to vet the method.
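
    The affine invariant ensemble sampler rests on the Goodman-Weare stretch move, which fits in a few lines. The sketch below samples a toy Gaussian posterior rather than a lithographic model, and updates one walker at a time (production samplers such as emcee update half the ensemble in parallel).

```python
import numpy as np

def aies(log_prob, p0, n_steps=600, a=2.0, seed=0):
    """Goodman-Weare stretch-move ensemble sampler; p0 is (n_walkers, n_dim)."""
    rng = np.random.default_rng(seed)
    walkers = np.array(p0, dtype=float)
    nw, nd = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    chain = []
    for _ in range(n_steps):
        for k in range(nw):
            j = rng.integers(nw - 1)
            j = j if j < k else j + 1                      # partner walker != k
            z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a  # stretch factor
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            # Accept with probability min(1, z^(d-1) * p(prop)/p(current)).
            if np.log(rng.random()) < (nd - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        chain.append(walkers.copy())
    return np.array(chain)

# Demo: sample a 2-D standard normal "posterior".
init = np.random.default_rng(1).normal(size=(20, 2))
chain = aies(lambda x: -0.5 * np.sum(x ** 2), init)
samples = chain[300:].reshape(-1, 2)                       # discard burn-in
```

The affine invariance means no step-size tuning is needed even for strongly correlated parameters, which is why the method suits the correlated parameter spaces of lithographic models.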

  14. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial patterns of the multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted and the procedure is repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This demonstrates that the iterative EOF-based approach can capture the groundwater flow tendency and detect the correction vectors for the sources of simulation error. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
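
    The EOF decomposition at the core of the method reduces to a singular value decomposition of the anomaly matrix. A minimal sketch on a synthetic two-mode field (not groundwater data) is:

```python
import numpy as np

def eof_decompose(field):
    """EOF analysis of a space-time matrix via SVD: rows are time snapshots,
    columns are locations (e.g. storage hydrographs at observation wells)."""
    anom = field - field.mean(axis=0)            # remove the temporal mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    eofs = vt                                    # spatial patterns
    pcs = u * s                                  # expansion coefficients
    return eofs, pcs, s**2 / np.sum(s**2)        # plus explained-variance fractions

# Synthetic field built from two known spatial modes with oscillating amplitudes.
t = np.linspace(0, 1, 200)[:, None]
xloc = np.linspace(0, 1, 30)[None, :]
field = (np.sin(2 * np.pi * 3 * t) * np.cos(np.pi * xloc)
         + 0.3 * np.cos(2 * np.pi * 5 * t) * np.sin(2 * np.pi * xloc))
eofs, pcs, frac = eof_decompose(field)
recon = pcs[:, :2] @ eofs[:2]                    # rank-2 reconstruction
```

Truncating to the leading EOFs, as the calibration method does when choosing the "best EOF combination", keeps the modes that explain most of the variance and discards noise-dominated ones.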

  15. An improved computer model for prediction of axial gas turbine performance losses

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1984-01-01

    The calculation model performs a rapid preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; and (3) predictions of expected turbine performance. The model uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with an array of seven NASA single-stage axial gas turbine configurations.

  16. Using a hybrid model to predict solute transfer from initially saturated soil into surface runoff with controlled drainage water.

    PubMed

    Tong, Juxiu; Hu, Bill X; Yang, Jinzhong; Zhu, Yan

    2016-06-01

    The mixing layer theory is not suitable for predicting solute transfer from initially saturated soil to surface runoff water under controlled drainage conditions. By coupling the mixing layer theory model with the numerical model Hydrus-1D, a hybrid solute transfer model is proposed to predict solute transfer from an initially saturated soil into surface water under controlled drainage conditions. The model can also account for increasing ponding depth on the soil surface before surface runoff begins. Data on solute concentrations in surface runoff and drainage water from a sand experiment are used as the reference experiment, and the parameters of the water flow and solute transfer model and the mixing layer depth under the controlled drainage condition are identified from them. Based on these identified parameters, the model is applied to another initially saturated sand experiment, with constant and time-increasing mixing layer depths after surface runoff, under a controlled drainage condition with a lower drainage height at the bottom. The simulation results agree well with the observed data. These results suggest that the hybrid model can accurately simulate solute transfer from initially saturated soil into surface runoff under controlled drainage conditions, and that the prediction with an increasing mixing layer depth is better than that with a constant depth in the experiment with the lower drainage condition. Because a lower drainage condition and a deeper ponded water depth delay the start of runoff, more solute from the mixing layer must supply the surface water, and this larger rate of change is reflected in the increasing mixing layer depth.

  17. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time-evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
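
    The first step, extracting a parameter time-series with a Bayesian filter, can be sketched with a Kalman filter on an augmented state that appends the unknown parameter to the state vector. The scalar system below is a toy stand-in (the paper uses the Lorenz-96 model) and all noise levels are assumptions.

```python
import numpy as np

# Truth: x_{t+1} = a * x_t + noise, observed as y_t = x_t + noise; a is unknown.
rng = np.random.default_rng(5)
a_true, T = 0.9, 800
x, ys = 1.0, []
for _ in range(T):
    x = a_true * x + 0.1 * rng.normal()
    ys.append(x + 0.1 * rng.normal())

# Extended Kalman filter on the augmented state s = [x, a].
s = np.array([0.0, 0.5])                      # initial state and parameter guess
P = np.eye(2)
Q = np.diag([1e-2, 1e-5])                     # small random walk allowed on a
H = np.array([[1.0, 0.0]])                    # only x is observed
R = 1e-2
for y in ys:
    F = np.array([[s[1], s[0]], [0.0, 1.0]])  # Jacobian of f(x, a) = (a*x, a)
    s = np.array([s[1] * s[0], s[1]])         # predict
    P = F @ P @ F.T + Q
    K = P @ H.T / (H @ P @ H.T + R)           # Kalman gain
    s = s + (K * (y - s[0])).ravel()          # update
    P = (np.eye(2) - K @ H) @ P
a_hat = s[1]
```

Run over the historical record, the sequence of filtered values of a is exactly the kind of parameter time-series that the diffusion forecast algorithm would then learn from.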

  18. An Investigation on the Sensitivity of the Parameters of Urban Flood Model

    NASA Astrophysics Data System (ADS)

    M, A. B.; Lohani, B.; Jain, A.

    2015-12-01

    Global climatic change has triggered weather patterns that lead to heavy and sudden rainfall in different parts of the world. The impact of heavy rainfall is especially severe on urban areas, in the form of urban flooding. In order to understand the effect of flooding induced by heavy rainfall, it is necessary to model the entire flooding scenario accurately, which is now becoming possible with the availability of high-resolution airborne LiDAR data and other real-time observations. However, there is little understanding of the optimal use of these data or of the effect of other parameters on the performance of a flood model. This study aims to develop understanding of these issues. Specifically, the aims of this study are to (i) understand how the use of high-resolution LiDAR data improves the performance of an urban flood model, and (ii) understand the sensitivity of various hydrological parameters in urban flood modelling. In this study, flooding in urban areas due to heavy rainfall is modelled with the Indian Institute of Technology (IIT) Kanpur, India, as the study site. The existing model MIKE FLOOD, which is accepted by the Federal Emergency Management Agency (FEMA), is used along with high-resolution airborne LiDAR data. Once the model is set up, it is run while varying parameters such as the resolution of the Digital Surface Model (DSM), Manning's roughness, initial losses, catchment description, concentration time, and runoff reduction factor, and the results obtained from the model are compared with field observations. The parametric study carried out in this work demonstrates that the selection of the catchment description plays a very important role in urban flood modelling. Results also show the significant impact of DSM resolution, initial losses, and concentration time on the urban flood model. This study will help in understanding the effects of the various parameters that a flood model must include for accurate performance.

  19. Coupling model of aerobic waste degradation considering temperature, initial moisture content and air injection volume.

    PubMed

    Ma, Jun; Liu, Lei; Ge, Sai; Xue, Qiang; Li, Jiangshan; Wan, Yong; Hui, Xinminnan

    2018-03-01

    A quantitative description of aerobic waste degradation is important for evaluating landfill waste stability and for economic management. This research aimed to develop a coupling model to predict the degree of aerobic waste degradation. On the basis of a first-order kinetic equation and the law of conservation of mass, we developed a coupling model of aerobic waste degradation that considers temperature, initial moisture content and air injection volume to simulate and predict the chemical oxygen demand (COD) in the leachate. Three different laboratory experiments on aerobic waste degradation were simulated to test the model's applicability. Parameter sensitivity analyses were conducted to evaluate the reliability of the parameters. The coupling model can simulate aerobic waste degradation, and the simulations agreed with the corresponding experimental results. The comparison of experiments and simulations demonstrated that the coupling model is a new approach to predicting aerobic waste degradation and can serve as a basis for selecting an economical air injection volume and appropriate management in the future.
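
    The first-order kinetic backbone of such a model can be sketched as follows; the Arrhenius-type temperature correction and all constants are illustrative assumptions, not the calibrated model, which additionally couples moisture and aeration effects.

```python
import numpy as np

def cod_first_order(t, cod0, k20, temp, theta=1.07):
    """Illustrative first-order leachate COD decay with a temperature
    correction k = k20 * theta**(T - 20); constants are assumptions."""
    k = k20 * theta ** (temp - 20.0)
    return cod0 * np.exp(-k * np.asarray(t, dtype=float))

# Warmer operation accelerates the predicted decay.
cod_25 = cod_first_order(30.0, cod0=8000.0, k20=0.05, temp=25.0)
cod_35 = cod_first_order(30.0, cod0=8000.0, k20=0.05, temp=35.0)
```

In the full coupled model, k would further depend on the initial moisture content and the air injection volume, which is what allows the model to rank aeration strategies.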

  20. Posterior uncertainty of GEOS-5 L-band radiative transfer model parameters and brightness temperatures after calibration with SMOS observations

    NASA Astrophysics Data System (ADS)

    De Lannoy, G. J.; Reichle, R. H.; Vrugt, J. A.

    2012-12-01

    Simulated L-band (1.4 GHz) brightness temperatures are very sensitive to the values of the parameters in the radiative transfer model (RTM). We assess the optimum RTM parameter values and their (posterior) uncertainty in the Goddard Earth Observing System (GEOS-5) land surface model using observations of multi-angular brightness temperature over North America from the Soil Moisture and Ocean Salinity (SMOS) mission. Two different parameter estimation methods are compared: (i) a particle swarm optimization (PSO) approach, and (ii) an MCMC simulation procedure using the differential evolution adaptive Metropolis (DREAM) algorithm. Our results demonstrate that both methods provide similar "optimal" parameter values. Yet, DREAM exhibits better convergence properties, resulting in a reduced spread of the posterior ensemble. The posterior parameter distributions derived with both methods are used for predictive uncertainty estimation of brightness temperature. This presentation will highlight our model-data synthesis framework and summarize our initial findings.

  1. Controlled release drug delivery via polymeric microspheres: a neat application of the spherical diffusion equation

    NASA Astrophysics Data System (ADS)

    Ormerod, C. S.; Nelson, M.

    2017-11-01

    Various applied mathematics undergraduate skills are demonstrated via an adaptation of Crank's axisymmetric spherical diffusion model. By the introduction of a one-parameter Heaviside initial condition, the pharmaceutically problematic initial mass flux is attenuated. Quantities germane to the pharmaceutical industry are examined and the model is tested with data derived from industry journals. A binomial algorithm for the acceleration of alternating sequences is demonstrated. The model is accompanied by a MAPLE worksheet for further student exploration.
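
    The baseline for Crank's axisymmetric spherical model is the classical series solution for fractional release from a uniformly loaded sphere into a perfect sink; the paper's Heaviside initial condition modifies the uniform loading assumed in this sketch.

```python
import numpy as np

def fractional_release(t, D, a, n_terms=500):
    """Crank's series for fractional release from a uniformly loaded sphere
    of radius a and diffusivity D into a perfect sink:
        M_t / M_inf = 1 - (6/pi^2) * sum_{n>=1} exp(-D n^2 pi^2 t / a^2) / n^2
    """
    n = np.arange(1, n_terms + 1)
    terms = np.exp(-D * n**2 * np.pi**2 * t / a**2) / n**2
    return 1.0 - (6.0 / np.pi**2) * terms.sum()

f_early = fractional_release(1.0, D=1e-3, a=1.0)
f_late = fractional_release(500.0, D=1e-3, a=1.0)
```

The series converges slowly at small times, which is where the acceleration technique for alternating sequences mentioned in the abstract becomes useful.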

  2. StePar: an automatic code for stellar parameter determination

    NASA Astrophysics Data System (ADS)

    Tabernero, H. M.; González Hernández, J. I.; Montes, D.

    2013-05-01

    We introduce StePar, a new code for determining stellar atmospheric parameters (T_{eff}, log{g}, ξ and [Fe/H]) in an automated way. StePar employs the 2002 version of the MOOG code (Sneden 1973) and a grid of Kurucz ATLAS9 plane-parallel model atmospheres (Kurucz 1993). The atmospheric parameters are obtained from the EWs of 263 Fe I and 36 Fe II lines (taken from Sousa et al. 2008, A&A, 487, 373), iterating until the excitation and ionization equilibria are fulfilled. StePar uses a Downhill Simplex method that minimizes a quadratic form composed of the excitation and ionization equilibrium conditions. The atmospheric parameters determined by StePar are independent of the initial guess of the stellar parameters for the problem star; we therefore employ the canonical solar values as the initial input. StePar can only deal with FGK stars from F6 to K4; it cannot handle fast rotators, veiled spectra, very metal-poor stars, or spectra with a signal-to-noise ratio below 30. Optionally, StePar can operate with MARCS models (Gustafsson et al. 2008, A&A, 486, 951) instead of the Kurucz ATLAS9 models; in addition, Turbospectrum (Alvarez & Plez 1998, A&A, 330, 1109) can replace the MOOG code during the parameter determination. StePar has been used to determine stellar parameters in several studies (Tabernero et al. 2012, A&A, 547, A13; Wisniewski et al. 2012, AJ, 143, 107), and it is being used to obtain parameters for FGK stars from the Gaia-ESO Survey.
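
    The Downhill Simplex step can be sketched by minimizing a quadratic form built from hypothetical linearized stand-ins for the equilibrium conditions; in StePar each function evaluation instead runs MOOG on a model atmosphere, so everything below is a toy.

```python
import numpy as np
from scipy.optimize import minimize

def equilibrium_conditions(p):
    """Hypothetical linearized stand-ins for the diagnostics driven to zero:
    the abundance vs. excitation-potential slope, the Fe I - Fe II abundance
    difference, and the abundance vs. reduced-EW slope."""
    teff, logg, xi = p
    return np.array([1e-4 * (teff - 5777.0),
                     0.5 * (logg - 4.44),
                     0.3 * (xi - 1.0)])

def quadratic_form(p):
    c = equilibrium_conditions(p)
    return float(c @ c)

# Downhill Simplex (Nelder-Mead) from a deliberately wrong starting point.
res = minimize(quadratic_form, x0=[5000.0, 4.0, 1.5], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-12, "maxiter": 5000})
```

Because the quadratic form vanishes only where all three conditions hold, the simplex converges to the equilibrium solution regardless of the starting point, mirroring StePar's insensitivity to the initial guess.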

  3. AQUATOX Frequently Asked Questions

    EPA Pesticide Factsheets

    Capabilities, Installation, Source Code, Example Study Files, Biotic State Variables, Initial Conditions, Loadings, Volume, Sediments, Parameters, Libraries, Ecotoxicology, Waterbodies, Link to Watershed Models, Output, Metals, Troubleshooting

  4. Initiation and structures of gaseous detonation

    NASA Astrophysics Data System (ADS)

    Vasil'ev, A. A.; Vasiliev, V. A.

    2018-03-01

    The analysis of the initiation of a detonation wave (DW) and the emergence of the multi-front structure of the DW front are presented. It is shown that the structure of the DW arises spontaneously at the stage when the wave is strongly overdriven. The hypothesis of the gradual growth of small perturbations on an initially smooth initiating blast wave, traditionally used in the numerical simulation of multi-front detonation, does not agree with the experimental data. The instability of the DW is due to the chemical energy release Q of the combustible mixture. A technique for determining the Q-value of a mixture is proposed, based on reconstructing the trajectory of the expanding wave within the strong-explosion model. The wave trajectory at the critical initiation of a multi-front detonation in a combustible mixture is compared with the trajectory of an explosive wave from the same initiator in an inert mixture whose gas-dynamic parameters are equivalent to those of the combustible mixture. The energy release of a mixture is defined as the difference between the joint energy release of the initiator and the fuel mixture during critical initiation and the energy release of the initiator when the blast wave is excited in an inert mixture. Observable deviations of the experimental profile of Q from existing model representations were found.

  5. Self-similar solutions to isothermal shock problems

    NASA Astrophysics Data System (ADS)

    Deschner, Stephan C.; Illenseer, Tobias F.; Duschl, Wolfgang J.

    We investigate exact solutions for isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions where possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms. We use similarity methods to derive a system of ordinary differential equations (ODEs) yielding exact solutions for power-law density distributions as initial conditions. Further, the system of ODEs accounts for implosion problems (IPs) as well as explosion problems (EPs) by changing the initial or boundary conditions, respectively. Taking genuinely isothermal approximations into account leads to additional insights into EPs in contrast to earlier models. We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that, due to this parameter, a constant initial density is not allowed for isothermal EPs. Reasonable restrictions on this parameter are given. Both the genuinely isothermal implosion problem and the explosion problem are solved for the first time.

  6. A hydroclimatological approach to predicting regional landslide probability using Landlab

    NASA Astrophysics Data System (ADS)

    Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.

    2018-02-01

    We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
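
    The Monte Carlo core, sampling uncertain soil and hydrologic inputs and counting how often the infinite-slope factor of safety drops below one, can be sketched as follows; the parameter distributions are illustrative assumptions, not the gridded estimates used in the study.

```python
import numpy as np

def p_failure(slope_deg, n=100_000, seed=42):
    """Monte Carlo probability of infinite-slope failure (FS < 1) using
        FS = c' / (gamma z sin(t) cos(t)) + (1 - w gamma_w/gamma) tan(phi)/tan(t)
    with relative wetness w = h_w / z. Distributions are illustrative."""
    rng = np.random.default_rng(seed)
    theta = np.radians(slope_deg)
    c = rng.uniform(2e3, 8e3, n)                    # root + soil cohesion (Pa)
    phi = np.radians(rng.uniform(28.0, 38.0, n))    # internal friction angle
    w = rng.uniform(0.0, 1.0, n)                    # relative wetness
    gamma, gamma_w, z = 18e3, 9.8e3, 1.5            # unit weights (N/m^3), depth (m)
    fs = (c / (gamma * z * np.sin(theta) * np.cos(theta))
          + (1.0 - w * gamma_w / gamma) * np.tan(phi) / np.tan(theta))
    return np.mean(fs < 1.0)

p25, p45 = p_failure(25.0), p_failure(45.0)         # steeper slope, higher hazard
```

In the full model the wetness term comes from the annual maximum recharge supplied by the macroscale hydrologic model, which is what makes the resulting probability an annual one.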

  7. Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale

    NASA Astrophysics Data System (ADS)

    Hakala, Kirsti; Markstrom, Steven; Hay, Lauren

    2015-04-01

The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and to facilitate the application of simulations at the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort of establishing a robust NHM.

  8. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization is a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it can be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
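The contrast the abstract draws between a local least-squares fit and a global swarm search can be illustrated on any multimodal objective. The sketch below is generic (a 1-D Rastrigin stand-in, not the HMLVS calibration model), and the PSO coefficients are conventional textbook choices, not the paper's.

```python
import math, random

def rastrigin(x):
    """1-D Rastrigin function: many local minima, global minimum f(0) = 0."""
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def pso(f, lo=-5.12, hi=5.12, n=50, iters=300, seed=1):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                       # personal best positions
    pval = [f(x) for x in xs]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi], pval[gi]   # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                                 + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

def gradient_descent(f, x0, lr=1e-3, iters=2000, h=1e-6):
    """Naive local optimizer; stalls in whichever basin x0 lies in."""
    x = x0
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)   # central finite difference
        x -= lr * g
    return x, f(x)

x_pso, f_pso = pso(rastrigin)
x_gd, f_gd = gradient_descent(rastrigin, x0=3.2)  # starts near a local minimum
print(f_pso, f_gd)
```

The local method converges to the nearest basin (a poor local minimum), while the swarm's global-best mechanism typically escapes to a much better one.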

  9. Determination of ionospheric electron density profiles from satellite UV (Ultraviolet) emission measurements, fiscal year 1984

    NASA Astrophysics Data System (ADS)

    Daniell, R. E.; Strickland, D. J.; Decker, D. T.; Jasperse, J. R.; Carlson, H. C., Jr.

    1985-04-01

    The possible use of satellite ultraviolet measurements to deduce the ionospheric electron density profile (EDP) on a global basis is discussed. During 1984 comparisons were continued between the hybrid daytime ionospheric model and the experimental observations. These comparison studies indicate that: (1) the essential features of the EDP and certain UV emissions can be modelled; (2) the models are sufficiently sensitive to input parameters to yield poor agreement with observations when typical input values are used; (3) reasonable adjustments of the parameters can produce excellent agreement between theory and data for either EDP or airglow but not both; and (4) the qualitative understanding of the relationship between two input parameters (solar flux and neutral densities) and the model EDP and airglow features has been verified. The development of a hybrid dynamic model for the nighttime midlatitude ionosphere has been initiated. This model is similar to the daytime hybrid model, but uses the sunset EDP as an initial value and calculates the EDP as a function of time through the night. In addition, a semiempirical model has been developed, based on the assumption that the nighttime EDP is always well described by a modified Chapman function. This model has great simplicity and allows the EDP to be inferred in a straightforward manner from optical observations. Comparisons with data are difficult, however, because of the low intensity of the nightglow.
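The semiempirical nighttime model above rests on a (modified) Chapman function; a minimal sketch of the standard Chapman-layer profile is below, with hypothetical peak density, peak altitude, and scale height.

```python
import math

def chapman(h_km, nm=1.0e12, hm=300.0, scale_h=50.0):
    """Chapman-layer electron density [m^-3] at altitude h_km [km].
    nm: peak density, hm: peak altitude [km], scale_h: scale height [km]."""
    z = (h_km - hm) / scale_h
    return nm * math.exp(0.5 * (1.0 - z - math.exp(-z)))

# Density peaks at hm and falls off above (exponentially) and below (sharply).
for h in (200.0, 300.0, 400.0, 500.0):
    print(h, chapman(h))
```

Inferring the EDP from airglow then amounts to fitting the few parameters (peak density, peak height, scale height) to the optical observations.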

  10. Sensitivity of Beam Parameters to a Station C Solenoid Scan on Axis II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze, Martin E.

Magnet scans are a standard technique for determining beam parameters in accelerators. Beam parameters are inferred from spot size measurements using a model of the beam optics. The sensitivity of the measured beam spot size to the beam parameters is investigated for typical DARHT Axis II beam energies and currents. In a typical S4 solenoid scan, the downstream transport is tuned to achieve a round beam at Station C with an envelope radius of about 1.5 cm and a very small divergence with S4 off. The typical beam energy and current are 16.0 MeV and 1.625 kA. Figures 1-3 show the sensitivity of the beam size at Station C to the emittance, initial radius and initial angle, respectively. To better understand the relative sensitivity of the beam size to the emittance, initial radius and initial angle, linear regressions were performed for each parameter as a function of the S4 setting. The results are shown in Figure 4. The measured slope was scaled to have a maximum value of 1 in order to present the relative sensitivities in a single plot. Figure 4 clearly shows that the beam size at the minimum of the S4 scan is most sensitive to emittance and relatively insensitive to initial radius and angle, as expected. The beam emittance is also very sensitive to the beam size of the converging beam and becomes insensitive to the beam size of the diverging beam. Measurements of the beam size of the diverging beam provide the greatest sensitivity to the initial beam radius and, to a lesser extent, the initial beam angle. The converging beam size is initially very sensitive to the emittance and initial angle at low S4 currents. As the S4 current is increased, the sensitivity to the emittance remains strong while the sensitivity to the initial angle diminishes.

  11. Mathematical modeling of a Ti:sapphire solid-state laser

    NASA Technical Reports Server (NTRS)

    Swetits, John J.

    1987-01-01

    The project initiated a study of a mathematical model of a tunable Ti:sapphire solid-state laser. A general mathematical model was developed for the purpose of identifying design parameters which will optimize the system, and serve as a useful predictor of the system's behavior.

  12. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
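The bound-narrowing idea behind LOBARE — sample the current parameter space, score the samples, and shrink the bounds around the fittest "parent" sets — can be sketched generically. This is not the authors' algorithm (there is no Bayesian weighting here), just the iterative narrowing skeleton applied to a toy quadratic loss.

```python
import random

def narrow_bounds(loss, bounds, n_samples=200, keep=0.2, rounds=5, seed=0):
    """Iteratively shrink parameter bounds around the best-scoring samples.
    bounds: list of (lo, hi) tuples, one per parameter."""
    rng = random.Random(seed)
    for _ in range(rounds):
        samples = [[rng.uniform(lo, hi) for lo, hi in bounds]
                   for _ in range(n_samples)]
        samples.sort(key=loss)
        elite = samples[:max(2, int(keep * n_samples))]   # fittest "parents"
        # New bounds = envelope of the elite samples, per parameter.
        bounds = [(min(s[j] for s in elite), max(s[j] for s in elite))
                  for j in range(len(bounds))]
    return bounds

# Toy objective with its optimum at (2.0, -1.0).
loss = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
final = narrow_bounds(loss, [(-10.0, 10.0), (-10.0, 10.0)])
print(final)
```

Each round concentrates the sampling effort, so convergence uses far fewer model evaluations than exhaustively sampling the initial space.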

  13. Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.

    PubMed

    Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H

    2018-06-01

    RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
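For contrast with the ensemble treatment in the paper, the traditional two-state analysis reduces each structure to folded/denatured fractions set by the enthalpy and entropy changes. A minimal unimolecular sketch follows (parameter values are hypothetical; bimolecular helices would additionally need the strand concentration).

```python
import math

R = 1.987e-3  # gas constant [kcal mol-1 K-1]

def frac_folded(t_celsius, dh=-40.0, ds=-0.110):
    """Two-state fraction folded at t_celsius for a unimolecular structure.
    dh [kcal/mol] and ds [kcal mol-1 K-1] are the folding enthalpy/entropy."""
    t = t_celsius + 273.15
    dg = dh - t * ds                  # folding free energy change
    k_eq = math.exp(-dg / (R * t))    # folded/unfolded equilibrium constant
    return k_eq / (1.0 + k_eq)

tm_c = -40.0 / -0.110 - 273.15   # melting temperature: dg = 0, so Tm = dH/dS
print(round(tm_c, 1), frac_folded(tm_c))
```

A partition-function treatment like the one in the paper replaces this single folded state with a weighted sum over partially denatured conformations, which is why the fitted parameters differ.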

  14. The relative pose estimation of aircraft based on contour model

    NASA Astrophysics Data System (ADS)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model of the target, which are divided into 40 classes by clustering and LDA analysis. The target contour in each image is then extracted and its Pseudo-Zernike Moments (PZM) are computed, so that a model library is constructed in an offline mode. Next, the projection contour in the model library that most resembles the target silhouette in the current image is selected by comparing PZMs; similarity transformation parameters are then generated by applying shape context matching to the silhouette sampling locations, from which the identification parameters of the target are derived. These identification parameters are converted to relative pose parameters, which serve as the initial values for an iterative refinement algorithm, since they lie in the neighborhood of the actual pose. At last, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the ultimate relative pose parameters.

  15. A new universal dynamic model to describe eating rate and cumulative intake curves

    PubMed Central

    Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B

    2017-01-01

Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) in which eating rates were deliberately changed. Baseline data were used to compare the quality of the model's fit with that of the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4, compared with 581.6 ± 563.5 for the quadratic model, a 43% improvement in fit. Individual comparison demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximal food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could, the maximal intake parameter was approximately double that of participants directed to eat as much as they felt comfortable with. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others produced only a satiation phase. 
Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377
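A saturating-exponential curve illustrates how an initial eating rate and a maximal intake shape a cumulative intake curve. This sketch reproduces only the satiation phase, not the gustatory stimulation phase of the authors' model, and the parameter values are hypothetical.

```python
import math

def intake(t_min, r0=60.0, imax=400.0):
    """Cumulative intake [g] after t_min minutes, as a saturating exponential.
    r0: initial eating rate [g/min]; imax: maximal food intake [g].
    The implied effective eating duration is imax / r0 minutes."""
    return imax * (1.0 - math.exp(-r0 * t_min / imax))

# The curve starts with slope ~r0 and flattens toward imax (satiation).
print(intake(1.0), intake(30.0))
```

A stimulation phase would add an initial period of accelerating intake before this decelerating approach to the maximum.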

  16. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

The objective of this paper is to define empirical parameters (or closure models) for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a non-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASA/Sunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals. 
These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including thermal dispersion and an estimate of tortuosity effects), and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when incorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.

  17. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

Previous sensitivity analyses have limited accuracy and reference value because their mathematical models are relatively simple, changes in load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. Through deriving the time-varying coefficient matrix and time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm, respectively. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is sufficiently accurate. Then, the sensitivity function time-history curves of seventeen parameters are obtained from the state vector time-history curves of the step response. The maximum value of the displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analyzed. 
The sensitivity index values of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are then verified experimentally on a test platform of the hydraulic drive unit, and the experimental research shows that the sensitivity analysis results obtained through simulation approximate the test results. This research characterizes the sensitivity of each parameter of the hydraulic drive unit and identifies the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for the control compensation and structural optimization of the hydraulic drive unit.
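The two sensitivity indexes used above — the maximum displacement variation percentage and the sum of absolute displacement variations over the sampling time — can be computed from a nominal and a perturbed response as sketched below (the step-response samples are toy values, not the paper's data).

```python
def sensitivity_indexes(nominal, perturbed):
    """Two sensitivity indexes over a sampled displacement response:
    the maximum percentage variation, and the sum of absolute variations."""
    diffs = [abs(p - n) for n, p in zip(nominal, perturbed)]
    max_pct = max(d / abs(n) * 100.0
                  for d, n in zip(diffs, nominal) if n != 0.0)
    return max_pct, sum(diffs)

# Toy step responses [mm] for a nominal run and a perturbed-parameter run.
nominal = [0.0, 1.2, 1.8, 1.95, 2.0, 2.0]
perturbed = [0.0, 1.1, 1.7, 1.9, 1.98, 2.0]
print(sensitivity_indexes(nominal, perturbed))
```

Repeating this for each of the seventeen parameters under each working condition yields the histograms described in the abstract.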

  18. Applications of Monte Carlo method to nonlinear regression of rheological data

    NASA Astrophysics Data System (ADS)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

In rheological studies, one often needs to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem can be addressed by simulated annealing (SA), such Monte Carlo (MC) methods require adjustable parameters that are typically determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective parameter values for complicated rheological models such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
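A simplified simulated-annealing fit of the Carreau-Yasuda model, perturbing parameters multiplicatively so the random walk is effectively in log scale, might look like the following sketch. The cooling schedule and step size are hypothetical choices, not the paper's.

```python
import math, random

def carreau_yasuda(gd, eta0, etainf, lam, a, n):
    """Steady shear viscosity eta(gamma_dot) for the Carreau-Yasuda model."""
    return etainf + (eta0 - etainf) * (1.0 + (lam * gd) ** a) ** ((n - 1.0) / a)

def log_residual(params, data):
    """Sum of squared residuals between log(model) and log(data)."""
    try:
        return sum((math.log(eta) - math.log(carreau_yasuda(gd, *params))) ** 2
                   for gd, eta in data)
    except (OverflowError, ValueError):   # wild proposals are simply rejected
        return float("inf")

def anneal(data, p0, iters=20000, t0=1.0, seed=0):
    """Simplified simulated annealing: multiplicative perturbations
    (a random walk in log scale) with a geometric cooling schedule."""
    rng = random.Random(seed)
    p, e = list(p0), log_residual(p0, data)
    best, ebest = p[:], e
    for k in range(iters):
        temp = t0 * 0.9995 ** k
        q = [pi * math.exp(rng.gauss(0.0, 0.05)) for pi in p]
        eq = log_residual(q, data)
        if eq < e or rng.random() < math.exp((e - eq) / max(temp, 1e-12)):
            p, e = q, eq
            if e < ebest:
                best, ebest = p[:], e
    return best, ebest

# Synthetic "measurements" from known parameters, refit from a rough guess.
true = (1.0e4, 1.0, 2.0, 2.0, 0.4)    # eta0, etainf, lambda, a, n
data = [(10.0 ** i, carreau_yasuda(10.0 ** i, *true)) for i in range(-3, 5)]
guess = (1.0e3, 0.1, 1.0, 1.5, 0.6)
fit, err = anneal(data, guess)
print(err)
```

Working in log space keeps all parameters positive and gives each decade of shear rate equal weight in the residual, which is the point the abstract makes about logarithmic scales.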

  19. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

The compression index and recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The results show that the proposed ANN model successfully predicts the compression index; however, the predicted recompression index values are less satisfactory.
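A one-hidden-layer network with the four inputs and two outputs described above can be sketched from scratch. The synthetic "soil" rows and the target formula below are made up purely so the example runs end to end; they are not the paper's data, and in practice a library such as scikit-learn would replace the hand-written gradient descent.

```python
import math, random

def train_mlp(data, hidden=6, lr=0.05, epochs=1200, seed=0):
    """One-hidden-layer tanh network mapping 4 soil index properties
    to 2 compressibility parameters, trained by per-sample gradient descent."""
    rng = random.Random(seed)
    n_in, n_out = 4, 2
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        y = [sum(w * hi for w, hi in zip(row, h)) + b
             for row, b in zip(W2, b2)]
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = [yi - ti for yi, ti in zip(y, t)]   # dL/dy for L = 0.5*sum((y-t)^2)
            dh = [sum(dy[k] * W2[k][j] for k in range(n_out)) * (1.0 - h[j] ** 2)
                  for j in range(hidden)]
            for k in range(n_out):
                for j in range(hidden):
                    W2[k][j] -= lr * dy[k] * h[j]
                b2[k] -= lr * dy[k]
            for j in range(hidden):
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
                b1[j] -= lr * dh[j]

    return lambda x: forward(x)[1]

# Synthetic rows: (natural water content, initial void ratio, liquid limit,
# plasticity index), all scaled to [0, 1]; the (Cc, Cr) targets are invented.
gen = random.Random(1)
data = []
for _ in range(40):
    x = [gen.random() for _ in range(4)]
    cc = 0.009 * (50.0 * x[2] + 30.0) * (0.5 + x[1])   # loosely Cc ~ f(LL, e0)
    data.append((x, [cc, cc / 6.0]))

predict = train_mlp(data)
mse = sum(sum((yi - ti) ** 2 for yi, ti in zip(predict(x), t))
          for x, t in data) / len(data)
print(mse)
```

The combined two-output structure mirrors the paper's design of predicting the compression and recompression indices in one network.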

  20. From global to local: exploring the relationship between parameters and behaviors in models of electrical excitability.

    PubMed

    Fletcher, Patrick; Bertram, Richard; Tabak, Joel

    2016-06-01

Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K+ conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K+ conductances were captured across a wide variety of contexts of model parameters. For each type of K+ conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20 parameter) spaces of excitable cell and signaling models.

  1. On the generation of climate model ensembles

    NASA Astrophysics Data System (ADS)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.

    2014-10-01

    Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.

  2. Interdisciplinary Modeling and Dynamics of Archipelago Straits

    DTIC Science & Technology

    2009-01-01

modeling, tidal modeling and multi-dynamics nested domains and non-hydrostatic modeling WORK COMPLETED Realistic Multiscale Simulations, Real-time...six state variables (chlorophyll, nitrate, ammonium, detritus, phytoplankton, and zooplankton) were needed to initialize simulations. Using biological...parameters from literature, climatology from World Ocean Atlas data for nitrate and chlorophyll profiles extracted from satellite data, a first

  3. Hybrid Inflation: Multi-field Dynamics and Cosmological Constraints

    NASA Astrophysics Data System (ADS)

    Clesse, Sébastien

    2011-09-01

The dynamics of hybrid models is usually approximated by the evolution of a scalar field slowly rolling along a nearly flat valley. Inflation ends with a waterfall phase, due to a tachyonic instability. This final phase is usually assumed to be nearly instantaneous. In this thesis, we go beyond these approximations and analyze the exact two-field dynamics of hybrid models. Several effects are brought to light: 1) possible slow-roll violations along the valley preclude inflation at small field values; provided super-Planckian field values, the scalar spectrum of the original model is red, in agreement with observations. 2) The initial field values are not fine-tuned along the valley but also occupy a considerable part of the field space exterior to it; they form a structure with fractal boundaries. Using Bayesian methods, their distribution in the whole parameter space is studied, and natural bounds on the potential parameters are derived. 3) For the original model, inflation is found to continue for more than 60 e-folds along waterfall trajectories in some part of the parameter space. The scalar power spectrum of adiabatic perturbations is modified and is generically red, possibly in agreement with CMB observations; topological defects are conveniently stretched outside the observable Universe. 4) The analysis of the initial conditions is extended to the case of a closed Universe, in which the initial singularity is replaced by a classical bounce. In the third part of the thesis, we study how the present CMB constraints on the cosmological parameters could be improved with the observation of the 21cm cosmic background by future giant radio telescopes. Forecasts are determined for a characteristic Fast Fourier Transform Telescope, using both Fisher matrix and MCMC methods.

  4. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, so its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. One-qubit estimation is also analysed in detail; in particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.

  5. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    NASA Astrophysics Data System (ADS)

    Murphy, M. J.; Simpson, R. L.; Urtiew, P. A.; Souers, P. C.; Garcia, F.; Garza, R. G.

    1996-05-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, AL, and NTO with a polyurethane binder. The three term ignition and growth of reaction model parameters (ignition+two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/AL/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code, NLQPEB/GLO, was used to determine the "best" set of coefficients for the three term Lee-Tarver ignition and growth of reaction model.

  6. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  7. Comparison of different stomatal conductance algorithms for ozone flux modelling [Proceedings

    Treesearch

    P. Buker; L. D. Emberson; M. R. Ashmore; G. Gerosa; C. Jacobs; W. J. Massman; J. Muller; N. Nikolov; K. Novak; E. Oksanen; D. De La Torre; J. -P. Tuovinen

    2006-01-01

    The ozone deposition model (DO3SE) that has been developed and applied within the EMEP photooxidant model (Emberson et al., 2000; Simpson et al., 2003) currently estimates stomatal ozone flux using a stomatal conductance (gs) model based on the multiplicative algorithm initially developed by Jarvis (1976). This model links gs to environmental and phenological parameters...
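    A multiplicative Jarvis-type gs algorithm scales a maximum conductance by environmental response functions, each between 0 and 1. The sketch below is illustrative only: the response shapes and constants are assumptions, not the calibrated DO3SE functions.

```python
import numpy as np

def jarvis_gs(par, temp, vpd, gmax=450.0, fmin=0.1):
    """Multiplicative stomatal conductance in the spirit of Jarvis (1976):
    gs = gmax * f_light * max(fmin, f_temp * f_vpd).
    Response functions and constants below are illustrative assumptions."""
    f_light = 1.0 - np.exp(-0.006 * par)                    # PAR in umol m-2 s-1
    f_temp = max(0.0, 1.0 - ((temp - 21.0) / 18.0) ** 2)    # optimum near 21 C
    f_vpd = float(np.clip((3.1 - vpd) / (3.1 - 1.0), 0.0, 1.0))  # VPD in kPa
    return gmax * f_light * max(fmin, f_temp * f_vpd)

print(jarvis_gs(par=1000.0, temp=21.0, vpd=0.8))  # near gmax under benign conditions
```

The `fmin` floor mimics the minimum daytime conductance used in such schemes; darkness (`par = 0`) still closes stomata entirely via the light response.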

  8. Dynamical recovery of SU(2) symmetry in the mass-quenched Hubbard model

    NASA Astrophysics Data System (ADS)

    Du, Liang; Fiete, Gregory A.

    2018-02-01

    We use nonequilibrium dynamical mean-field theory with iterative perturbation theory as an impurity solver to study the recovery of SU(2) symmetry in real time following a hopping integral parameter quench from a mass-imbalanced to a mass-balanced single-band Hubbard model at half filling. A dynamical order parameter γ (t ) is defined to characterize the evolution of the system towards SU(2) symmetry. By comparing the momentum-dependent occupation from an equilibrium calculation [with the SU(2) symmetric Hamiltonian after the quench at an effective temperature] with the data from our nonequilibrium calculation, we conclude that the SU(2)-symmetry-recovered state is a thermalized state. Further evidence from the evolution of the density of states supports this conclusion. We find the order parameter in the weak Coulomb interaction regime undergoes an approximate exponential decay. We numerically investigate the interplay of the relevant parameters (initial temperature, Coulomb interaction strength, initial mass-imbalance ratio) and their combined effect on the thermalization behavior. Finally, we study the evolution of the order parameter as the hopping parameter is changed with either a linear ramp or a pulse. Our results can be useful in strategies to engineer the relaxation behavior of interacting quantum many-particle systems.

  9. Parameter optimization for surface flux transport models

    NASA Astrophysics Data System (ADS)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.
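    A real-coded genetic algorithm of the kind used here evolves a population of parameter sets toward the best fit between simulated and observed data. The sketch below is a generic selection/crossover/mutation loop; the two-parameter toy forward model stands in for the flux transport code, and all GA settings are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy forward model standing in for the surface flux transport code: two
# parameters mapped to a synthetic observable curve
def forward(p):
    x = np.linspace(0, 1, 50)
    return p[0] * np.sin(np.pi * x) + p[1] * x

obs = forward(np.array([11.0, -2.5]))          # "observations" from a known truth

def fitness(p):
    return -np.mean((forward(p) - obs) ** 2)   # higher is better

# minimal real-coded GA: elitism, tournament selection, blend crossover,
# Gaussian mutation
pop = rng.uniform(-20, 20, size=(60, 2))
for gen in range(80):
    fit = np.array([fitness(p) for p in pop])
    new = [pop[fit.argmax()].copy()]            # keep the elite unchanged
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 4).reshape(2, 2)
        a = pop[i[fit[i].argmax()]]             # tournament winners
        b = pop[j[fit[j].argmax()]]
        w = rng.uniform(size=2)
        new.append(w * a + (1 - w) * b + rng.normal(0, 0.3, 2))
    pop = np.array(new)

best = pop[np.array([fitness(p) for p in pop]).argmax()]
print(best)  # close to the truth [11.0, -2.5]
```

In the paper's setting the fitness would be the (latitude-weighted) misfit between observed and simulated butterfly diagrams.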

  10. Serial robot for the trajectory optimization and error compensation of TMT mask exchange system

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Feifan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    The mask exchange system is a main part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). Following the design concept of the TMT mask exchange system, this paper introduces a preliminary design based on the IRB 140 robot. The stiffness model of the IRB 140 in SolidWorks was analyzed under different gravity vectors for subsequent error compensation. To find a suitable mounting location and plan the motion path, the robot and mask cassette models were imported into the MOBIE model and several candidate schemes were simulated, yielding an initial installation position and routing. Based on these initial parameters, the IRB 140 robot was operated to simulate the path and estimate the mask exchange time. Meanwhile, MATLAB and ADAMS were used to perform simulation analysis and optimize the route, acquiring the kinematic parameters for comparison with the experimental results. The simulations and experiments described in the paper provide a theoretical reference for efficiently improving the structure of the mask exchange system, optimizing the path, and increasing the precision of the robot's positioning.

  11. Analysis of the Large Urban Fire Environment. Part II. Parametric Analysis and Model City Simulations.

    DTIC Science & Technology

    1982-11-01

    algorithm for turning-region boundary value problem... d. Program control parameters: ALPHA, (Qq)max, maximum value of Qq in present coding. BETA, BLOSS... Parameters available for either system description or program control. (These parameters are currently unused, so they are set equal to zero.) IGUESS... Parameter that controls the initial choices of first-shoot values along y = 0. IGUESS = 1: Discretized versions of P(r, 0), T(r, 0), and u(r, 0) must...

  12. Permutation on hybrid natural inflation

    NASA Astrophysics Data System (ADS)

    Carone, Christopher D.; Erlich, Joshua; Ramos, Raymundo; Sher, Marc

    2014-09-01

    We analyze a model of hybrid natural inflation based on the smallest non-Abelian discrete group S3. Leading invariant terms in the scalar potential have an accidental global symmetry that is spontaneously broken, providing a pseudo-Goldstone boson that is identified as the inflaton. The S3 symmetry restricts both the form of the inflaton potential and the couplings of the inflaton field to the waterfall fields responsible for the end of inflation. We identify viable points in the model parameter space. Although the power in tensor modes is small in most of the parameter space of the model, we identify parameter choices that yield potentially observable values of r without super-Planckian initial values of the inflaton field.

  13. Optimization of Biomathematical Model Predictions for Cognitive Performance Impairment in Individuals: Accounting for Unknown Traits and Uncertain States in Homeostatic and Circadian Processes

    PubMed Central

    Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.

    2007-01-01

    Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
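    The heart of the Bayesian forecasting procedure, updating subject-specific parameters as each new measurement arrives, can be sketched on a grid for a single trait parameter with a population prior. The linear build-up model, prior, and noise level below are assumptions for illustration, not the two-process model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# grid-based Bayesian update of one trait parameter (a build-up rate r) with
# a population prior; a stand-in for the paper's multi-parameter scheme
grid = np.linspace(0.5, 3.0, 501)
prior = np.exp(-0.5 * ((grid - 1.5) / 0.5) ** 2)   # population prior ~ N(1.5, 0.5)
prior /= prior.sum()

true_r, sigma = 2.2, 1.0                           # this subject's trait, obs noise
post = prior.copy()
for t in range(1, 13):                             # hourly measurements while awake
    y = true_r * t + rng.normal(0, sigma)          # observed impairment score
    like = np.exp(-0.5 * ((y - grid * t) / sigma) ** 2)
    post = post * like                             # Bayes update with each datum
    post /= post.sum()

est = (grid * post).sum()                          # posterior mean
sd = np.sqrt(((grid - est) ** 2 * post).sum())     # posterior spread
print(est, sd)  # converges toward the subject's rate; uncertainty shrinks
```

As in the paper, the posterior starts at the population prior and narrows around the individual's trait as data accumulate, which is what shrinks the prediction confidence intervals.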

  14. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetry domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model with a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  15. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chun; Schenke, Bjorn

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial state model, including net-baryon rapidity distributions, 2-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1 dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial state model at proper times greater than the initial time for the hydrodynamic simulation.

  16. Dynamical initial-state model for relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Shen, Chun; Schenke, Björn

    2018-02-01

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy-ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial-state model, including net-baryon rapidity distributions, two-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. We also present the implementation of the model with 3+1-dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial-state model at proper times greater than the initial time for the hydrodynamic simulation.

  17. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE PAGES

    Shen, Chun; Schenke, Bjorn

    2018-02-15

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial state model, including net-baryon rapidity distributions, 2-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1 dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial state model at proper times greater than the initial time for the hydrodynamic simulation.

  18. Event-based stormwater management pond runoff temperature model

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Sattar, A. M. A.; Thompson, A. M.

    2016-09-01

    Stormwater management wet ponds are generally very shallow and hence can significantly increase (by about 5.4 °C on average in this study) runoff temperatures in summer months, which adversely affects receiving urban stream ecosystems. This study uses gene expression programming (GEP) and artificial neural network (ANN) modeling techniques to advance our knowledge of the key factors governing the thermal enrichment effects of stormwater ponds. The models developed in this study build upon and complement the ANN model developed by Sabouri et al. (2013) that predicts the catchment event mean runoff temperature entering the pond as a function of event climatic and catchment characteristic parameters. The key factors that control pond outlet runoff temperature include: (1) Upland Catchment Parameters (catchment drainage area and event mean runoff temperature inflow to the pond); (2) Climatic Parameters (rainfall depth, event mean air temperature, and pond initial water temperature); and (3) Pond Design Parameters (pond length-to-width ratio, pond surface area, pond average depth, and pond outlet depth). We used monitoring data from three summers (2009 to 2011) in four stormwater management ponds, located in the cities of Guelph and Kitchener, Ontario, Canada, to develop the models. The prediction uncertainties of the developed ANN and GEP models for the case study sites are around 0.4% and 1.7% of the median value. Sensitivity analysis of the trained models indicates that the thermal enrichment of the pond outlet runoff is inversely proportional to pond length-to-width ratio and pond outlet depth, and directly proportional to event runoff volume, event mean pond inflow runoff temperature, and pond initial water temperature.

  19. Transient Calibration of a Variably-Saturated Groundwater Flow Model By Iterative Ensemble Smoothering: Synthetic Case and Application to the Flow Induced During Shaft Excavation and Operation of the Bure Underground Research Laboratory

    NASA Astrophysics Data System (ADS)

    Lam, D. T.; Kerrou, J.; Benabderrahmane, H.; Perrochet, P.

    2017-12-01

    The calibration of groundwater flow models in transient state can be motivated by the expected improved characterization of the aquifer hydraulic properties, especially when supported by a rich transient dataset. In the prospect of setting up a calibration strategy for a variably-saturated transient groundwater flow model of the area around ANDRA's Bure Underground Research Laboratory, we wish to take advantage of the long hydraulic head and flowrate time series collected near and at the access shafts in order to help inform the model hydraulic parameters. A promising inverse approach for such a high-dimensional nonlinear model, whose applicability has been illustrated more extensively in other scientific fields, is an iterative ensemble smoother algorithm initially developed for a reservoir engineering problem. Furthermore, the ensemble-based stochastic framework allows us to address to some extent the uncertainty of the calibration for a subsequent analysis of a flow-process-dependent prediction. By assimilating the available data in one single step, this method iteratively updates each member of an initial ensemble of stochastic realizations of parameters until an objective function is minimized. However, as is well known for ensemble-based Kalman methods, this correction, computed from approximations of covariance matrices, is most efficient when the ensemble realizations are multi-Gaussian. As shown by the comparison of the updated ensemble means obtained for our simplified synthetic model of 2D vertical flow using either multi-Gaussian or multipoint simulations of parameters, the ensemble smoother fails to preserve the initial connectivity of the facies and the parameter bimodal distribution.
    Given the geological structures depicted by the multi-layered geological model built for the real case, our goal is to determine how best to leverage the performance of the ensemble smoother while using an initial ensemble of conditional multi-Gaussian or multipoint simulations that is as conceptually consistent as possible. The performance of the algorithm with additional steps that help mitigate the effects of non-Gaussian patterns, such as Gaussian anamorphosis or resampling of facies from the training image using updated local probability constraints, will be assessed.
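    The ensemble smoother update referred to above is a Kalman-type correction built from ensemble covariances. A minimal multi-Gaussian toy with a linear forward model (ensemble size, operator, and noise level below are assumptions; ES-MDA would repeat this update with an inflated error covariance):

```python
import numpy as np

rng = np.random.default_rng(3)

# one ensemble-smoother update:
#   m_j <- m_j + C_md (C_dd + R)^-1 (d_obs + e_j - g(m_j))
n_ens, n_par, n_obs = 200, 3, 8
G = rng.normal(size=(n_obs, n_par))            # linear forward operator g(m) = G m
m_true = np.array([1.0, -2.0, 0.5])
R = 0.05 * np.eye(n_obs)                       # observation-error covariance
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_obs), R)

M = rng.normal(0, 2, size=(n_par, n_ens))      # prior parameter ensemble
D = G @ M                                      # predicted data per member
Dm = D - D.mean(1, keepdims=True)
Mm = M - M.mean(1, keepdims=True)
C_md = Mm @ Dm.T / (n_ens - 1)                 # cross-covariance parameters/data
C_dd = Dm @ Dm.T / (n_ens - 1)                 # data covariance
K = C_md @ np.linalg.inv(C_dd + R)
E = rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T  # perturbed obs
M_post = M + K @ (d_obs[:, None] + E - D)

print(M_post.mean(1))  # ≈ m_true for this linear multi-Gaussian case
```

In this linear multi-Gaussian setting the update is statistically exact up to sampling error; the abstract's point is precisely that it degrades for the bimodal, channelized fields produced by multipoint simulation.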

  20. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
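    The bootstrap step, refitting the model to resampled data so that parameters come out as distributions rather than point estimates, can be illustrated generically; the straight-line fit below stands in for the cone-model inversion, and all data values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# pair-resampling bootstrap for fitted model parameters
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # synthetic "observations"

boot = []
for _ in range(2000):
    idx = rng.integers(0, x.size, x.size)        # resample with replacement
    boot.append(np.polyfit(x[idx], y[idx], 1))   # refit slope and intercept
boot = np.array(boot)

print(boot.mean(0), boot.std(0))  # parameter distributions, not point estimates
```

The spread of `boot` is what would feed an ensemble of heliospheric runs, one per bootstrap draw of the cone parameters.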

  1. An advection-diffusion-reaction size-structured fish population dynamics model combined with a statistical parameter estimation procedure: application to the Indian ocean skipjack tuna fishery.

    PubMed

    Faugeras, Blaise; Maury, Olivier

    2005-10-01

    We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographical and biological data; thus it naturally reacts to environment changes. We first formulate an initial-boundary value problem and prove the existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code, which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
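    A twin experiment of the kind mentioned at the end can be run in miniature: generate data from known parameters, then check that a fit started from perturbed values recovers the truth. Logistic growth stands in for the advection-diffusion population model, and a brute-force misfit search replaces the adjoint gradient; everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(8)

# toy forward model: logistic growth integrated by forward Euler
def simulate(r, K, n0=1.0, dt=0.1, steps=200):
    n = np.empty(steps)
    n[0] = n0
    for i in range(1, steps):
        n[i] = n[i - 1] + dt * r * n[i - 1] * (1 - n[i - 1] / K)
    return n

truth = (0.8, 50.0)
data = simulate(*truth) + rng.normal(0, 0.5, 200)   # synthetic noisy observations

# recover the parameters from the simulated data by grid search over misfit
rs = np.linspace(0.4, 1.2, 81)
Ks = np.linspace(30, 70, 81)
sse = np.array([[np.sum((simulate(r, K) - data) ** 2) for K in Ks] for r in rs])
ir, iK = np.unravel_index(sse.argmin(), sse.shape)
print(rs[ir], Ks[iK])  # ≈ (0.8, 50.0): the twin experiment succeeds
```

In the paper the same logic is applied with the adjoint-computed gradient of the Bayesian cost function instead of an exhaustive search.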

  2. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
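    The special property of Shinbrot-type modulating functions is that they vanish at both ends of the data record, so integration by parts leaves no boundary term and unknown initial conditions drop out of the estimation equations. A first-order illustration with an assumed model x' + a x = b u (not the aircraft dynamics of the paper):

```python
import numpy as np

# estimate a and b in  x' + a x = b u  from input/output records alone;
# modulating functions phi_n(0) = phi_n(T) = 0 remove the unknown x(0)
T, N = 5.0, 2001
t = np.linspace(0, T, N)
dt = t[1] - t[0]
a_true, b_true, x0 = 1.5, 3.0, 0.7
u = np.ones_like(t)                                  # step input
x = b_true / a_true + (x0 - b_true / a_true) * np.exp(-a_true * t)

def integ(f):                                        # trapezoidal rule
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

rows, rhs = [], []
for n in range(1, 5):
    phi = np.sin(n * np.pi * t / T) ** 2             # vanishes at t = 0 and t = T
    dphi = (n * np.pi / T) * np.sin(2 * n * np.pi * t / T)
    # int(phi x') = -int(dphi x), so:  a*int(phi x) - b*int(phi u) = int(dphi x)
    rows.append([integ(phi * x), -integ(phi * u)])
    rhs.append(integ(dphi * x))

a_est, b_est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(a_est, b_est)  # ≈ 1.5, 3.0, with x(0) never used
```

Stacking several modulating functions gives an overdetermined linear system, which is the deterministic least-squares form of the method; the paper's adaptive weighted version additionally accounts for measurement noise statistics.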

  3. Pinatubo Emulation in Multiple Models (POEMs): co-ordinated experiments in the ISA-MIP model intercomparison activity component of the SPARC Stratospheric Sulphur and its Role in Climate initiative (SSiRC)

    NASA Astrophysics Data System (ADS)

    Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina

    2016-04-01

    The World Climate Research Programme's SPARC initiative has a new international activity, "Stratospheric Sulphur and its Role in Climate" (SSiRC), to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC involves an intercomparison, ISA-MIP, of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once the emulator is trained on the ensemble, a Monte Carlo simulation with the fast Gaussian emulator enables a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to uncertainties in natural and anthropogenic emission types, and in structural parameters of the models, to be compared. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the PoEMS analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
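    Gaussian emulation replaces the expensive model with a fast statistical surrogate trained on the PPE members, through which Monte Carlo sampling becomes cheap. A minimal numpy Gaussian-process emulator with a fixed RBF kernel and an assumed one-parameter toy model (real emulators tune kernel hyperparameters and emulate multi-dimensional parameter spaces):

```python
import numpy as np

rng = np.random.default_rng(7)

def k(a, b, ell=0.3, s=1.0):
    """Squared-exponential (RBF) covariance with fixed hyperparameters."""
    return s * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def model(x):                      # stand-in for one "expensive" PPE run
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 1, 8)           # 8 training runs across the parameter range
y = model(X)

Kinv = np.linalg.inv(k(X, X) + 1e-8 * np.eye(X.size))   # jitter for stability
def emulate(xs):
    return k(xs, X) @ Kinv @ y     # GP posterior mean (zero prior mean)

xs = rng.uniform(0, 1, 50000)      # cheap Monte Carlo through the emulator
print(emulate(xs).mean(), model(xs).mean())  # the two means agree closely
```

Only 8 "expensive" runs were needed; the 50,000 Monte Carlo evaluations go through the emulator, which is the trick that makes variance-based sensitivity analysis affordable for a PPE.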

  4. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jufeng; Xia, Bing; Shang, Yunlong

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.

  5. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE PAGES

    Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...

    2016-12-22

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
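    Extracting RC-network parameters from rest-period data amounts to fitting a voltage relaxation of the form v(t) = v∞ − a·exp(−t/τ). A sketch on synthetic data, assuming a single RC branch (the paper's equivalent-circuit model and pulse-rest protocol are richer): grid-search the time constant and solve the remaining linear coefficients by least squares.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic rest-period voltage transient: v(t) = v_inf - a * exp(-t / tau)
t = np.linspace(0, 600, 601)                       # 10 min rest, 1 s sampling
v = 3.70 - 0.05 * np.exp(-t / 90.0) + rng.normal(0, 2e-4, t.size)

best = None
for tau in np.arange(10.0, 300.0, 1.0):            # grid over the time constant
    A = np.column_stack([np.ones_like(t), -np.exp(-t / tau)])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)   # v_inf and a are linear
    sse = np.sum((A @ coef - v) ** 2)
    if best is None or sse < best[0]:
        best = (sse, tau, coef)

_, tau_est, (v_inf, a) = best
print(v_inf, a, tau_est)  # ≈ 3.70 V, 0.05 V, 90 s
```

The "unsaturated" issue the abstract raises corresponds to rest windows much shorter than τ, where this fit becomes ill-conditioned unless the initial-voltage expression of the RC branch is handled carefully.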

  6. Initialization of hydrodynamics in relativistic heavy ion collisions with an energy-momentum transport model

    NASA Astrophysics Data System (ADS)

    Naboka, V. Yu.; Akkelin, S. V.; Karpenko, Iu. A.; Sinyukov, Yu. M.

    2015-01-01

    A key ingredient of hydrodynamical modeling of relativistic heavy ion collisions is thermal initial conditions, an input that is the consequence of a prethermal dynamics that is not yet completely understood. In this paper we employ a recently developed energy-momentum transport model of the prethermal stage to study the influence of alternative initial states in nucleus-nucleus collisions on the flow and energy density distributions of the matter at the starting time of hydrodynamics. In particular, the dependence of the results on isotropic and anisotropic initial states is analyzed. It is found that at the thermalization time the transverse flow is larger and the maximal energy density is higher for the longitudinally squeezed initial momentum distributions. The results are also sensitive to the relaxation time parameter, the equation of state at the thermalization time, and the transverse profile of the initial energy density distribution: Gaussian approximation, Glauber Monte Carlo profiles, etc. Also, test results ensure that the numerical code based on the energy-momentum transport model is capable of providing both averaged and fluctuating initial conditions for the hydrodynamic simulations of relativistic nuclear collisions.

  7. Numerical and analytical simulation of the production process of ZrO2 hollow particles

    NASA Astrophysics Data System (ADS)

    Safaei, Hadi; Emami, Mohsen Davazdah

    2017-12-01

    In this paper, the production process of hollow particles from agglomerated particles is addressed analytically and numerically. The important parameters affecting this process, in particular the initial porosity level of the particles and the plasma gun type, are investigated. The analytical model adopts a combination of quasi-steady thermal equilibrium and mechanical balance. In the analytical model, the possibility of a solid core existing in agglomerated particles is examined. In this model, a range of particle diameters (50 μm ≤ D_{p0} ≤ 160 μm) and various initial porosities (0.2 ≤ p ≤ 0.7) are considered. The numerical model employs the VOF technique for two-phase compressible flows. The production process of hollow particles from agglomerated particles is simulated, considering an initial diameter of D_{p0} = 60 μm and initial porosities of p = 0.3, p = 0.5, and p = 0.7. Simulation results of the analytical model indicate that the solid core diameter is independent of the initial porosity, whereas the thickness of the particle shell strongly depends on the initial porosity. In both models, a hollow particle may hardly develop at small initial porosity values (p < 0.3), while the particle disintegrates at high initial porosity values (p > 0.6).

  8. Issues in the inverse modeling of a soil infiltration process

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal; Jacka, Lukas; Leps, Matej

    2017-04-01

    This contribution addresses issues in the evaluation of soil hydraulic parameters (SHPs) from a Richards equation based inverse model. The inverse model represented a single-ring infiltration experiment on a mountainous podzolic soil profile and searched for the SHPs of the top soil layer. Since the thickness of the top soil layer is often much lower than the depth required to embed the single-ring or Guelph permeameter device, the SHPs of the top soil layer are very difficult to measure directly. The SHPs of the top soil layer were therefore identified here by inverse modeling of the single-ring infiltration process, where, especially, the initial unsteady part of the experiment is expected to provide very useful data for evaluating the retention curve parameters (excluding the residual water content) and the saturated hydraulic conductivity. The main issue addressed in this contribution is the uniqueness of the Richards equation inverse model. We tried to answer whether it is possible to characterize the unsteady infiltration experiment with a unique set of SHP values, and whether all SHPs are vulnerable to non-uniqueness. This is an important issue, since it determines whether the popular gradient methods are appropriate here. Furthermore, issues in assigning the initial and boundary condition setup, the influence of spatial and temporal discretization on the values of the identified SHPs, and convergence issues with the Richards equation nonlinear operator during the automatic calibration procedure are also covered here.
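
    The abstract concerns full Richards-equation inversion; as a far simpler, hypothetical illustration of how the unsteady part of an infiltration test constrains soil parameters, the two-term Philip equation I(t) = S√t + Kt (not the authors' model) is linear in the sorptivity S and the conductivity-like term K, so both are fixed directly by early-time data:

```python
import numpy as np

# Philip's two-term model: cumulative infiltration I(t) = S*sqrt(t) + K*t.
# Linear in S and K, so the unsteady early-time data fix both by least squares.
t = np.linspace(0.1, 2.0, 40)        # elapsed time (illustrative units: hours)
S_true, K_true = 0.8, 0.1            # sorptivity and conductivity term (assumed)
I = S_true * np.sqrt(t) + K_true * t

A = np.column_stack([np.sqrt(t), t])
(S_fit, K_fit), *_ = np.linalg.lstsq(A, I, rcond=None)
```

    Because this surrogate is linear in its parameters, the fit is unique; the non-uniqueness discussed in the abstract arises in the nonlinear Richards-equation setting.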

  9. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate the reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity to various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option and wake demise definition), and post-processing techniques (rounding of provided spacing values and controller time variance).

  10. Recipe for potassium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izutani, Natsuko

    2012-11-12

    I investigate favorable conditions for producing potassium (K). Observations show [K/Fe] > 0 at low metallicities, while zero-metal supernova models show low [K/Fe] (< 0). Theoretically, it is natural that the abundance of the odd-Z element potassium decreases with lower metallicity, and thus the observations should imply new and unknown production sites for potassium. In these proceedings, I calculate proton-rich nucleosynthesis with three parameters: the initial Y_e (from 0.51 to 0.60), the initial density ρ_max (10^7, 10^8, and 10^9 g/cm^3), and the e-folding time τ for the density (0.01, 0.1, and 1.0 s). Among the 90 models I have calculated, only 26 models show [K/Fe] > 0, and they all have ρ_max = 10^9 g/cm^3. I discuss the parameter dependence of [K/Fe].

  11. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
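
    As a minimal sketch of the MCMC side of this comparison (a generic Metropolis-Hastings sampler for a single parameter, not the Cole-Cole inversion of the paper), the start-value independence described above can be demonstrated by running two chains from very different initial values:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=200)          # synthetic observations

def log_post(mu):
    # Flat prior; Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2)

def metropolis(mu0, n_steps=5000, step=0.2):
    """Random-walk Metropolis sampler for the single parameter mu."""
    chain = np.empty(n_steps)
    mu, lp = mu0, log_post(mu0)
    for k in range(n_steps):
        prop = mu + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            mu, lp = prop, lp_prop
        chain[k] = mu
    return chain

# Two chains from opposite extremes converge to the same posterior
chain_a = metropolis(-10.0)[1000:]              # discard burn-in
chain_b = metropolis(+10.0)[1000:]
```

    The spread of the retained samples is the uncertainty estimate that, per the abstract, the deterministic method struggles to provide.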

  12. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: in the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and also has potential for field application.
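
    The penalty idea underlying this approach can be sketched with a hypothetical one-dimensional problem (not the paper's modified scheme, which adds a reduction factor and a GA): a constraint is folded into the objective as a quadratic penalty, and the penalized minimizer approaches the constrained optimum as the penalty parameter grows.

```python
import numpy as np

def objective(x):
    return (x - 2.0) ** 2                     # unconstrained optimum at x = 2

def penalized(x, lam):
    # Quadratic exterior penalty for the constraint x >= 3
    violation = np.maximum(0.0, 3.0 - x)
    return objective(x) + lam * violation ** 2

# Deterministic scan of the penalized objective for increasing penalties
xs = np.linspace(0.0, 6.0, 6001)
x_best = {lam: xs[np.argmin(penalized(xs, lam))] for lam in (1.0, 10.0, 1000.0)}
```

    For this problem the penalized minimizer is (2 + 3λ)/(1 + λ), which tends to the constrained optimum x = 3 as λ grows; a reduction factor, as in the paper, instead relaxes the penalty over generations.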

  13. A possible formation channel for blue hook stars in globular cluster - II. Effects of metallicity, mass ratio, tidal enhancement efficiency and helium abundance

    NASA Astrophysics Data System (ADS)

    Lei, Zhenxin; Zhao, Gang; Zeng, Aihua; Shen, Lihua; Lan, Zhongjian; Jiang, Dengkai; Han, Zhanwen

    2016-12-01

    Employing a tidally enhanced stellar wind, we studied the effects of metallicity, the mass ratio of primary to secondary, tidal enhancement efficiency and helium abundance on the formation of blue hook (BHk) stars in binaries in globular clusters (GCs). A total of 28 sets of binary models combined with different input parameters were studied. For each set of binary models, we present the range of initial orbital periods needed to produce BHk stars in binaries. All the binary models could produce BHk stars within different ranges of initial orbital periods. We also compared our results with observations in the Teff-log g diagram of the GCs NGC 2808 and ω Cen. Most of the BHk stars in these two GCs lie well within the region predicted by our theoretical models, especially when C/N-enhanced model atmospheres are considered. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in binaries, while metallicity and helium abundance play important roles, especially helium abundance. Specifically, as the helium abundance increases in the binary models, the range of initial orbital periods needed to produce BHk stars becomes obviously wider, regardless of the other input parameters adopted. Our results are discussed in the context of recent observations and other theoretical models.

  14. Algorithm for retrieving vegetative canopy and leaf parameters from multi- and hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Borel, Christoph

    2009-05-01

    In recent years hyper-spectral data has been used to retrieve information about vegetative canopies such as leaf area index and canopy water content. For the environmental scientist these two parameters are valuable, but there is potentially more information to be gained as high spatial resolution data becomes available. We developed an Amoeba (Nelder-Mead, or Simplex) based program that inverts a vegetative canopy radiosity model, coupled with a leaf (PROSPECT5) reflectance model and a model for the background reflectance (e.g. soil, water, leaf litter), to a measured reflectance spectrum. The PROSPECT5 leaf model has five parameters: leaf structure parameter Nstru, chlorophyll a+b concentration Cab, carotenoid content Car, equivalent water thickness Cw and dry matter content Cm. The canopy model has two parameters: total leaf area index (LAI) and number of layers. The background reflectance model is either a single reflectance spectrum, taken from a spectral library or derived from a bare-area pixel on an image, or a linear mixture of soil spectra. We summarize the radiosity model of a layered canopy and give references to the leaf/needle models. The method is then tested on simulated and measured data. We investigate the uniqueness, limitations and accuracy of the retrieved parameters with respect to canopy parameters (low, medium and high leaf area index), spectral resolution (32- to 211-band hyperspectral), sensor noise and initial conditions.
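
    A minimal sketch of this kind of Amoeba-based spectral inversion might look as follows. The two-parameter "canopy" model here is entirely made up (a gap-fraction mix of a fixed soil spectrum and a pigment-darkened leaf spectrum); it stands in for the PROSPECT5-plus-radiosity forward model only to show the fitting loop.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: reflectance mixes a fixed soil spectrum with a
# leaf spectrum whose absorption-band depth is set by a pigment-like
# parameter, weighted by the canopy gap fraction exp(-0.5 * LAI).
wl = np.linspace(400.0, 2400.0, 50)
absorb = np.exp(-(wl - 670.0) ** 2 / (2 * 50.0 ** 2))   # pseudo-absorption band
soil = np.full_like(wl, 0.3)

def canopy_reflectance(lai, pigment):
    leaf = 0.5 * np.exp(-pigment * absorb)
    gap = np.exp(-0.5 * lai)
    return gap * soil + (1.0 - gap) * leaf

observed = canopy_reflectance(3.0, 0.4)                 # "measured" spectrum

def misfit(params):
    lai, pigment = params
    return np.sum((canopy_reflectance(lai, pigment) - observed) ** 2)

# Amoeba (Nelder-Mead) inversion from a deliberately poor initial guess
result = minimize(misfit, x0=[1.0, 0.1], method="Nelder-Mead")
lai_fit, pigment_fit = result.x
```

    Repeating the fit from several initial simplexes, and with noise added to `observed`, probes the uniqueness and noise sensitivity the abstract investigates.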

  15. The initial cooling of pahoehoe flow lobes

    USGS Publications Warehouse

    Keszthelyi, L.; Denlinger, R.

    1996-01-01

    In this paper we describe a new thermal model for the initial cooling of pahoehoe lava flows. The accurate modeling of this initial cooling is important for understanding the formation of the distinctive surface textures on pahoehoe lava flows as well as being the first step in modeling such key pahoehoe emplacement processes as lava flow inflation and lava tube formation. This model is constructed from the physical phenomena observed to control the initial cooling of pahoehoe flows and is not an empirical fit to field data. We find that the only significant processes are (a) heat loss by thermal radiation, (b) heat loss by atmospheric convection, (c) heat transport within the flow by conduction with temperature and porosity-dependent thermal properties, and (d) the release of latent heat during crystallization. The numerical model is better able to reproduce field measurements made in Hawai'i between 1989 and 1993 than other published thermal models. By adjusting one parameter at a time, the effect of each of the input parameters on the cooling rate was determined. We show that: (a) the surfaces of porous flows cool more quickly than the surfaces of dense flows, (b) the surface cooling is very sensitive to the efficiency of atmospheric convective cooling, and (c) changes in the glass forming tendency of the lava may have observable petrographic and thermal signatures. These model results provide a quantitative explanation for the recently observed relationship between the surface cooling rate of pahoehoe lobes and the porosity of those lobes (Jones 1992, 1993). The predicted sensitivity of cooling to atmospheric convection suggests a simple field experiment for verification, and the model provides a tool to begin studies of the dynamic crystallization of real lavas. Future versions of the model can also be made applicable to extraterrestrial, submarine, silicic, and pyroclastic flows.
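
    Of the four processes listed, the two surface heat-loss terms can be sketched with a lumped (zero-dimensional) surface layer; this is an illustration, not the authors' model, since conduction with temperature- and porosity-dependent properties and the latent-heat release are omitted, and the emissivity, convective coefficient and layer thickness below are assumed values.

```python
import numpy as np

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.95             # surface emissivity (assumed)
H_CONV = 50.0          # atmospheric convective coefficient, W m^-2 K^-1 (assumed)
RHO_C = 2.0e6          # volumetric heat capacity of the skin layer, J m^-3 K^-1 (assumed)
SKIN = 0.02            # thickness of the lumped surface layer, m (assumed)
T_AIR = 300.0          # ambient temperature, K

def cool_surface(T0=1400.0, dt=0.5, t_end=600.0):
    """Euler integration of a lumped surface layer losing heat by (a) thermal
    radiation and (b) atmospheric convection; conduction from the interior
    and latent heat are neglected in this sketch."""
    n = int(t_end / dt)
    T = np.empty(n + 1)
    T[0] = T0
    for k in range(n):
        q = EPS * SIGMA * (T[k] ** 4 - T_AIR ** 4) + H_CONV * (T[k] - T_AIR)
        T[k + 1] = T[k] - dt * q / (RHO_C * SKIN)
    return T

T = cool_surface()
```

    Raising `H_CONV` in this sketch steepens the early cooling curve, which is consistent in spirit with the sensitivity to atmospheric convection reported above.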

  16. Macromolecular refinement by model morphing using non-atomic parameterizations.

    PubMed

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  17. Application of the Response Surface Methodology to Optimize the Fermentation Parameters for Enhanced Docosahexaenoic Acid (DHA) Production by Thraustochytrium sp. ATCC 26185.

    PubMed

    Wu, Kang; Ding, Lijian; Zhu, Peng; Li, Shuang; He, Shan

    2018-04-22

    The aim of this study was to determine the cumulative effect of fermentation parameters and enhance the production of docosahexaenoic acid (DHA) by Thraustochytrium sp. ATCC 26185 using response surface methodology (RSM). Among the eight variables screened for their effects on DHA production by a Plackett-Burman design (PBD), the initial pH, inoculum volume, and fermentation volume were found to be most significant. The Box-Behnken design was applied to derive a statistical model for optimizing these three fermentation parameters for DHA production. The optimal parameters for maximum DHA production were an initial pH of 6.89, an inoculum volume of 4.16%, and a fermentation volume of 140.47 mL. The maximum yield of DHA production was 1.68 g/L, which was in agreement with the predicted value. An increase in DHA production was achieved by optimizing the initial pH, fermentation volume, and inoculum volume. This optimization strategy led to a significant increase in the amount of DHA produced, from 1.16 g/L to 1.68 g/L. Thraustochytrium sp. ATCC 26185 is a promising resource for microbial DHA production due to the high-level yield of DHA that it produces, and the capacity for large-scale fermentation of this organism.
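
    The RSM workflow of fitting a full quadratic model and locating its stationary point can be sketched with hypothetical noise-free two-factor data (made-up numbers, not the study's measurements):

```python
import numpy as np

# Hypothetical two-factor response surface: yield peaks at pH 6.9 and
# inoculum volume 4.2 % (illustrative values only).
rng = np.random.default_rng(1)
ph = rng.uniform(5.5, 8.0, 60)
inoc = rng.uniform(2.0, 6.0, 60)
y = 1.7 - 0.3 * (ph - 6.9) ** 2 - 0.05 * (inoc - 4.2) ** 2

# Fit the full quadratic RSM model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones_like(ph), ph, inoc, ph**2, inoc**2, ph * inoc])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted quadratic: solve grad(y) = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
ph_opt, inoc_opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
```

    In a real Box-Behnken analysis the design points are structured rather than random and the fitted coefficients carry noise, but the optimum is located the same way.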

  18. Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model

    ERIC Educational Resources Information Center

    Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim

    2017-01-01

    We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…

  19. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of the output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We propose evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
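
    The VCE concept can be illustrated with a crude binning estimator of the first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y) on a toy function; the paper's analytic MARS-based evaluation replaces this brute-force conditional averaging:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x1) + 0.1 * x2          # x1 dominates the output variance

def first_order_index(x, y, bins=50):
    """Crude VCE estimate: variance over bins of the conditional mean
    E[Y | X], normalized by the total variance of Y."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_mean.var() / y.var()

s1 = first_order_index(x1, y)
s2 = first_order_index(x2, y)
```

    Here s1 is close to 1 and s2 is close to 0, matching the construction of the toy function; for an ODE model, Y would be a model output at a given time and the X_i its parameters and initial conditions.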

  20. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integrating ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. In an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivity was initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that the use of multiple satellite observations with this framework is an effective way to improve terrestrial biosphere models.

  1. Retrieval of Dry Snow Parameters from Radiometric Data Using a Dense Medium Model and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.

    2005-01-01

    In this paper, GA-based techniques are used to invert the equations of an electromagnetic model based on Dense Medium Radiative Transfer Theory (DMRT) under the Quasi-Crystalline Approximation with Coherent Potential, to retrieve snow depth, mean grain size and fractional volume from microwave brightness temperatures. The technique is initially tested on both noisy and noise-free simulated data. During this phase, different configurations of the genetic algorithm parameters are considered to quantify how their change affects the algorithm's performance. A configuration of GA parameters is then selected and the algorithm is applied to experimental data acquired during the NASA Cold Land Processes Experiment. Snow parameters retrieved with the GA-DMRT technique are then compared with snow parameters measured in the field.
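
    A compact real-coded GA of the kind described, minimizing the mismatch to observed brightness temperatures, can be sketched with a made-up linear two-channel forward model standing in for DMRT (the parameter names, bounds and GA settings below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(depth, grain):
    """Hypothetical two-channel 'brightness temperature' model standing in
    for the DMRT forward model (linear, for illustration only)."""
    return np.array([280.0 - 30.0 * depth - 50.0 * grain,
                     270.0 - 10.0 * depth - 80.0 * grain])

tb_obs = forward(1.2, 0.3)                     # synthetic observation
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 1.0])

def cost(p):
    return np.sum((forward(*p) - tb_obs) ** 2)

# Minimal real-coded GA: elitism, tournament selection, blend crossover,
# Gaussian mutation
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(150):
    fit_vals = np.array([cost(p) for p in pop])
    new = [pop[np.argmin(fit_vals)]]            # keep the elite member
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit_vals[i] < fit_vals[j] else pop[j]   # tournament
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit_vals[i] < fit_vals[j] else pop[j]
        w = rng.uniform(-0.25, 1.25, 2)         # blend (BLX) crossover
        child = np.clip(w * a + (1 - w) * b, lo, hi)
        if rng.uniform() < 0.2:                 # Gaussian mutation
            child = np.clip(child + rng.normal(0, 0.05, 2), lo, hi)
        new.append(child)
    pop = np.array(new)

best = min(pop, key=cost)
```

    Varying the population size, mutation rate and crossover range is the kind of GA-configuration study the abstract describes.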

  2. On the appropriate definition of soil profile configuration and initial conditions for land surface-hydrology models in cold regions

    NASA Astrophysics Data System (ADS)

    Sapriza-Azuri, Gonzalo; Gamazo, Pablo; Razavi, Saman; Wheater, Howard S.

    2018-06-01

    Arctic and subarctic regions are amongst the most susceptible regions on Earth to global warming and climate change. Understanding and predicting the impact of climate change in these regions require a proper process representation of the interactions between climate, the carbon cycle, and hydrology in Earth system models. This study focuses on land surface models (LSMs) that represent the lower boundary condition of general circulation models (GCMs) and regional climate models (RCMs), which simulate climate change evolution at the global and regional scales, respectively. LSMs typically utilize a standard soil configuration with a depth of no more than 4 m, whereas for cold, permafrost regions, field experiments show that attention to deep soil profiles is needed to understand and close the water and energy balances, which are tightly coupled through the phase change. To address this gap, we design and run a series of model experiments with a one-dimensional LSM, called CLASS (Canadian Land Surface Scheme), as embedded in the MESH (Modélisation Environmentale Communautaire - Surface and Hydrology) modelling system, to (1) characterize the effect of soil profile depth under different climate conditions and in the presence of parameter uncertainty; (2) assess the effect of including or excluding the geothermal flux in the LSM at the bottom of the soil column; and (3) develop a methodology for temperature profile initialization in permafrost regions, where the system has an extended memory, through the use of paleo-records and bootstrapping. Our study area is in Norman Wells, Northwest Territories, Canada, where measurements of soil temperature profiles and historical reconstructed climate data are available. Our results demonstrate a dominant role for parameter uncertainty, which is often neglected in LSMs. Given such high sensitivity to parameter values and dependence on the climate conditions, we show that a minimum depth of 20 m is essential to adequately represent the temperature dynamics. We further show that our proposed initialization procedure is effective and robust to uncertainty in paleo-climate reconstructions, and that more than 300 years of reconstructed climate time series are needed for proper model initialization.

  3. Relaxation to a Phase-Locked Equilibrium State in a One-Dimensional Bosonic Josephson Junction

    NASA Astrophysics Data System (ADS)

    Pigneur, Marine; Berrada, Tarik; Bonneau, Marie; Schumm, Thorsten; Demler, Eugene; Schmiedmayer, Jörg

    2018-04-01

    We present an experimental study on the nonequilibrium tunnel dynamics of two coupled one-dimensional Bose-Einstein quasicondensates deep in the Josephson regime. Josephson oscillations are initiated by splitting a single one-dimensional condensate and imprinting a relative phase between the superfluids. Regardless of the initial state and experimental parameters, the dynamics of the relative phase and atom number imbalance shows a relaxation to a phase-locked steady state. The latter is characterized by a high phase coherence and reduced fluctuations with respect to the initial state. We propose an empirical model based on the analogy with the anharmonic oscillator to describe the effect of various experimental parameters. A microscopic theory compatible with our observations is still missing.

  4. Statistical optimization of process parameters for the simultaneous adsorption of Cr(VI) and phenol onto Fe-treated tea waste biomass

    NASA Astrophysics Data System (ADS)

    Gupta, Ankur; Balomajumder, Chandrajit

    2017-12-01

    In this study, the simultaneous removal of Cr(VI) and phenol from a binary solution was carried out using Fe-treated tea waste biomass. The effect of process parameters such as adsorbent dose, pH, initial concentration of Cr(VI) (mg/L), and initial concentration of phenol (mg/L) was optimized. The analysis of variance of the quadratic model demonstrates that the experimental results are in good agreement with the predicted values. Based on the experimental design, at an initial concentration of 55 mg/L of Cr(VI), 27.50 mg/L of phenol, pH 2.0, and an adsorbent dose of 15 g/L, 99.99% removal of both Cr(VI) and phenol was achieved.

  5. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    PubMed

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models rely on obtaining a priori information from the ECG data, using preprocessing algorithms to initialize the filter parameters or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  6. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE PAGES

    Lu, Zhiming

    2018-01-30

    Sensitivity analysis is an important component of many modeling activities in hydrology, and numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to the shape or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using a continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general, or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
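
    A hypothetical minimal example of a shape-parameter sensitivity, and of the perturbed-domain finite-difference check it is compared against, is the steady one-dimensional head between two fixed-head boundaries, where the sensitivity of head to the boundary location L is available in closed form (this toy problem is not from the paper):

```python
def head(x, L, h0=10.0, hL=4.0):
    """Steady 1-D head between fixed-head boundaries at x = 0 and x = L:
    h(x) = h0 + (hL - h0) * x / L."""
    return h0 + (hL - h0) * x / L

def dhead_dL(x, L, h0=10.0, hL=4.0):
    """Analytic sensitivity of head to the boundary location L."""
    return -(hL - h0) * x / L**2

# Finite-difference check with a perturbed model domain
x, L, dL = 30.0, 100.0, 1e-4
fd = (head(x, L + dL) - head(x, L - dL)) / (2 * dL)
```

    The CSE approach of the paper derives and solves an equation for this sensitivity directly, avoiding the step-size and truncation issues of the domain-perturbation finite difference.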

  7. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming

    Sensitivity analysis is an important component of many modeling activities in hydrology, and numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to the shape or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using a continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general, or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.

  8. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS and WRF, operated in a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested for the initial conditions. A grid resolution of 27 km is chosen, representative of current mesoscale models. Forecasts with a length of 36 h are performed, with output at 12-h intervals. The key parameters used to evaluate forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
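
    The Threat Score and Bias Score are computed from the hit, miss and false-alarm counts of threshold exceedances; a minimal sketch with made-up 12-h rainfall values (not data from this study):

```python
def skill_scores(forecast, observed, threshold):
    """Threat Score TS = hits / (hits + misses + false alarms) and
    Bias Score BS = (hits + false alarms) / (hits + misses) for
    exceedance of a rainfall threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fe, oe = f >= threshold, o >= threshold
        hits += fe and oe
        misses += (not fe) and oe
        false_alarms += fe and (not oe)
    forecast_events = hits + false_alarms
    observed_events = hits + misses
    ts = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else float("nan")
    bs = forecast_events / observed_events if observed_events else float("nan")
    return ts, bs

# Illustrative 12-h rainfall amounts (mm) at six verification times
fcst = [0.0, 5.0, 12.0, 3.0, 20.0, 8.0]
obs  = [0.0, 6.0,  2.0, 4.0, 18.0, 1.0]
ts, bs = skill_scores(fcst, obs, threshold=5.0)
```

    A perfect forecast gives TS = 1 and BS = 1; BS > 1 indicates over-forecasting of events at that threshold.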

  9. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance that leads to filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and can thus detect possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter errors. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving-factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. 
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
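
    A minimal one-state sketch of the joint state-parameter idea, assuming a toy linear model x(t+1) = theta*x(t) + 1 and Liu-West-style kernel smoothing; the shrinkage factor a = 0.98, ensemble size, and noise levels are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def senkf_step(X, TH, y, obs_var, a=0.98):
    """One SEnKF analysis on a joint (state, parameter) ensemble.

    Kernel smoothing: shrink each parameter toward the ensemble mean by
    factor `a` and re-inflate with jitter, which damps abrupt parameter
    jumps while keeping the spread from collapsing (the filter-divergence
    problem mentioned in the abstract).
    """
    th_bar, th_var = TH.mean(), TH.var()
    TH = a * TH + (1.0 - a) * th_bar + rng.normal(
        0.0, np.sqrt(max((1.0 - a * a) * th_var, 1e-12)), TH.shape)

    Z = np.vstack([X, TH])            # joint vector [x; theta] per member
    Hz = X                            # observation operator: we observe x
    d = y + rng.normal(0.0, np.sqrt(obs_var), X.shape)  # perturbed obs
    az = Z - Z.mean(axis=1, keepdims=True)
    ah = Hz - Hz.mean()
    n = len(X)
    K = (az @ ah / (n - 1)) / (ah @ ah / (n - 1) + obs_var)
    Z = Z + K[:, None] * (d - Hz)
    return Z[0], Z[1]

# Twin experiment: forced toy dynamics with true theta = 0.9.
true_theta, obs_var, N = 0.9, 0.01, 200
x_true = 10.0
X = rng.normal(10.0, 1.0, N)
TH = rng.uniform(0.5, 1.3, N)         # deliberately poor initial parameter spread
for _ in range(80):
    x_true = true_theta * x_true + 1.0
    X = TH * X + 1.0 + rng.normal(0.0, 0.02, N)   # each member uses its own theta
    y = x_true + rng.normal(0.0, np.sqrt(obs_var))
    X, TH = senkf_step(X, TH, y, obs_var)
```

    After the assimilation cycles the parameter ensemble mean approaches the true value regardless of the initial spread, echoing the behavior reported in the abstract.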

  10. A correlation to estimate the velocity of convective currents in boilover.

    PubMed

    Ferrero, Fabio; Kozanoglu, Bulent; Arnaldos, Josep

    2007-05-08

    The mathematical model proposed by Kozanoglu et al. [B. Kozanoglu, F. Ferrero, M. Muñoz, J. Arnaldos, J. Casal, Velocity of the convective currents in boilover, Chem. Eng. Sci. 61 (8) (2006) 2550-2556] for simulating heat transfer in hydrocarbon mixtures in the process that leads to boilover requires the initial value of the convective current's velocity through the fuel layer as an adjustable parameter. Here, a correlation for predicting this parameter based on the properties of the fuel (average ebullition temperature) and the initial thickness of the fuel layer is proposed.

  11. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with dynamic logistic chaotic mapping to adjust the inertia weights, effectively improving the global convergence ability of the algorithm. The accurately predicted firing trajectories of the model rebuilt with the estimated parameters show that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the algorithm proposed in this paper avoids local optima and quickly converges to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
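
    A hedged sketch of the idea: a standard PSO whose inertia weight mixes a concave decay schedule with a logistic chaotic map. The exact weight formula and constants below are illustrative stand-ins for the paper's scheme, and the benchmark is the multimodal Rastrigin function rather than the neuron model:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_chaotic(f, lo, hi, n_particles=40, iters=300):
    """PSO with an inertia weight that mixes a concave decay schedule
    with a logistic chaotic map z <- 4z(1-z), so the weight keeps an
    erratic component that helps escape local optima."""
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    z = 0.37                                    # chaotic state; avoid fixed points
    for k in range(iters):
        z = 4.0 * z * (1.0 - z)                 # logistic map stays in (0, 1)
        concave = 0.9 - 0.5 * (k / iters) ** 2  # concave decay from 0.9 to 0.4
        w = 0.7 * concave + 0.3 * (0.4 + 0.5 * z)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Rastrigin function: many local minima, global minimum 0 at the origin.
rastrigin = lambda p: 10 * len(p) + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p))
best, val = pso_chaotic(rastrigin, np.array([-5.12, -5.12]), np.array([5.12, 5.12]))
```

    Replacing `rastrigin` with the misfit between simulated and target firing trajectories would turn this into a parameter-estimation loop of the kind the abstract describes.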

  12. Soil mechanics: breaking ground.

    PubMed

    Einav, Itai

    2007-12-15

    In soil mechanics, 'student's models' are classified as simple models that teach us unexplained elements of behaviour; an example is the Cam clay constitutive models of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends; this is usually done by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to the consumption of energy, which enables us to predict how the grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model physically clarifies what CSSM cannot for sand: the dependency of yielding and the critical state on the initial gsd and void ratio.

  13. REVIEWS OF TOPICAL PROBLEMS: Cosmology, primordial black holes, and supermassive particles

    NASA Astrophysics Data System (ADS)

    Polnarev, A. G.; Khlopov, M. Yu

    1985-03-01

    Analysis of astrophysical restrictions on the spectrum of primordial black holes (PBH) makes it possible to obtain indirect information about the physical conditions in the very early universe. These restrictions are compared with the probability of PBH production in early dust stages as predicted on the basis of modern models of quantum field theory. As a result of such comparison, restrictions are obtained on the parameters of various models corresponding to different values of the parameters of the spectrum of initial small-scale inhomogeneities.

  14. The Relationship Between Constraint and Ductile Fracture Initiation as Defined by Micromechanical Analyses

    NASA Technical Reports Server (NTRS)

    Panontin, Tina L.; Sheppard, Sheri D.

    1994-01-01

    The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). 
These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.

  15. Factors which modulate the rates of skeletal muscle mass loss in non-small cell lung cancer patients: a pilot study.

    PubMed

    Atlan, Philippe; Bayar, Mohamed Amine; Lanoy, Emilie; Besse, Benjamin; Planchard, David; Ramon, Jordy; Raynard, Bruno; Antoun, Sami

    2017-11-01

    Advanced non-small cell lung cancer (NSCLC) is associated with weight loss, which may reflect skeletal muscle mass (SMM) and/or total adipose tissue (TAT) depletion. This study aimed to describe changes in body composition (BC) parameters and to identify the factors unrelated to the tumor which modulate them. SMM, TAT, and the proportion of SMM to SMM + TAT were assessed with computed tomography. Estimates of each BC parameter at follow-up initiation and across time were derived from a mixed linear model of repeated measurements with a random intercept and a random slope. The same models were used to assess the independent effect of gender, age, body mass index (BMI), and initial values on changes in each BC parameter. Sixty-four patients with stage III or IV NSCLC were reviewed. The mean ± SD decreases in body weight and SMM were, respectively, 59 ± 3 g/week (P < 0.03) and 7 mm²/m²/week (P = 0.0003). During follow-up, no changes were identified in TAT, muscle density, or the proportion of SMM to SMM + TAT, estimated at 37 ± 2% at baseline. SMM loss was influenced by the initial BMI (P < 0.0001) and SMM values (P = 0.0002): the higher the initial BMI or SMM values, the greater the loss observed. Weight loss was greater when the initial weight was heavier (P < 0.0001). Our results demonstrate that SMM wasting in NSCLC is lower when initial SMM and BMI values are low. These exploratory findings from our attempt to better understand the intrinsic factors associated with muscle mass depletion need to be confirmed in larger studies.

  16. An opinion-driven behavioral dynamics model for addictive behaviors

    NASA Astrophysics Data System (ADS)

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; Ambrose, Bridget K.; Brodsky, Nancy S.; Brown, Theresa J.; Husten, Corinne; Glass, Robert J.

    2015-04-01

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
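
    The coupling of a continuous opinion to a discrete, history-dependent behavior can be sketched as follows; the averaging update rule, thresholds, and random network below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(adj, opinions, steps=100, t_start=0.5, t_quit=-0.5, mu=0.3):
    """Opinion dynamics on a directed network coupled to a discrete,
    history-dependent behavior. The two thresholds encode hysteresis:
    starting the behavior requires a high opinion (t_start), but once
    started, only a much lower opinion (t_quit) stops it, mimicking the
    physiological lock-in described in the abstract.
    """
    n = len(opinions)
    behavior = np.zeros(n, dtype=bool)
    for _ in range(steps):
        for i in range(n):
            nbrs = np.nonzero(adj[:, i])[0]        # in-neighbors j -> i
            if len(nbrs):
                opinions[i] += mu * (opinions[nbrs].mean() - opinions[i])
        behavior = np.where(behavior, opinions > t_quit, opinions > t_start)
    return opinions, behavior

n = 20
adj = (rng.random((n, n)) < 0.2).astype(int)       # random directed edges
np.fill_diagonal(adj, 0)
op_final, beh = simulate(adj, rng.uniform(-1, 1, n))
```

    Sweeping the thresholds or the network size in this sketch reproduces the qualitative point of the abstract: small networks give noisy, variable behavioral outcomes.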

  17. mocca-SURVEY database I. Accreting white dwarf binary systems in globular clusters - III. Cataclysmic variables - implications of model assumptions

    NASA Astrophysics Data System (ADS)

    Belloni, Diogo; Zorotovic, Mónica; Schreiber, Matthias R.; Leigh, Nathan W. C.; Giersz, Mirek; Askar, Abbas

    2017-06-01

    In this third of a series of papers related to cataclysmic variables (CVs) and related objects, we analyse the population of CVs in a set of 12 globular cluster models evolved with the MOCCA Monte Carlo code, for two initial binary populations (IBPs), two choices of common-envelope phase (CEP) parameters, and three different models for the evolution of CVs and the treatment of angular momentum loss. When more realistic models and parameters are considered, we find that present-day cluster CV duty cycles are extremely low (≲0.1 per cent), which makes their detection during outbursts rather difficult. Additionally, the IBP plays a significant role in shaping the CV population properties, and models that follow the Kroupa IBP are less affected by enhanced angular momentum loss. We also predict from our simulations that CVs formed dynamically in the past few Gyr (massive CVs) correspond to bright CVs (as expected) and that faint CVs formed several Gyr ago (dynamically or not) represent the overwhelming majority. Regarding the CV formation rate, we rule out the notion that it is similar irrespective of the cluster properties. Finally, we discuss the differences in the present-day CV properties related to the IBPs, the initial cluster conditions, the CEP parameters, the formation channels, the CV evolution models and the angular momentum loss treatments.

  18. Displacement back analysis for a high slope of the Dagangshan Hydroelectric Power Station based on BP neural network and particle swarm optimization.

    PubMed

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. The key problems are how to obtain the mechanical parameters and then evaluate the safety of the slope. This paper presents a displacement back analysis for the slope using an artificial neural network (ANN) model and a particle swarm optimization (PSO) model. A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation (BP) ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the BP network model and determine suitable values of the mechanical parameters. The elastic moduli of the rock masses were then obtained from the monitoring displacement data at different excavation stages, and the BP neural network model was shown to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for determining rock mechanical parameters and investigating the instability of rock slopes.

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
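
    The recursive least squares initialization step can be illustrated with a deliberately reduced observation model V = OCV - R0·I, which is linear in the parameters (the paper's full first-order equivalent circuit adds an RC polarization branch, omitted here); all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor `lam`."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + K * (y - phi @ theta)     # correct estimate by innovation
    P = (P - np.outer(K, phi @ P)) / lam      # update covariance
    return theta, P

# Reduced model: V = OCV - R0*I, i.e. y = phi^T theta with
# theta = [OCV, R0] and regressor phi = [1, -I].
ocv_true, r0_true = 3.7, 0.05
theta = np.array([3.0, 0.0])          # poor initial guess, to be refined
P = np.eye(2) * 100.0                 # large P: little confidence in the guess
for _ in range(500):
    I = rng.uniform(-2.0, 2.0)                        # measured current (A)
    V = ocv_true - r0_true * I + rng.normal(0, 1e-3)  # measured voltage (V)
    theta, P = rls_update(theta, P, np.array([1.0, -I]), V)
```

    The identified `theta` would then seed the EKF, which is the role the abstract assigns to the recursive least squares stage: good initial values that keep the filter stable and shorten convergence.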

  20. Displacement Back Analysis for a High Slope of the Dagangshan Hydroelectric Power Station Based on BP Neural Network and Particle Swarm Optimization

    PubMed Central

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. The key problems are how to obtain the mechanical parameters and then evaluate the safety of the slope. This paper presents a displacement back analysis for the slope using an artificial neural network (ANN) model and a particle swarm optimization (PSO) model. A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation (BP) ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the BP network model and determine suitable values of the mechanical parameters. The elastic moduli of the rock masses were then obtained from the monitoring displacement data at different excavation stages, and the BP neural network model was shown to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for determining rock mechanical parameters and investigating the instability of rock slopes. PMID:25140345

  1. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution); thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model, which is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
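
    Drawing a fractal (power-law) model realization from a mean, variance and Hurst coefficient can be sketched by spectral synthesis; the spectral exponent convention beta = 2H - 1 and all settings below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def fractal_realization(n, hurst, mean=0.0, std=1.0):
    """Draw a random model with a power-law (fractal) spectrum by spectral
    synthesis: amplitudes ~ f^(-beta/2) with beta = 2H - 1 (a fractional-
    Gaussian-noise-like convention), random phases, inverse FFT, then
    rescaling to the requested mean and standard deviation."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    freqs[0] = freqs[1]                  # avoid division by zero at f = 0
    beta = 2.0 * hurst - 1.0
    amp = freqs ** (-beta / 2.0)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    x = (x - x.mean()) / x.std()
    return mean + std * x

# Hypothetical acoustic-impedance-like model trace: persistent (H > 0.5).
model = fractal_realization(1024, hurst=0.7, mean=2.5, std=0.3)
```

    Realizations like `model` can serve as initial models for a global optimization scheme, replacing the Gaussian draws of conventional inversion.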

  2. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    NASA Astrophysics Data System (ADS)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large carbon dioxide amounts are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32 and 46% of all sites the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
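
    The RC model used here is commonly written with a gamma distribution of first-order decay rates, which gives a closed-form fraction of mass remaining; a minimal sketch with illustrative parameter values:

```python
import numpy as np

def rc_mass_remaining(t, a, v):
    """Reactivity continuum (gamma-distributed rate constants): fraction
    of initial litter mass remaining at time t.
    a : scale parameter (time units), and
    v : shape parameter describing the spread of initial reactivities."""
    return (a / (a + t)) ** v

def rc_apparent_rate(t, a, v):
    """Apparent first-order decay rate k(t) = v / (a + t): decomposition
    is fast early on and slows as labile components are depleted."""
    return v / (a + t)

# Illustrative values (not fitted to the LIDET data).
t = np.linspace(0.0, 10.0, 6)           # years
frac = rc_mass_remaining(t, a=1.0, v=0.5)
```

    The continuously decreasing apparent rate is what distinguishes the RC formulation from the discrete multiexponential pool models the abstract argues against.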

  3. System cost performance analysis (study 2.3). Volume 1: Executive summary. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.

  4. Bayesian approach to analyzing holograms of colloidal particles.

    PubMed

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
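
    The sampling idea (though not the tempering, nor the Lorenz-Mie forward model) can be sketched with a plain random-walk Metropolis sampler on a toy one-dimensional stand-in for the hologram:

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis(logpost, x0, step, n):
    """Plain random-walk Metropolis over the model parameters (the paper
    uses a tempered MCMC; this untempered version shows only the idea)."""
    x = np.array(x0, float)
    lp = logpost(x)
    chain = np.empty((n, len(x)))
    for k in range(n):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[k] = x
    return chain

# Toy 1D stand-in for the forward model: a Gaussian "fringe envelope"
# whose center and width play the roles of particle position and size;
# the real forward model would be Lorenz-Mie scattering.
grid = np.linspace(-5.0, 5.0, 100)
true_center, true_width, noise = 1.0, 1.5, 0.05
data = np.exp(-((grid - true_center) / true_width) ** 2) \
       + rng.normal(0.0, noise, grid.size)

def logpost(p):
    center, width = p
    if not (-5.0 < center < 5.0 and 0.1 < width < 10.0):  # flat prior bounds
        return -np.inf
    model = np.exp(-((grid - center) / width) ** 2)
    return -0.5 * np.sum((data - model) ** 2) / noise ** 2

chain = metropolis(logpost, x0=[0.0, 1.0], step=0.05, n=8000)
post_mean = chain[4000:].mean(axis=0)     # posterior mean after burn-in
```

    The spread of the post-burn-in chain provides the parameter uncertainties directly, which is the advantage over least-squares fitting highlighted in the abstract.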

  5. Long-time predictability in disordered spin systems following a deep quench

    NASA Astrophysics Data System (ADS)

    Ye, J.; Gheissari, R.; Machta, J.; Newman, C. M.; Stein, D. L.

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit—in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.

  6. Long-time predictability in disordered spin systems following a deep quench.

    PubMed

    Ye, J; Gheissari, R; Machta, J; Newman, C M; Stein, D L

    2017-04-01

    We study the problem of predictability, or "nature vs nurture," in several disordered Ising spin systems evolving at zero temperature from a random initial state: How much does the final state depend on the information contained in the initial state, and how much depends on the detailed history of the system? Our numerical studies of the "dynamical order parameter" in Edwards-Anderson Ising spin glasses and random ferromagnets indicate that the influence of the initial state decays as dimension increases. Similarly, this same order parameter for the Sherrington-Kirkpatrick infinite-range spin glass indicates that this information decays as the number of spins increases. Based on these results, we conjecture that the influence of the initial state on the final state decays to zero in finite-dimensional random-bond spin systems as dimension goes to infinity, regardless of the presence of frustration. We also study the rate at which spins "freeze out" to a final state as a function of dimensionality and number of spins; here the results indicate that the number of "active" spins at long times increases with dimension (for short-range systems) or number of spins (for infinite-range systems). We provide theoretical arguments to support these conjectures, and also study analytically several mean-field models: the random energy model, the uniform Curie-Weiss ferromagnet, and the disordered Curie-Weiss ferromagnet. We find that for these models, the information contained in the initial state does not decay in the thermodynamic limit-in fact, it fully determines the final state. Unlike in short-range models, the presence of frustration in mean-field models dramatically alters the dynamical behavior with respect to the issue of predictability.
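
    The "nature vs nurture" overlap can be illustrated on a one-dimensional random ferromagnet: run zero-temperature dynamics twice from the same random initial state with different dynamical histories and compare the final states (sizes, seeds, and the bond distribution are arbitrary illustrative choices):

```python
import numpy as np

def zero_t_dynamics(J, s, steps, rng):
    """Zero-temperature single-spin-flip dynamics on a ring: a randomly
    chosen spin aligns with its local field; zero-field ties (which have
    probability zero here, since the bonds are continuous) would be
    broken at random."""
    n = len(s)
    for _ in range(steps):
        i = rng.integers(n)
        h = J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n]
        s[i] = int(np.sign(h)) if h != 0 else rng.choice([-1, 1])
    return s

n = 200
init_rng = np.random.default_rng(5)
J = init_rng.uniform(0.5, 1.5, n)        # random ferromagnetic bonds (ring)
s0 = init_rng.choice([-1, 1], n)         # one shared random initial state
finals = [zero_t_dynamics(J, s0.copy(), 50 * n, np.random.default_rng(seed))
          for seed in (10, 11)]          # two independent dynamical histories
q = float(np.mean(finals[0] * finals[1]))  # overlap: how much "nature" fixes
```

    Averaging such overlaps over initial conditions gives a dynamical order parameter of the kind studied in the abstract; repeating the measurement at different system sizes probes how predictability decays.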

  7. Assimilation of Sea Color Data Into A Three Dimensional Biogeochemical Model: Sensitivity Experiments

    NASA Astrophysics Data System (ADS)

    Echevin, V.; Levy, M.; Memery, L.

    The assimilation of two-dimensional sea color data fields into a three-dimensional coupled dynamical-biogeochemical model is performed using a 4DVAR algorithm. The biogeochemical model includes descriptions of nitrate, ammonium, phytoplankton, zooplankton, detritus and dissolved organic matter. A subset of the biogeochemical model's poorly known parameters (for example, phytoplankton growth, mortality and grazing) is optimized by minimizing a cost function measuring the misfit between the observations and the model trajectory. Twin experiments are performed with an eddy-resolving model of 5 km resolution in an academic configuration. Starting from oligotrophic conditions, an initially unstable baroclinic anticyclone splits into several eddies. Strong vertical velocities advect nitrate into the euphotic zone and generate a phytoplankton bloom. Biogeochemical parameters are perturbed to generate surface pseudo-observations of chlorophyll, which are assimilated into the model in order to retrieve the correct parameter perturbations. The impact of the type of measurement (quasi-instantaneous, daily mean, weekly mean) on the retrieved set of parameters is analysed. The impacts of additional subsurface measurements and of errors in the circulation are also presented.

  8. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
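
    A minimal nested-sampling sketch on a toy problem with a known evidence; the constrained draw is done by brute-force rejection, which is workable only for such low-dimensional examples:

```python
import numpy as np

rng = np.random.default_rng(6)

def nested_sampling(loglike, prior_draw, n_live=100, iters=600):
    """Minimal nested sampling: repeatedly discard the worst live point,
    shrink the prior volume by a factor ~exp(-1/n_live), and accumulate
    the evidence Z = sum over i of L_i * dX_i, turning the
    multi-dimensional evidence integral into a 1D sum over likelihood
    shells."""
    live = [prior_draw() for _ in range(n_live)]
    live_ll = np.array([loglike(p) for p in live])
    log_z, log_x = -np.inf, 0.0
    for _ in range(iters):
        worst = live_ll.argmin()
        log_x_new = log_x - 1.0 / n_live              # expected shrinkage
        log_w = live_ll[worst] + np.log(np.exp(log_x) - np.exp(log_x_new))
        log_z = np.logaddexp(log_z, log_w)
        log_x = log_x_new
        while True:                                   # rejection sampling
            p = prior_draw()                          # above the threshold
            if loglike(p) > live_ll[worst]:
                break
        live[worst], live_ll[worst] = p, loglike(p)
    return log_z

# Toy problem with a known answer: uniform prior on [0, 1] and
# likelihood L(x) = 2x, so the evidence is the integral of 2x over
# [0, 1], i.e. Z = 1 and log Z = 0.
log_z = nested_sampling(lambda x: np.log(2.0 * x + 1e-300),
                        lambda: rng.random())
```

    Running the same procedure for two competing models and taking the difference of their log-evidences yields the log Bayes factor used for model comparison in the abstract.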

  9. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…

  10. Dynamics of landslide model with time delay and periodic parameter perturbations

    NASA Astrophysics Data System (ADS)

    Kostić, Srđan; Vasović, Nebojša; Franović, Igor; Jevremović, Dragutin; Mitrinovic, David; Todorović, Kristina

    2014-09-01

    In the present paper, we analyze the dynamics of a single-block model on an inclined slope with the Dieterich-Ruina friction law under the variation of two newly introduced parameters: time delay Td and initial shear stress μ. It is assumed that this phenomenological model qualitatively simulates the motion along an infinite creeping slope. The introduction of time delay is proposed to mimic the memory effect of the sliding surface, and it is generally considered a function of the history of sliding. On the other hand, periodic perturbation of the initial shear stress emulates the external triggering effect of distant earthquakes or some non-natural vibration source. The effects of varying a single observed parameter, Td or μ, as well as of their co-action, are estimated for three different sliding regimes: β < 1, β = 1 and β > 1, where β stands for the ratio of long-term to short-term stress changes. The results of standard local bifurcation analysis indicate the onset of complex dynamics for very low values of time delay. The numerical approach, however, reveals additional complexity that was not observed by local analysis, owing to the possible effect of global bifurcations. The most complex dynamics is detected for β < 1, with a complete Ruelle-Takens-Newhouse route to chaos under the variation of Td, or the co-action of both parameters Td and μ. These results correspond well with previous experimental observations on clay and siltstone with low clay fraction. In the same regime, the perturbation of only a single parameter, μ, renders the motion of the block oscillatory. Within the velocity-independent regime, β = 1, the inclusion and variation of Td generate a transition to the equilibrium state, whereas small oscillations of μ induce oscillatory motion with decreasing amplitude. The co-action of both parameters in the same regime causes a decrease of the block's velocity. 
As for β > 1, high-frequency oscillations of the initial stress with limiting amplitude give rise to oscillatory motion. Also for β > 1, when only the initial shear stress is perturbed, with smaller amplitude, the velocity of the block changes exponentially fast. If time delay is introduced in addition to the stress perturbation within the same regime, the co-action of Td (Td < 0.1) and small oscillations of μ induces the onset of deterministic chaos.

  11. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
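
    The idea of integrating the equation error exactly, so that ordinary least squares applies without differentiating the data or knowing derivatives at the boundary, can be illustrated on the simplest case dx/dt = -a*x: integrating over [0, t] gives x(t) - x(0) = -a * integral of x, which is linear in a. The model, sampling grid and trapezoidal quadrature below are illustrative assumptions, not the paper's formulation:

    ```python
    import numpy as np

    # assumed example system dx/dt = -a*x with a known analytic trajectory
    a_true = 0.7
    t = np.linspace(0.0, 5.0, 201)
    x = 2.0 * np.exp(-a_true * t)          # noise-free data on a finite interval

    # cumulative integral of x via the trapezoidal rule
    I = np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * np.diff(t))))

    # integrated equation error: x(t) - x(0) = -a * I(t), a linear LSQ problem in a
    y = x - x[0]
    a_hat = -np.dot(I, y) / np.dot(I, I)
    ```

    The estimate involves only the measured trajectory and its running integral, which is the feature that makes the cost function explicit in the parameters.
    
    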

  12. Tectonic predictions with mantle convection models

    NASA Astrophysics Data System (ADS)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough for an accurate prediction of instantaneous flow, but not for a prediction after 10 My of evolution. Therefore, inverse methods (sequential or data assimilation methods) using short-term fully dynamic evolution that predict surface kinematics are promising tools for a better understanding of the state of the Earth's mantle.

  13. Development of full regeneration establishment models for the forest vegetation simulator

    Treesearch

    John D. Shaw

    2015-01-01

    For most simulation modeling efforts, the goal of model developers is to produce simulations that represent reality as faithfully as possible. Achieving this goal commonly requires a considerable amount of data to set the initial parameters, followed by validation and model improvement – both of which require even more data. The Forest Vegetation Simulator (FVS...

  14. Model for economic evaluation of high energy gas fracturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engi, D.

    1984-05-01

    The HEGF/NPV model has been developed and adapted for interactive microcomputer calculations of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. This model makes use of three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment. Agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County Well is not expected to be profitable, but the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.

  15. Equilibrium and kinetic modelling of Cd(II) biosorption by algae Gelidium and agar extraction algal waste.

    PubMed

    Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R

    2006-01-01

    In this study an industrial algal waste from agar extraction has been used as an inexpensive and effective biosorbent for cadmium (II) removal from aqueous solutions. This biosorbent was compared with the alga Gelidium itself, which is the raw material for agar extraction. Equilibrium data follow both Langmuir and Redlich-Peterson models. The parameters of the Langmuir equilibrium model are q(max) = 18.0 mg g(-1), b = 0.19 l mg(-1) and q(max) = 9.7 mg g(-1), b = 0.16 l mg(-1), respectively, for Gelidium and the algal waste. Kinetic experiments were conducted at initial Cd(II) concentrations in the range 6-91 mg l(-1). Data were fitted to pseudo-first- and second-order Lagergren models. For an initial Cd(II) concentration of 91 mg l(-1) the parameters of the pseudo-first-order Lagergren model are k(1,ads) = 0.17 and 0.87 min(-1); q(eq) = 16.3 and 8.7 mg g(-1), respectively, for Gelidium and the algal waste. Kinetic constants vary with the initial metal concentration. The adsorptive behaviour of biosorbent particles was modelled using a batch reactor mass transfer kinetic model. The model successfully predicts Cd(II) concentration profiles and provides significant insights on the biosorbents' performance. The homogeneous diffusivity, D(h), is in the range 0.5-2.2 x 10(-8) and 2.1-10.4 x 10(-8) cm(2) s(-1), respectively, for Gelidium and the algal waste.
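
    The two fitted models can be evaluated directly with the reported parameters. A small sketch, where the concentration and time chosen below are arbitrary illustration points, not values from the study:

    ```python
    import math

    def langmuir(C, q_max, b):
        """Langmuir isotherm: equilibrium uptake q (mg/g) at concentration C (mg/l)."""
        return q_max * b * C / (1.0 + b * C)

    def lagergren_first_order(t, q_eq, k1):
        """Pseudo-first-order Lagergren kinetics: uptake q(t) approaching q_eq."""
        return q_eq * (1.0 - math.exp(-k1 * t))

    # reported parameters for Gelidium, evaluated at assumed example points
    q_equilibrium = langmuir(50.0, q_max=18.0, b=0.19)            # C = 50 mg/l
    q_after_10min = lagergren_first_order(10.0, q_eq=16.3, k1=0.17)
    ```

    Comparing such predicted uptakes with measured time profiles at several initial concentrations is what reveals that the kinetic constants vary with the initial metal concentration.
    
    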

  16. Incorporating TPC observed parameters and QuikSCAT surface wind observations into hurricane initialization using 4D-VAR approaches

    NASA Astrophysics Data System (ADS)

    Park, Kyungjeen

    This study aims to develop an objective hurricane initialization scheme which incorporates not only forecast model constraints but also observed features such as the initial intensity and size. It is based on the four-dimensional variational (4D-Var) bogus data assimilation (BDA) scheme originally proposed by Zou and Xiao (1999). The 4D-Var BDA consists of two steps: (i) specifying a bogus sea level pressure (SLP) field based on parameters observed by the Tropical Prediction Center (TPC) and (ii) assimilating the bogus SLP field under a forecast model constraint to adjust all model variables. This research focuses on improving the specification of the bogus SLP indicated in the first step. Numerical experiments are carried out for Hurricane Bonnie (1998) and Hurricane Gordon (2000) to test the sensitivity of hurricane track and intensity forecasts to the specification of the initial vortex. Major results are listed below: (1) A linear regression model is developed for determining the size of the initial vortex based on the TPC-observed radius of 34-kt winds. (2) A method is proposed to derive a radial profile of SLP from QuikSCAT surface winds. This profile is shown to be more realistic than ideal profiles derived from Fujita's and Holland's formulae. (3) It is found that it takes about 1 h for the hurricane prediction model to develop a conceptually correct hurricane structure, featuring a dominant role of hydrostatic balance at the initial time and a dynamic adjustment in less than 30 minutes. (4) Numerical experiments suggest that the track prediction is less sensitive to the specification of the initial vortex structure than the intensity forecast is. (5) Hurricane initialization using the QuikSCAT-derived initial vortex produced a reasonably good forecast of hurricane landfall, with a position error of 25 km and a 4-h delay in landfall. 
(6) Numerical experiments using the linear regression model for the size specification considerably outperform all the other formulations tested in terms of intensity prediction for both hurricanes. For example, the maximum track error is less than 110 km during the entire three-day forecasts for both hurricanes. The simulated Hurricane Gordon using the linear regression model made a nearly perfect landfall, with no position error and only a 1-h error in landfall time. (7) Diagnosis of model output indicates that the initial vortex specified by the linear regression model produces larger surface fluxes of sensible heat, latent heat and moisture, as well as stronger downward angular momentum transport, than all the other schemes do. These enhanced energy supplies offset the energy lost to friction and gravity wave propagation, allowing the model to maintain a strong and realistic hurricane during the entire forward model integration.

  17. Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2018-05-01

    A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.

  18. Establishing endangered species recovery criteria using predictive simulation modeling

    USGS Publications Warehouse

    McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol

    2014-01-01

    Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
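
    The kind of binomial-regression link described above, connecting extinction probability to mean growth rate and the natural log of initial abundance, can be sketched and inverted in a few lines. The coefficients below are invented for illustration and are not the fitted values from the study:

    ```python
    import math

    # Hypothetical regression coefficients (for illustration only): extinction
    # risk declines with ln(initial abundance) and with mean growth rate lam.
    B0, B_LOG_N, B_LAM = 5.0, -1.2, -4.0

    def extinction_prob(n0, lam, b0=B0, b_log_n=B_LOG_N, b_lam=B_LAM):
        """Binomial-regression link: P(extinct) = logit^-1(b0 + b_log_n*ln(n0) + b_lam*lam)."""
        eta = b0 + b_log_n * math.log(n0) + b_lam * lam
        return 1.0 / (1.0 + math.exp(-eta))

    def abundance_for_risk(p_target, lam, b0=B0, b_log_n=B_LOG_N, b_lam=B_LAM):
        """Invert the link: initial abundance that limits extinction risk to p_target."""
        logit = math.log(p_target / (1.0 - p_target))
        return math.exp((logit - b0 - b_lam * lam) / b_log_n)
    ```

    The inversion step is the point of the method: once the regression is fitted to simulation output, a target risk threshold translates directly into a specific, measurable abundance criterion.
    
    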

  19. Modelling of the hole-initiated impact ionization current in the framework of hydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Lorenzini, Martino; Van Houdt, Jan

    2002-02-01

    Several research papers have shown the feasibility of the hydrodynamic transport model to investigate impact ionization in semiconductor devices by means of mean-energy-dependent generation rates. However, the analysis has been usually carried out for the case of the electron-initiated impact ionization process and less attention has been paid to the modelling of the generation rate due to impact ionization events initiated by holes. This paper therefore presents an original model for the hole-initiated impact ionization in silicon and validates it by comparing simulation results with substrate currents taken from p-channel transistors manufactured in a 0.35 μm CMOS technology having three different channel lengths. The experimental data are successfully reproduced over a wide range of applied voltages using only one fitting parameter. Since the impact ionization of holes triggers the mechanism responsible for the back-bias enhanced gate current in deep submicron nMOS devices, the model can be exploited in the development of non-volatile memories programmed by secondary electron injection.

  20. A nonlinear model for analysis of slug-test data

    USGS Publications Warehouse

    McElwee, C.D.; Zenner, M.A.

    1998-01-01

    While doing slug tests in high-permeability aquifers, we have consistently seen deviations from the expected response of linear theoretical models. Normalized curves do not coincide for various initial heads, as would be predicted by linear theories, and are shifted to larger times for higher initial heads. We have developed a general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the well bore, and a Hvorslev model for the aquifer, which explains these data features. The model produces a very good fit for both oscillatory and nonoscillatory field data, using a single set of physical parameters to predict the field data for various initial displacements at a given well. This is in contrast to linear models which have a systematic lack of fit and indicate that hydraulic conductivity varies with the initial displacement. We recommend multiple slug tests with a considerable variation in initial head displacement to evaluate the possible presence of nonlinear effects. Our conclusion is that the nonlinear model presented here is an excellent tool to analyze slug tests, covering the range from the underdamped region to the overdamped region.

  1. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae).

    PubMed

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Degraded areas constitute challenging tasks with respect to sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The run of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape related aspects seem to be important with respect to these differences. As characteristic phases, a 'delay phase', an 'increase phase' and a 'stagnation phase' were identified. In general, the runs of succession could be described by four different parameters: (1) 'Initial degradation level', (2) 'delay', (3) 'increase rate' and (4) 'recovery level'. Applying the analytic solution of the logistic equation, characteristic values for the parameters were identified for each of the five area types. The model is of practical use, because it provides a possibility to compare the values of the parameters elaborated in different areas, to give hints for intervention and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession.
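
    The four-parameter description of a succession run can be sketched with the analytic solution of the logistic equation. The piecewise handling of the delay phase and the parameter values used in the example are illustrative assumptions, not fitted values from the paper:

    ```python
    import math

    def mib(t, m0, delay, r, m_rec):
        """Mean Individual Biomass of Carabidae (MIB) along a succession run.

        m0    : initial degradation level (MIB at t = 0)
        delay : length of the delay phase
        r     : increase rate
        m_rec : recovery level approached in the stagnation phase
        """
        if t <= delay:
            return m0                     # delay phase: no recovery yet
        tau = t - delay
        # analytic solution of the logistic equation dM/dt = r*M*(1 - M/m_rec),
        # started from m0 at the end of the delay phase
        return m_rec / (1.0 + (m_rec / m0 - 1.0) * math.exp(-r * tau))
    ```

    With parameters fitted per area type, curves like this one make the comparison between areas and the prognosis of future succession concrete.
    
    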

  2. Predicting temperature drop rate of mass concrete during an initial cooling period using genetic programming

    NASA Astrophysics Data System (ADS)

    Bhattarai, Santosh; Zhou, Yihong; Zhao, Chunju; Zhou, Huawei

    2018-02-01

    Thermal cracking of concrete dams depends upon the rate at which the concrete is cooled (temperature drop per day) within an initial cooling period during the construction phase. Thus, in order to control thermal cracking of such structures, the temperature rise due to the heat of hydration of cement should be brought down at a suitable rate. In this study, an attempt has been made to formulate the relation between the cooling rate of mass concrete, the age of the concrete, and the water-cooling parameters: flow rate and inlet temperature of the cooling water. Data measured in the summer season (April-August, 2009 to 2012) at a recently constructed high concrete dam were used to derive a prediction model with the help of the Genetic Programming (GP) software "Eureqa". The coefficient of determination (R) and mean square error (MSE) were used to evaluate the performance of the model; their values are 0.8855 and 0.002961, respectively. Sensitivity analysis was performed to evaluate the relative impact of the input parameters on the target parameter. Further, when the proposed model was tested with an independent dataset not included in the analysis, the results obtained from the proposed GP model were close to the real field data.

  3. On Discontinuous Piecewise Linear Models for Memristor Oscillators

    NASA Astrophysics Data System (ADS)

    Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier

    2017-06-01

    In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions, which account for the infinite number of periodic orbits exhibited by these models, is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.

  4. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1995-08-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, Al, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition + two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/Al/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code NLQPEB/GLO was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model.

  5. Pneumatic tyres interacting with deformable terrains

    NASA Astrophysics Data System (ADS)

    Bekakos, C. A.; Papazafeiropoulos, G.; O'Boy, D. J.; Prins, J.

    2016-09-01

    In this study, a numerical model of a deformable tyre interacting with a deformable road has been developed with the use of the finite element code ABAQUS (v. 6.13). Two tyre models with different widths, not necessarily identical to any real industry tyres, have been created purely for research use. The behaviour of these tyres under various vertical loads and different inflation pressures is studied, initially in contact with a rigid surface and then with a deformable terrain. After ensuring that the tyre model gives realistic results in terms of the interaction with a rigid surface, the rolling process of the tyre on a deformable road was studied. The effects of friction coefficient, inflation pressure, rebar orientation and vertical load on the overall performance are reported. Regarding the modelling procedure, a sequence of models was analysed using the coupled implicit-explicit method. The numerical results reveal not only that there is significant dependence of the final tyre response on the various initial driving parameters, but also that special conditions emerge where the desired response of the tyre results from a specific optimum combination of these parameters.

  6. Techno-economical optimization of Reactive Blue 19 removal by combined electrocoagulation/coagulation process through MOPSO using RSM and ANFIS models.

    PubMed

    Taheri, M; Alavi Moghaddam, M R; Arami, M

    2013-10-15

    In this research, Response Surface Methodology (RSM) and Adaptive Neuro Fuzzy Inference System (ANFIS) models were applied for optimization of Reactive Blue 19 removal using a combined electrocoagulation/coagulation process through Multi-Objective Particle Swarm Optimization (MOPSO). By applying RSM, the effects of five independent parameters, including applied current, reaction time, initial dye concentration, initial pH and dosage of Poly Aluminum Chloride, were studied. According to the RSM results, all the independent parameters are equally important in dye removal efficiency. In addition, ANFIS was applied for modeling dye removal efficiency and operating costs. High R(2) values (≥85%) indicate that the predictions of the RSM and ANFIS models are acceptable for both responses. ANFIS was also used in MOPSO for finding the best techno-economical Reactive Blue 19 elimination conditions according to the RSM design. Through MOPSO and the selected ANFIS model, minimum and maximum dye removal efficiencies of 58.27% and 99.67% were obtained, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Predicting Bone Mechanical State During Recovery After Long-Duration Skeletal Unloading Using QCT and Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Chang, Katarina L.; Pennline, James A.

    2013-01-01

    During long-duration missions at the International Space Station, astronauts experience weightlessness leading to skeletal unloading. Unloading causes a lack of the mechanical stimulus that triggers bone cellular units to remove mass from the skeleton. A mathematical system of the cellular dynamics predicts theoretical changes to volume fractions and ash fraction in response to temporal variations in skeletal loading. No current model uses image technology to gather information about a skeletal site's initial properties to calculate bone remodeling changes and then to compare predicted bone strengths with the initial strength. The goal of this study is to use quantitative computed tomography (QCT) in conjunction with a computational model of the bone remodeling process to establish initial bone properties and to predict changes in bone mechanics during bone loss and recovery with finite element (FE) modeling. Input parameters for the remodeling model include bone volume fraction and ash fraction, which are both computed from the QCT images. A non-destructive approach to measure ash fraction is also derived. Voxel-based finite element models (FEM) created from QCTs provide an initial evaluation of bone strength. Bone volume fraction and ash fraction outputs from the computational model predict changes to the elastic modulus of bone via a two-parameter equation. The modulus captures the effect of bone remodeling and functions as the key to evaluating changes in strength. Application of this time-dependent modulus to FEMs and composite beam theory enables an assessment of bone mechanics during recovery. Prediction of bone strength is not only important for astronauts, but is also pertinent to millions of patients with osteoporosis and low bone density.

  8. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  9. Effect of the curvature parameter on least-squares prediction within poor data coverage: case study for Africa

    NASA Astrophysics Data System (ADS)

    Abd-Elmotaal, Hussein; Kühtreiber, Norbert

    2016-04-01

    In the framework of the IAG African Geoid Project, the gravity database contains many large data gaps. These gaps are initially filled using the unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
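
    A minimal sketch of unequal-weight least-squares prediction with a generalized Hirvonen covariance model. The functional form C(s) = C0 / (1 + (s/d)^2)^p, its parameter values, and the one-dimensional geometry are assumptions for illustration (the exponent p here plays the role of the curvature-related parameter):

    ```python
    import numpy as np

    def hirvonen(s, c0=100.0, d=50.0, p=1.0):
        """Assumed generalized Hirvonen covariance: C(s) = C0 / (1 + (s/d)^2)^p.

        The exponent p controls the curvature of the covariance at the origin.
        """
        return c0 / (1.0 + (s / d) ** 2) ** p

    def ls_prediction(x_obs, l_obs, sigma, x_new, cov=hirvonen):
        """Unequal-weight least-squares prediction of the signal at x_new.

        sigma holds per-observation noise standard deviations (unequal weights).
        """
        d_oo = np.abs(x_obs[:, None] - x_obs[None, :])
        C_oo = cov(d_oo) + np.diag(sigma ** 2)          # signal + noise covariance
        C_no = cov(np.abs(x_new[:, None] - x_obs[None, :]))
        return C_no @ np.linalg.solve(C_oo, l_obs)      # s_hat = C_no (C_oo + D)^-1 l
    ```

    Varying p (and rerunning the prediction in a simulated gap) is one simple way to probe how sensitive the filled-in values are to the curvature parameter.
    
    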

  10. Dynamics of a quasiparticle in the α-T3 model: role of pseudospin polarization and transverse magnetic field on zitterbewegung

    NASA Astrophysics Data System (ADS)

    Biswas, Tutul; Kanti Ghosh, Tarun

    2018-02-01

    We consider the α-T3 model which provides a smooth crossover between the honeycomb lattice with pseudospin 1/2 and the dice lattice with pseudospin 1 through the variation of a parameter α. We study the dynamics of a wave packet representing a quasiparticle in the α-T3 model with zero and finite transverse magnetic field. For zero field, it is shown that the wave packet undergoes a transient zitterbewegung (ZB). Various features of ZB depending on the initial pseudospin polarization of the wave packet have been revealed. For an intermediate value of the parameter α, i.e. for 0 < α < 1, the resulting ZB consists of two distinct frequencies when the wave packet was located initially in a rim site. However, the wave packet exhibits single-frequency ZB for α = 0 and α = 1. It is also unveiled that the frequency of ZB corresponding to α = 1 is exactly half of that corresponding to the α = 0 case. On the other hand, when the initial wave packet was in a hub site, the ZB consists of only one frequency for all values of α. Using the stationary phase approximation, we find an analytical expression for the velocity average which can be used to extract the associated timescale over which the transient nature of ZB persists. On the contrary, the wave packet undergoes permanent ZB in the presence of a transverse magnetic field. Due to the presence of a large number of Landau energy levels, the oscillations in ZB appear to be much more complicated. The oscillation pattern depends significantly on the initial pseudospin polarization of the wave packet. Furthermore, it is revealed that the number of frequency components involved in ZB depends on the parameter α.

  11. Dynamics of a quasiparticle in the α-T3 model: role of pseudospin polarization and transverse magnetic field on zitterbewegung.

    PubMed

    Biswas, Tutul; Kanti Ghosh, Tarun

    2018-01-22

    We consider the α-T3 model, which provides a smooth crossover between the honeycomb lattice with pseudospin 1/2 and the dice lattice with pseudospin 1 through the variation of a parameter α. We study the dynamics of a wave packet representing a quasiparticle in the α-T3 model with zero and finite transverse magnetic field. For zero field, it is shown that the wave packet undergoes a transient zitterbewegung (ZB). Various features of ZB depending on the initial pseudospin polarization of the wave packet are revealed. For an intermediate value of the parameter α, i.e. for 0 < α < 1, the resulting ZB consists of two distinct frequencies when the wave packet is initially located on a rim site. However, the wave packet exhibits single-frequency ZB for α = 0 and α = 1. It is also found that the ZB frequency for α = 1 is exactly half of that for the α = 0 case. On the other hand, when the initial wave packet is on a hub site, the ZB consists of only one frequency for all values of α. Using the stationary phase approximation, we find an analytical expression for the average velocity, which can be used to extract the timescale over which the transient ZB persists. In contrast, the wave packet undergoes permanent ZB in the presence of a transverse magnetic field. Due to the presence of a large number of Landau energy levels, the oscillations in ZB are much more complicated, and the oscillation pattern depends significantly on the initial pseudospin polarization of the wave packet. Furthermore, the number of frequency components involved in ZB depends on the parameter α.

  12. A Study of Parameters of the Counterpropagating Leader and its Influence on the Lightning Protection of Objects Using Large-Scale Laboratory Modeling

    NASA Astrophysics Data System (ADS)

    Syssoev, V. S.; Kostinskiy, A. Yu.; Makalskiy, L. M.; Rakov, A. V.; Andreev, M. G.; Bulatov, M. U.; Sukharevsky, D. I.; Naumova, M. U.

    2014-04-01

    This work presents the results of experiments on initiating upward and downward leaders during the development of a long spark, performed to study the lightning protection of objects with the help of large-scale models. The influence of counterpropagating leaders on the process of a lightning strike to ground-based and insulated objects is discussed. In the first case, the upward negative leader is initiated by the positive downward leader, which propagates from the high-voltage electrode of a "rod-rod"-type Marx generator (the rod is located on the ground plane and is 3 m high) in a gap with a length of 9-12 m. The positive-voltage pulse, with a duration of 7500 μs, had an amplitude of up to 3 MV. In the second case, the positive upward leader was initiated in the electric field created by a cloud of negatively charged aerosol, which simulates a charged thunderstorm cell. In this case, all the phases characteristic of upward lightning initiated by tall ground-based objects, and of triggered lightning in experiments with an actual thunderstorm cloud, were observed in the forming spark discharge with a length of 1.5-2.0 m. The main parameters of the counterpropagating leader initiated by the objects during the large-scale model experiments with a long spark are presented.

  13. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
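The start-value dependence described above is easy to demonstrate with a minimal sketch (not one of the five packaged techniques the study compares): EM for a one-dimensional two-component Gaussian mixture, run from several random starting values, keeping the solution with the best log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 1-D data from two well-separated normal components
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

def em_gmm(x, k=2, iters=50, rng=None):
    """Basic EM for a 1-D Gaussian mixture from a random start."""
    mu = rng.choice(x, size=k, replace=False)   # "random start": means drawn from data
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum(), np.sort(mu)

# several random starts; retain the solution with the highest log-likelihood
runs = [em_gmm(x, rng=rng) for _ in range(5)]
best_loglik, best_mu = max(runs, key=lambda r: r[0])
```

Comparing the log-likelihoods in `runs` shows how different starts can settle on different local optima, which is exactly what the compared initialization strategies try to control.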

  14. Model Sensitivity and Use of the Comparative Finite Element Method in Mammalian Jaw Mechanics: Mandible Performance in the Gray Wolf

    PubMed Central

    Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes

    2011-01-01

    Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, masticatory muscle ratios, muscle subgroups present, and numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variation that arises during the model-building process. Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights into form and function. PMID:21559475

  15. Shapes, rotation, and pole solutions of the selected Hilda and Trojan asteroids

    NASA Astrophysics Data System (ADS)

    Gritsevich, Maria; Sonnett, Sarah; Torppa, Johanna; Mainzer, Amy; Muinonen, Karri; Penttilä, Antti; Grav, Thomas; Masiero, Joseph; Bauer, James; Kramer, Emily

    2017-04-01

    Binary asteroid systems contain key information about the dynamical and chemical environments in which they formed. For example, determining the formation environments of Trojan and Hilda asteroids (in 1:1 and 3:2 mean-motion resonance with Jupiter, respectively) will provide critical constraints on how small bodies, and the planets that drive their migration, must have moved throughout Solar System history, see e.g. [1-3]. Therefore, identifying and characterizing binary asteroids within the Trojan and Hilda populations could offer a powerful means of discerning between Solar System evolution models. Dozens of possibly close or contact binary Trojans and Hildas were identified within the data obtained by NEOWISE [4]. Densely sampled light curves of these candidate binaries have been obtained in order to resolve rotational light curve features that are indicative of binarity (e.g., [5-7]). We present an analysis of the shapes, rotation, and pole solutions of some of the follow-up targets observed with optical ground-based telescopes. To model the asteroid photometric properties, we use parameters describing the shape, surface light-scattering properties, and spin state of the asteroid. Scattering properties of the asteroid surface are modeled using the two-parameter H-G12 magnitude system. The initial best-fit parameters are determined by first using a triaxial ellipsoid shape model, scanning over period values and spin-axis orientations while fitting the other parameters; all parameters are then fitted, taking the initial values for the spin properties from the spin scan. In addition to the best-fit parameters, we also provide the distribution of possible solutions, which reflects the uncertainty in the solution caused by observational errors and model assumptions. 
The distribution of solutions is generated by Markov chain Monte Carlo sampling of the spin and shape model parameters, using both an ellipsoid shape model and a convex model whose Gaussian curvature is defined as a spherical harmonic series [8]. References: [1] Marzari F. and Scholl H. (1998), A&A, 339, 278. [2] Morbidelli A. et al. (2005), Nature, 435, 462. [3] Nesvorny D. et al. (2013), ApJ, 768, 45. [4] Sonnett S. et al. (2015), ApJ, 799, 191. [5] Behrend R. et al. (2006), A&A, 446, 1177. [6] Lacerda P. and Jewitt D. C. (2007), AJ, 133, 1393. [7] Oey J. (2016), MPB, 43, 45. [8] Muinonen et al., ACM 2017.

  16. Dependence of tropical cyclone development on the Coriolis parameter: A theoretical model

    NASA Astrophysics Data System (ADS)

    Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda

    2018-03-01

    A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity, in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free-atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5°, without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.

  17. Critical phenomena at the threshold of immediate merger in binary black hole systems: The extreme mass ratio case

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten; Akcay, Sarp; Barack, Leor; Nagar, Alessandro

    2012-10-01

    In numerical simulations of black hole binaries, Pretorius and Khurana [Class. Quantum Grav. 24, S83 (2007)] have observed critical behavior at the threshold between scattering and immediate merger. The number of orbits scales as n ≃ −γ ln|p − p*| along any one-parameter family of initial data such that the threshold is at p = p*. Hence, they conjecture that in ultrarelativistic collisions almost all the kinetic energy can be converted into gravitational waves if the impact parameter is fine-tuned to the threshold. As a toy model for the binary, they consider the geodesic motion of a test particle in a Kerr black hole spacetime, where the unstable circular geodesics play the role of critical solutions, and calculate the critical exponent γ. Here, we incorporate radiation reaction into this model using the self-force approximation. The critical solution now evolves adiabatically along a sequence of unstable circular geodesic orbits under the effect of the self-force. We confirm that almost all the initial energy and angular momentum are radiated on the critical solution. Our calculation suggests that, even for infinite initial energy, this happens over a finite number of orbits given by n∞ ≃ 0.41/η, where η is the (small) mass ratio. We derive expressions for the time spent on the critical solution, the number of orbits, and the radiated energy as functions of the initial energy and impact parameter.
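The scaling law n ≃ −γ ln|p − p*| implies that the critical exponent can be read off as the slope of a linear fit of the orbit count against −ln|p − p*|. A small sketch on synthetic data (assumed values γ = 0.8, p* = 1, made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_true, p_star = 0.8, 1.0

# synthetic "number of orbits" data obeying n ~ -gamma * ln|p - p*| with small noise
p = p_star + np.logspace(-6, -1, 40)
n_orbits = -gamma_true * np.log(np.abs(p - p_star)) + rng.normal(0, 0.01, 40)

# linear regression of n against -ln|p - p*| recovers the critical exponent
X = -np.log(np.abs(p - p_star))
gamma_fit, intercept = np.polyfit(X, n_orbits, 1)
```

In an actual simulation campaign one would bisect in p toward p* and fit only the near-threshold points, but the extraction of γ is the same linear fit.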

  18. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables in the case of 252Cf(sf) are investigated by exploring several models involved in the excitation energy sharing and spin-parity assignment between primary fission fragments. As a first step, the parameters used in the FIFRELIN Monte Carlo code "reference route" are presented: two parameters for the mass-dependent temperature ratio law and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the initial fragment entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities. Once these target observables are achieved, all other fission observables can be predicted. We show here the influence of the input parameters on the saw-tooth curve, and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients, or photon strength functions remains unchanged.

  19. Adaptive estimation of nonlinear parameters of a nonholonomic spherical robot using a modified fuzzy-based speed gradient algorithm

    NASA Astrophysics Data System (ADS)

    Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa

    2017-05-01

    This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear-in-parameters (NLP) chaotic system with parametric uncertainties. First, the mathematical model of the robot is derived by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed-gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step-length gains and initial conditions. The estimated parameters are updated adaptively according to the error between the estimated and true state values. Since the errors of the estimated states and parameters, as well as the convergence rates, depend significantly on the value of the step-length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; it is therefore suitable for implementation on a real robot.
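The flavor of gradient-based online parameter estimation can be conveyed by a far simpler stand-in than the PDSR: a hypothetical first-order plant ẋ = −a·x + u with unknown a, identified by a series-parallel observer with a speed-gradient-type update law (the plant, gains, and input are all illustrative assumptions, not the paper's robot model):

```python
import numpy as np

# Hypothetical plant xdot = -a*x + u with unknown parameter a = 2;
# series-parallel identifier with a gradient (speed-gradient-type) update.
a_true, dt, T = 2.0, 1e-3, 100.0
gain, lam = 5.0, 2.0             # adaptation gain (step-length gain) and feedback gain

x, x_hat, theta = 1.0, 1.0, 0.0  # true state, predicted state, parameter estimate
for i in range(int(T / dt)):
    u = np.sin(i * dt)                        # persistently exciting input
    e = x_hat - x                             # prediction error
    theta += dt * gain * e * x                # gradient update of the estimate
    x_hat += dt * (-theta * x + u - lam * e)  # identifier uses the measured state
    x += dt * (-a_true * x + u)               # simulated "true" plant
```

With the Lyapunov function V = e²/2 + (θ − a)²/(2·gain) one finds V̇ = −lam·e² ≤ 0, and the sinusoidal input provides the persistent excitation needed for θ → a; the role of the "step-length gain" the abstract discusses is played here by `gain`.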

  20. Multivariate modelling of prostate cancer combining magnetic resonance derived T2, diffusion, dynamic contrast-enhanced and spectroscopic parameters.

    PubMed

    Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M

    2015-05-01

    The objectives were to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA), and to compare the model's accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, the T2-defined peripheral zone (PZ), and the central gland (CG) were superimposed onto slice-matched parametric maps. T2, apparent diffusion coefficient, initial area under the gadolinium curve, vascular parameters (K(trans), Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). The area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90% specificity, sensitivity was 41% (MRSI voxel resolution) and 59% per lesion. At this specificity, an expert observer achieved 28% and 49% sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
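The gain from combining parameters can be sketched with a Fisher linear discriminant on synthetic two-parameter data (illustrative values only, not the study's MR measurements), scoring classifiers by the area under the ROC curve computed from the rank-sum statistic:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "tumour" vs "non-tumour" voxels with two correlated MR-like parameters
n = 500
cov = [[0.10, 0.02], [0.02, 0.05]]
normal = rng.multivariate_normal([1.5, 0.8], cov, n)
tumour = rng.multivariate_normal([1.1, 1.1], cov, n)
X = np.vstack([normal, tumour])
y = np.concatenate([np.zeros(n), np.ones(n)])

def auc_of(score):
    """Area under the ROC curve via the Mann-Whitney rank-sum statistic."""
    ranks = score.argsort().argsort() + 1
    return (ranks[y == 1].sum() - n * (n + 1) / 2) / (n * n)

# Fisher LDA: w = Sw^-1 (mu_tumour - mu_normal) combines both parameters
Sw = np.cov(normal, rowvar=False) + np.cov(tumour, rowvar=False)
w = np.linalg.solve(Sw, tumour.mean(0) - normal.mean(0))
auc_combined = auc_of(X @ w)

# best single parameter (trying both signs of each column)
auc_single = max(auc_of(s) for s in (X[:, 0], -X[:, 0], X[:, 1], -X[:, 1]))
```

On this synthetic example `auc_combined` exceeds `auc_single`, mirroring the abstract's finding that the multiparametric model outperforms any individual parameter.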

  1. Semiautomatic approaches to account for 3-D distortion of the electric field from local, near-surface structures in 3-D resistivity inversions of 3-D regional magnetotelluric data

    USGS Publications Warehouse

    Rodriguez, Brian D.

    2017-03-31

    This report summarizes the results of three-dimensional (3-D) resistivity inversion simulations that were performed to account for local 3-D distortion of the electric field in the presence of 3-D regional structure, without any a priori information on the actual 3-D distribution of the known subsurface geology. The methodology used a 3-D geologic model to create a 3-D resistivity forward (“known”) model that depicted the subsurface resistivity structure expected for the input geologic configuration. The calculated magnetotelluric response of the modeled resistivity structure was assumed to represent observed magnetotelluric data and was subsequently used as input into a 3-D resistivity inverse model that used an iterative 3-D algorithm to estimate 3-D distortions without any a priori geologic information. A publicly available inversion code, WSINV3DMT, was used for all of the simulated inversions, initially using the default parameters, and subsequently using adjusted inversion parameters. A semiautomatic approach of accounting for the static shift using various selections of the highest frequencies and initial models was also tested. The resulting 3-D resistivity inversion simulation was compared to the “known” model and the results evaluated. The inversion approach that produced the lowest misfit to the various local 3-D distortions was an inversion that employed an initial model volume resistivity that was nearest to the maximum resistivities in the near-surface layer.

  2. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km2) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites, located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images acquired on about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sums was reduced by 23% when estimated rather than default parameters were used as input. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. 
Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in spring. In order to obtain a more comprehensive estimate of the model uncertainty, a second CLM ensemble was set up, where initial conditions and atmospheric forcings were perturbed in addition to the parameter estimates. This resulted in very high standard deviations (STD) of the modeled annual NEE sums for C3-grass and C3-crop PFTs, ranging between 24.1 and 225.9 gC m-2 y-1, compared to STD = 0.1 - 3.4 gC m-2 y-1 (effect of parameter uncertainty only, without additional perturbation of initial states and atmospheric forcings). The higher spread of modeled NEE for the C3-crop and C3-grass indicated that the model uncertainty was notably higher for those PFTs compared to the forest-PFTs. Our findings highlight the potential of parameter and uncertainty estimation to support the understanding and further development of land surface models such as CLM.
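A heavily simplified stand-in for the DREAM step (plain random-walk Metropolis, a single parameter, and a made-up exponential "NEE-like" model, rather than the multi-chain DREAM sampler and CLM) illustrates how posterior samples of a model parameter are drawn from observations:

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical model: obs(t) = theta * exp(-0.3 t) + noise, with true theta = 0.7
theta_true, sigma = 0.7, 0.2
t = np.linspace(0, 10, 50)
obs = theta_true * np.exp(-0.3 * t) + rng.normal(0, sigma, t.size)

def log_post(theta):
    """Flat prior on [0, 2] and a Gaussian likelihood (log posterior up to a constant)."""
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    resid = obs - theta * np.exp(-0.3 * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

chain = np.empty(5000)
theta, lp = 1.0, log_post(1.0)
for i in range(chain.size):
    prop = theta + rng.normal(0, 0.1)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_mean = chain[1000:].mean()           # discard burn-in
```

Running an ensemble of model instances with parameters drawn from `chain` (as the study does with its 60-instance CLM ensemble) then propagates the parameter uncertainty into the predicted fluxes.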

  3. Numerical optimization of Ignition and Growth reactive flow modeling for PAX2A

    NASA Astrophysics Data System (ADS)

    Baker, E. L.; Schimel, B.; Grantham, W. J.

    1996-05-01

    Variable-metric nonlinear optimization has been successfully applied to the parameterization of unreacted and reacted-products thermodynamic equations of state and to reactive flow modeling of the HMX-based high explosive PAX2A. The NLQPEB nonlinear optimization program has recently been coupled to the LLNL-developed two-dimensional high-rate continuum modeling programs DYNA2D and CALE. The resulting program has the ability to optimize initial modeling parameters. This new optimization capability was used to optimally parameterize the Ignition and Growth reactive flow model against experimental manganin gauge records. The optimization varied the Ignition and Growth reaction rate model parameters in order to minimize the difference between the calculated and experimental pressure histories.

  4. Performance model for grid-connected photovoltaic inverters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyson, William Earl; Galbraith, Gary M.; King, David L.

    2007-09-01

    This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.

  5. Deciphering DNA replication dynamics in eukaryotic cell populations in relation with their averaged chromatin conformations

    NASA Astrophysics Data System (ADS)

    Goldar, A.; Arneodo, A.; Audit, B.; Argoul, F.; Rappailles, A.; Guilbaud, G.; Petryk, N.; Kahli, M.; Hyrien, O.

    2016-03-01

    We propose a non-local model of DNA replication that takes into account the observed uncertainty in the position and time of replication initiation in eukaryotic cell populations. By picturing replication initiation as a two-state system, considering all possible transition configurations, and taking into account the chromatin's fractal dimension, we derive an analytical expression for the rate of replication initiation. This model predicts, with no free parameters, the temporal profiles of the initiation rate, replication fork density, and fraction of replicated DNA, in quantitative agreement with corresponding experimental data from both S. cerevisiae and human cells, and provides a quantitative estimate of initiation site redundancy. This study shows that, to a large extent, the program that regulates the dynamics of eukaryotic DNA replication is a collective phenomenon that emerges from the stochastic nature of replication origin initiation.

  6. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
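The first analysis step described above (sensitivities and correlations at initial parameter values) can be sketched generically; the two-parameter model below is a hypothetical illustration, not MODFLOWP:

```python
import numpy as np

# Hypothetical two-parameter model h(t; k1, k2) = k1*t + k2*sqrt(t):
# sensitivities and parameter correlation from the finite-difference Jacobian.
t = np.linspace(1, 10, 12)          # "observation" locations

def model(k1, k2):
    return k1 * t + k2 * np.sqrt(t)

k = np.array([2.0, 5.0])            # initial parameter values
eps = 1e-6

# Jacobian of model outputs with respect to the parameters (sensitivities)
J = np.column_stack([
    (model(k[0] + eps, k[1]) - model(*k)) / eps,
    (model(k[0], k[1] + eps) - model(*k)) / eps,
])

# parameter covariance (up to the error variance) and correlation matrix
pcov = np.linalg.inv(J.T @ J)
d = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(d, d)
# |corr[0, 1]| near 1 signals poorly separable parameters; adding new observation
# types (here, the role played by advective-transport observations) reduces it
```

In this toy model the two basis responses t and √t are nearly parallel over the observation window, so the off-diagonal correlation is strongly negative, the same symptom the regression diagnostics in step (1) are designed to expose.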

  7. Correlates of individual, and age-related, differences in short-term learning.

    PubMed

    Zhang, Zhiyong; Davis, Hasker P; Salthouse, Timothy A; Tucker-Drob, Elliot M

    2007-07-01

    Latent growth models were applied to data on multitrial verbal and spatial learning tasks from two independent studies. Although significant individual differences in both initial level of performance and subsequent learning were found in both tasks, age differences were found only in mean initial level, and not in mean learning. In neither task was fluid or crystallized intelligence associated with learning. Although there were moderate correlations among the level parameters across the verbal and spatial tasks, the learning parameters were not significantly correlated with one another across task modalities. These results are inconsistent with the existence of a general (e.g., material-independent) learning ability.

  8. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  9. Ergodic model for the expansion of spherical nanoplasmas.

    PubMed

    Peano, F; Coppa, G; Peinetti, F; Mulas, R; Silva, L O

    2007-06-01

    Recently, the collisionless expansion of spherical nanoplasmas has been analyzed with a new ergodic model, clarifying the transition from hydrodynamic-like to Coulomb-explosion regimes, and providing accurate laws for the relevant features of the phenomenon. A complete derivation of the model is presented here. The important issue of self-consistent initial conditions is addressed by analyzing the initial charging transient due to the electron expansion, in the approximation of immobile ions. A comparison among different kinetic models for the expansion is presented, showing that the ergodic model provides a simplified description that retains the essential information on the electron distribution, in particular the energy spectrum. Results are presented for a wide range of initial conditions (determined by a single dimensionless parameter), in excellent agreement with calculations from the exact Vlasov-Poisson theory, thus providing a complete and detailed characterization of all the stages of the expansion.

  10. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.

  11. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  12. A new equilibrium torus solution and GRMHD initial conditions

    NASA Astrophysics Data System (ADS)

    Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh

    2013-11-01

    Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma-ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows) appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.

  13. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
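The described procedure (a log-linear fit for the initial nominal estimates, then iterated Taylor-series corrections) can be sketched for a one-exponential decay model y = a·exp(-b·t). This is a generic Gauss-Newton illustration under that assumed model, not Junkin's original program.

```python
import math

def fit_exponential(ts, ys, iters=20):
    """Fit y = a*exp(-b*t) by iterated linearized least squares.
    Initial nominal estimates come from a log-linear fit of ln(y) on t."""
    n = len(ts)
    ly = [math.log(y) for y in ys]
    st, sl = sum(ts), sum(ly)
    stt = sum(t * t for t in ts)
    stl = sum(t * l for t, l in zip(ts, ly))
    b = -(n * stl - st * sl) / (n * stt - st * st)   # slope of ln(y) is -b
    a = math.exp((sl + b * st) / n)                  # intercept gives ln(a)
    for _ in range(iters):
        # Residuals and Jacobian of the Taylor-linearized model.
        r = [y - a * math.exp(-b * t) for t, y in zip(ts, ys)]
        J = [(math.exp(-b * t), -a * t * math.exp(-b * t)) for t in ts]
        # Solve the 2x2 normal equations J^T J d = J^T r for the correction d.
        g11 = sum(j1 * j1 for j1, _ in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        c1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        c2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * c1 - g12 * c2) / det
        db = (g11 * c2 - g12 * c1) / det
        a, b = a + da, b + db
        if abs(da) + abs(db) < 1e-12:   # predetermined convergence criterion
            break
    return a, b

# Example: recover a = 2, b = 0.5 from noise-free decay data.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
a, b = fit_exponential(ts, ys)
```

On noisy data the log-linear starting values are inexact and the correction loop does the real work.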

  14. Identification of Spey engine dynamics in the augmentor wing jet STOL research aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Dehoff, R. L.; Reed, W. B.; Trankle, T. L.

    1977-01-01

    The development and validation of a Spey engine model is described. An analysis of the dynamical interactions involved in the propulsion unit is presented. The model was reduced to contain only significant effects, and was used, in conjunction with flight data obtained from an augmentor wing jet STOL research aircraft, to develop initial estimates of parameters in the system. The theoretical background employed in estimating the parameters is outlined. The software package developed for processing the flight data is described. Results are summarized.

  15. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    NASA Astrophysics Data System (ADS)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  16. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    PubMed

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model and the associated set of parameters allowing for the computation of microalgae population growth under intermittent illumination. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In the source experiments, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in these experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space. Both yielded the same results. In addition, swarm distribution analysis reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be used with confidence to link light intensity to population growth rate. Furthermore, the set is able to describe photodamage effects on population growth, thereby accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
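A minimal particle swarm optimizer illustrates the fitting machinery used here. The objective, bounds, and hyperparameters below are illustrative assumptions; Han's model itself is not reproduced, only the swarm mechanics.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: velocities pulled toward each
    particle's personal best and the global best. A sketch of the fitting
    procedure, not the authors' implementation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy objective standing in for the model-vs-data squared error.
best, err = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting, `f` would be the squared error between Han's-model growth predictions and the literature dataset.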

  17. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.

  18. Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics

    DTIC Science & Technology

    2016-09-15

    Algorithm GPS Global Positioning System HOUF Higher Order Unscented Filter IC initial conditions IMM Interacting Multiple Model IMU Inertial Measurement Unit ...sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor...parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a DOF, making the problem underdetermined.

  19. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    Development of a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g. ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) data model, providing links between geochemical and geophysical attributes, 2) process model, describing the spatial and temporal variability of geochemical properties in the subsurface system, and 3) parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatio-temporal distribution of ferrous iron, sulfate and sulfide, and their associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.
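The Markov chain Monte Carlo estimation step can be illustrated with a generic random-walk Metropolis sampler on a toy one-parameter posterior. This is a textbook sketch of the sampling machinery only, not the paper's three-level hierarchical model, and the data and step size are invented.

```python
import math
import random

def metropolis(logpost, x0, n=20000, step=0.5, seed=1):
    """Random-walk Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    out = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        out.append(x)
    return out

# Toy use: posterior of a mean under a unit-variance Gaussian likelihood
# with a flat prior (illustrative data, not from the paper).
data = [0.9, 1.2, 0.8, 1.1, 1.0]
logpost = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)
samples = metropolis(logpost, x0=0.0)
```

The posterior mean and spread of `samples` provide both the estimate and its uncertainty, the same information the paper extracts for the geochemical parameters.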

  20. Application of a nonlinear slug test model

    USGS Publications Warehouse

    McElwee, C.D.

    2001-01-01

    Knowledge of the hydraulic conductivity distribution is of utmost importance in understanding the dynamics of an aquifer and in planning the consequences of any action taken upon that aquifer. Slug tests have been used extensively to measure hydraulic conductivity in the last 50 years since Hvorslev's (1951) work. A general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the wellbore, and a Hvorslev model for the aquifer has been implemented in this work. The nonlinear model has three parameters: β, which is related primarily to radius changes in the water column; A, which is related to the nonlinear head losses; and K, the hydraulic conductivity. An additional parameter has been added representing the initial velocity of the water column at slug initiation and is incorporated into an analytical solution to generate the first time step before a sequential numerical solution generates the remainder of the time solution. Corrections are made to the model output for acceleration before it is compared to the experimental data. Sensitivity analysis and least squares fitting are used to estimate the aquifer parameters and produce some diagnostic results, which indicate the accuracy of the fit. Finally, an example of field data has been presented to illustrate the application of the model to data sets that exhibit nonlinear behavior. Multiple slug tests should be taken at a given location to test for nonlinear effects and to determine repeatability.

  1. Part-to-itself model inversion in process compensated resonance testing

    NASA Astrophysics Data System (ADS)

    Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Aldrin, John C.; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    Process Compensated Resonance Testing (PCRT) is a non-destructive evaluation (NDE) method involving the collection and analysis of a part's resonance spectrum to characterize its material or damage state. Prior work used the finite element method (FEM) to develop forward modeling and model inversion techniques. In many cases, the inversion problem can become confounded by multiple parameters having similar effects on a part's resonance frequencies. To reduce the influence of confounding parameters and isolate the change in a part (e.g., creep), a part-to-itself (PTI) approach can be taken. A PTI approach involves inverting only the change in resonance frequencies from the before and after states of a part. This approach reduces the possible inversion parameters to only those that change in response to in-service loads and damage mechanisms. To evaluate the effectiveness of using a PTI inversion approach, creep strain and material properties were estimated in virtual and real samples using FEM inversion. Virtual and real dog bone samples composed of nickel-based superalloy Mar-M-247 were examined. Virtual samples were modeled with typically observed variations in material properties and dimensions. Creep modeling was verified with the collected resonance spectra from an incrementally crept physical sample. All samples were inverted against a model space that allowed for change in the creep damage state and the material properties but was blind to initial part dimensions. Results quantified the capabilities of PTI inversion in evaluating creep strain and material properties, as well as its sensitivity to confounding initial dimensions.

  2. Numerical solution of a logistic growth model for a population with Allee effect considering fuzzy initial values and fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Amarti, Z.; Nurkholipah, N. S.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    Predicting future population numbers is an important factor in preparing good management for a population. This has been done by various known methods, one of which is to develop a mathematical model describing the growth of the population. The model usually takes the form of a differential equation or a system of differential equations, depending on the complexity of the underlying properties of the population. The most widely used growth models are currently those having a sigmoid solution in time, including the Verhulst logistic equation and the Gompertz equation. In this paper we consider the Allee effect in Verhulst's logistic population model. The Allee effect is a phenomenon in biology showing a high correlation between population size or density and the mean individual fitness of the population. The method used to derive the solution is the Runge-Kutta numerical scheme, since it is generally regarded as a good numerical scheme that is relatively easy to implement. Further exploration is done via a fuzzy theoretical approach to accommodate the imprecision of the initial values and parameters in the model.
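The crisp (non-fuzzy) core of the scheme can be sketched as a classical fourth-order Runge-Kutta integration of a logistic equation with a strong Allee effect; the parameter values r, K, A and the initial condition below are illustrative assumptions, not the paper's.

```python
def rk4(f, y0, t0, t1, dt):
    """Classical fourth-order Runge-Kutta integrator for y' = f(t, y)."""
    t, y = t0, y0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Logistic growth with a strong Allee effect:
# dN/dt = r*N*(1 - N/K)*(N/A - 1): extinction below A, saturation at K.
r, K, A = 0.5, 100.0, 10.0
allee = lambda t, N: r * N * (1 - N / K) * (N / A - 1)
N_final = rk4(allee, 20.0, 0.0, 50.0, 0.1)
```

Starting above the Allee threshold A, the population saturates at the carrying capacity K; starting below A it would decay to extinction. The fuzzy extension would propagate intervals (alpha-cuts) of initial values and parameters through this same integrator.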

  3. Hysteresis compensation of the Prandtl-Ishlinskii model for piezoelectric actuators using modified particle swarm optimization with chaotic map.

    PubMed

    Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua

    2017-07-01

    Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and could cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, in which the parameters are formulated based on the original model with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initial method of adaptive inertia weight based on a chaotic map is proposed. To compare and prove that the swarm's convergence occurs before stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched using self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
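The chaotic-map-based adaptive inertia weight idea can be sketched with the logistic map; the weight range, map form, and seed value below are illustrative assumptions, not the authors' settings.

```python
def chaotic_weights(n, w_min=0.4, w_max=0.9, z0=0.63):
    """Generate inertia weights from the logistic chaotic map z <- 4z(1-z),
    scaled into [w_min, w_max]. The ergodic, non-repeating orbit is what
    strengthens swarm diversity compared with a fixed weight."""
    z, out = z0, []
    for _ in range(n):
        z = 4.0 * z * (1.0 - z)
        out.append(w_min + (w_max - w_min) * z)
    return out

ws = chaotic_weights(200)
```

Each PSO iteration would draw its inertia weight from this sequence instead of using a constant, keeping velocity updates varied without extra random-number draws.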

  4. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, M.; Manchester, W. B.; Van der Holst, B.

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO /LASCO, and STEREO /COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  5. Biodrying of sewage sludge: kinetics of volatile solids degradation under different initial moisture contents and air-flow rates.

    PubMed

    Villegas, Manuel; Huiliñir, Cesar

    2014-12-01

    This study focuses on the kinetics of the biodegradation of volatile solids (VS) of sewage sludge for biodrying under different initial moisture contents (Mc) and air-flow rates (AFR). For the study, a 3² factorial design, whose factors were AFR (1, 2 or 3 L/min kg TS) and initial Mc (59%, 68% and 78% w.b.), was used. Using seven kinetic models and a nonlinear regression method, kinetic parameters were estimated and the models were analyzed with two statistical indicators. An initial Mc of around 68% increases the matrix temperature and VS consumption, with higher moisture removal at lower initial Mc values. Lower AFRs gave higher matrix temperatures and VS consumption, while higher AFRs increased water removal. The kinetic models proposed successfully simulate VS biodegradation, with root mean square error (RMSE) between 0.007929 and 0.02744, and they can be used as a tool for satisfactory prediction of VS in biodrying. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Prognostics of lithium-ion batteries based on Dempster-Shafer theory and the Bayesian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    He, Wei; Williard, Nicholas; Osterman, Michael; Pecht, Michael

    A new method for state of health (SOH) and remaining useful life (RUL) estimations for lithium-ion batteries using Dempster-Shafer theory (DST) and the Bayesian Monte Carlo (BMC) method is proposed. In this work, an empirical model based on the physical degradation behavior of lithium-ion batteries is developed. Model parameters are initialized by combining sets of training data based on DST. BMC is then used to update the model parameters and predict the RUL based on available data through battery capacity monitoring. As more data become available, the accuracy of the model in predicting RUL improves. Two case studies demonstrating this approach are presented.
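One Bayesian Monte Carlo update of a model parameter can be sketched with importance weighting and resampling of particles. The one-parameter capacity-fade model C(k) = exp(-b·k) used here is an illustrative assumption, not the paper's empirical degradation model, and all numbers are invented.

```python
import math
import random

def bmc_update(particles, capacity_obs, cycle, sigma=0.01, seed=2):
    """One Bayesian Monte Carlo step: reweight fade-rate particles b by the
    Gaussian likelihood of a new capacity measurement under the assumed
    model C(k) = exp(-b*k), then resample in proportion to the weights."""
    w = [math.exp(-0.5 * ((capacity_obs - math.exp(-b * cycle)) / sigma) ** 2)
         for b in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    rng = random.Random(seed)
    return rng.choices(particles, weights=w, k=len(particles))

# Prior particles for the fade rate b, updated with one capacity
# observation taken at cycle 100 (true rate 0.005 in this toy example).
rng = random.Random(3)
prior = [rng.uniform(0.001, 0.01) for _ in range(500)]
posterior = bmc_update(prior, capacity_obs=math.exp(-0.5), cycle=100)
```

As more capacity measurements arrive, repeating this update concentrates the particles, which is the mechanism by which the RUL prediction sharpens over time.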

  7. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-positive patients.

    PubMed

    Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J

    2016-04-30

    Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
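Sample paths of the fractional Brownian motion component can be generated from the Cholesky factor of its covariance matrix. This is the textbook construction under the standard fBm covariance Cov(B_s, B_t) = 0.5(s^{2H} + t^{2H} - |t-s|^{2H}), not the authors' estimation procedure; H and the time grid below are illustrative.

```python
import math

def fbm_cholesky(H, times):
    """Cholesky factor L of the fractional Brownian motion covariance at the
    given observation times; multiplying L by an i.i.d. standard-normal
    vector yields one fBm sample path."""
    n = len(times)
    cov = [[0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))
            for t in times] for s in times]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = cov[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

H = 0.7                       # H > 0.5 gives positively correlated increments
times = [0.5, 1.0, 1.5, 2.0]  # illustrative CD4 measurement times (years)
L = fbm_cholesky(H, times)
```

Simulating patient cohorts, as the paper does to compare treatment-initiation strategies, amounts to drawing many such paths and adding the fixed and random effects of the mixed model.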

  8. Modeling multilayer x-ray reflectivity using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.

    2000-06-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum value. This is a difficult task when the topology of the space of the variables is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (selection, crossover, mutation, etc.) on the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
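The generational loop described (population of individuals, selection, crossover, mutation) can be sketched generically. The operators and hyperparameters below are illustrative assumptions, and no reflectivity model is included; in the paper's use, `f` would be the misfit between simulated and measured reflectivity.

```python
import random

def genetic_minimize(f, bounds, pop=40, gens=100, pmut=0.2, seed=0):
    """Bare-bones real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, and elitism. A sketch of the idea, not
    the authors' code."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    def clip(x, d):
        return min(max(x, bounds[d][0]), bounds[d][1])
    for _ in range(gens):
        scored = sorted(P, key=f)
        children = scored[:2]            # elitism: keep the two best intact
        while len(children) < pop:
            # Two parents, each the winner of a 3-way tournament.
            a, b = (min(rng.sample(scored, 3), key=f) for _ in range(2))
            alpha = rng.random()
            child = [clip(alpha * ai + (1 - alpha) * bi, d)
                     for d, (ai, bi) in enumerate(zip(a, b))]
            if rng.random() < pmut:      # Gaussian mutation on one gene
                d = rng.randrange(dim)
                span = bounds[d][1] - bounds[d][0]
                child[d] = clip(child[d] + rng.gauss(0.0, 0.1 * span), d)
            children.append(child)
        P = children
    return min(P, key=f)

best = genetic_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```

Because the search is global, no good starting guess is needed, which is exactly the advantage the abstract claims over local non-linear fitting.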

  9. Theory of structure formation in snowfields motivated by penitentes, suncups, and dirt cones.

    PubMed

    Betterton, M D

    2001-05-01

    Penitentes and suncups are structures formed as snow melts, typically high in the mountains. When the snow is dirty, dirt cones and other structures can form instead. Building on previous field observations and experiments, this paper presents a theory of ablation morphologies, and the role of surface dirt in determining the structures formed. The glaciological literature indicates that sunlight, heating from air, and dirt all play a role in the formation of structure on an ablating snow surface. The present paper formulates a minimal model for the formation of ablation morphologies as a function of measurable parameters and considers the linear stability of this model. The dependence of ablation morphologies on weather conditions and initial dirt thickness is studied, focusing on the initial growth of perturbations away from a flat surface. We derive a single-parameter expression for the melting rate as a function of dirt thickness, which agrees well with a set of measurements by Driedger. An interesting result is the prediction of a dirt-induced traveling instability for a range of parameters.

  10. Parameter identification of process simulation models as a means for knowledge acquisition and technology transfer

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Ifanti, Konstantina

    2012-12-01

    Process simulation models are usually empirical, therefore there is an inherent difficulty in serving as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning to facilitate verification of the dependence on the production conditions; in such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanistic models may prove valid, in which case parameter identification is required to find the independent/explanatory variables and parameters on which each model parameter depends. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework under the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated by two representative case examples on wastewater treatment.

  11. The absorption and first-pass metabolism of [14C]-1,3-dinitrobenzene in the isolated vascularly perfused rat small intestine.

    PubMed

    Adams, P C; Rickert, D E

    1996-11-01

    We tested the hypothesis that the small intestine is capable of the first-pass, reductive metabolism of xenobiotics. A simplified version of the isolated vascularly perfused rat small intestine was developed to test this hypothesis with 1,3-dinitrobenzene (1,3-DNB) as a model xenobiotic. Both 3-nitroaniline (3-NA) and 3-nitroacetanilide (3-NAA) were formed and absorbed following intralumenal doses of 1,3-DNB (1.8 or 4.2 mumol) to isolated vascularly perfused rat small intestine. Dose, fasting, or antibiotic pretreatment had no effect on the absorption and metabolism of 1,3-DNB in this model system. The failure of antibiotic pretreatment to alter the metabolism of 1,3-DNB indicated that 1,3-DNB metabolism was mammalian rather than microfloral in origin. All data from experiments initiated with lumenal 1,3-DNB were fit to a pharmacokinetic model (model A). ANOVA revealed that dose, fasting, or antibiotic pretreatment had no statistically significant effect on the model-dependent parameters. 3-NA (1.5 mumol) was administered to the lumen of isolated vascularly perfused rat small intestine to evaluate model A predictions for the absorption and metabolism of this metabolite. All data from experiments initiated with 3-NA were fit to a pharmacokinetic model (model B). Comparison of corresponding model-dependent pharmacokinetic parameters (i.e. those parameters which describe the same processes in models A and B) revealed quantitative differences. Evidence for significant quantitative differences in the pharmacokinetics or metabolism of formed versus preformed 3-NA in rat small intestine may require better definition of the rate constants used to describe tissue and lumenal processes or identification and incorporation of the remaining unidentified metabolites into the models.

  12. Evaluation of solar Type II radio burst estimates of initial solar wind shock speed using a kinematic model of the solar wind on the April 2001 solar event swarm

    NASA Astrophysics Data System (ADS)

    Sun, W.; Dryer, M.; Fry, C. D.; Deehr, C. S.; Smith, Z.; Akasofu, S.-I.; Kartalev, M. D.; Grigorov, K. G.

    2002-04-01

    We compare simulation results of real-time shock arrival time prediction with observations by the ACE satellite for a series of solar flares/coronal mass ejections which took place between 28 March and 18 April, 2001, on the basis of the Hakamada-Akasofu-Fry, version 2 (HAFv.2) model. It is found, via an ex post facto calculation, that the initial speed of the shock waves used as an input parameter of the modeling is crucial for agreement between the observations and the simulation. The initial speed determined by metric Type II radio burst observations must be substantially reduced (by 30% on average) for most high-speed shock waves.

  13. INM Integrated Noise Model Version 2. Programmer’s Guide

    DTIC Science & Technology

    1979-09-01

    cost, turnaround time, and system-dependent limitations. 3.2 CONVERSION PROBLEMS. [Table: Item No. / Description / Category] 1 BLOCK DATA Initialization, IBM Restricted; 2 Boolean Operations, Differences; 3 Call Statement Parameters, Extensions; 4 Data Initialization, IBM Restricted; 5 ENTRY, Differences; 6 EQUIVALENCE, Machine Dependent; 7 Format: A, CDC Extension; 8 Hollerith Strings, IBM Restricted; 9 Hollerith Variables, IBM Restricted; 10 Identifier Names, CDC Extension

  14. Kinetics, isothermal and thermodynamics studies of electrocoagulation removal of basic dye rhodamine B from aqueous solution using steel electrodes

    NASA Astrophysics Data System (ADS)

    Adeogun, Abideen Idowu; Balakrishnan, Ramesh Babu

    2017-07-01

    Electrocoagulation was used for the removal of the basic dye rhodamine B from aqueous solution, and the process was carried out in a batch electrochemical cell with steel electrodes in monopolar connection. The effects of some important parameters, such as current density, pH, temperature and initial dye concentration, on the process were investigated. Equilibrium was attained after 10 min at 30 °C. Pseudo-first-order, pseudo-second-order, Elovich and Avrami kinetic models were used to test the experimental data in order to elucidate the kinetics of the adsorption process; the pseudo-first-order and Avrami models best fitted the data. Equilibrium data were analysed using six isotherm models: Langmuir, Freundlich, Redlich-Peterson, Temkin, Dubinin-Radushkevich and Sips; the data fitted the Sips isotherm model well. The study showed that the process depends on current density, temperature, pH and initial dye concentration. The calculated thermodynamic parameters (ΔG°, ΔH° and ΔS°) indicated that the process is spontaneous and endothermic in nature.
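    The pseudo-first-order (Lagergren) model mentioned above has the closed form q(t) = qe·(1 − exp(−k1·t)). As a minimal sketch of how such a kinetic fit works, the snippet below recovers (qe, k1) by a brute-force least-squares grid search; the uptake data are invented for illustration, since the abstract does not report the raw measurements.

    ```python
    import numpy as np

    # Hypothetical uptake data q(t) in mg/g at times t in min (illustrative
    # values only; not the paper's measurements).
    t = np.array([1, 2, 4, 6, 8, 10, 15], dtype=float)
    qt = np.array([0.90, 1.40, 1.82, 1.95, 1.98, 1.99, 2.00])

    # Pseudo-first-order (Lagergren) model: q(t) = qe * (1 - exp(-k1 * t)).
    # Fit (qe, k1) by a brute-force least-squares grid search (numpy only).
    qe_grid = np.linspace(1.5, 2.5, 101)
    k1_grid = np.linspace(0.1, 1.5, 141)
    QE, K1 = np.meshgrid(qe_grid, k1_grid, indexing="ij")
    pred = QE[..., None] * (1.0 - np.exp(-K1[..., None] * t))  # (101, 141, 7)
    sse = ((pred - qt) ** 2).sum(axis=-1)                      # sum of squares
    i, j = np.unravel_index(sse.argmin(), sse.shape)
    qe, k1 = qe_grid[i], k1_grid[j]
    print(f"qe = {qe:.2f} mg/g, k1 = {k1:.2f} 1/min")
    ```

    In practice a nonlinear least-squares routine would replace the grid search; the grid version is used here only because it is self-contained and deterministic.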

  15. Biosorption of alpacide blue from aqueous solution by lignocellulosic biomass: Luffa cylindrica fibers.

    PubMed

    Kesraoui, Aida; Moussa, Asma; Ali, Ghada Ben; Seffen, Mongi

    2016-08-01

    The aim of the present work is to develop an effective and inexpensive pollutant-removal technology using lignocellulosic fibers, Luffa cylindrica, for the biosorption of an anionic dye, alpacide blue. The influence of experimental parameters such as pH, temperature, initial concentration of the polluted solution, and mass of the sorbent L. cylindrica on the biosorption of alpacide blue by L. cylindrica fibers has been investigated. Optimal parameters for the maximum quantity of biosorbed dye were achieved after 2 h of treatment in a batch system using an initial dye concentration of 20 mg/L, a mass of 1 g of L. cylindrica fibers, and pH 2. Under these conditions, the quantity of dye retained is 2 mg/g and the retention rate is 78%. Finally, mathematical modeling of the kinetics and isotherms was carried out; the pseudo-second-order model is the most appropriate to describe this biosorption phenomenon. Concerning the biosorption isotherms, the Freundlich model is the most appropriate for the biosorption of alpacide blue dye by L. cylindrica fibers.

  16. Optimization of Equation of State and Burn Model Parameters for Explosives

    NASA Astrophysics Data System (ADS)

    Bergh, Magnus; Wedberg, Rasmus; Lundgren, Jonas

    2017-06-01

    A reactive burn model implemented in a multi-dimensional hydrocode can be a powerful tool for predicting non-ideal effects as well as initiation phenomena in explosives. Calibration against experiment is, however, critical and non-trivial. Here, a procedure is presented for calibrating the Ignition and Growth Model utilizing hydrocode simulation in conjunction with the optimization program LS-OPT. The model is applied to the explosive PBXN-109. First, a cylinder expansion test is presented together with a new automatic routine for product equation of state calibration. Secondly, rate stick tests and instrumented gap tests are presented. Data from these experiments are used to calibrate burn model parameters. Finally, we discuss the applicability and development of this optimization routine.

  17. Folding and stability of helical bundle proteins from coarse-grained models.

    PubMed

    Kapoor, Abhijeet; Travesset, Alex

    2013-07-01

    We develop a coarse-grained model where solvent is considered implicitly, electrostatics are included as short-range interactions, and side-chains are coarse-grained to a single bead. The model depends on three main parameters: hydrophobic, electrostatic, and side-chain hydrogen bond strength. The parameters are determined by considering three levels of approximation and characterizing the folding of three selected proteins (training set). Nine additional proteins (containing up to 126 residues) as well as mutated versions (test set) are folded with the given parameters. In all folding simulations, the initial state is a random coil configuration. Besides the native state, some proteins fold into an additional state differing in topology (the structure of the helical bundle). We discuss the stability of the native states, and compare the dynamics of our model to all-atom molecular dynamics simulations, as well as some general properties of the interactions governing folding dynamics. Copyright © 2013 Wiley Periodicals, Inc.

  18. The minimal SUSY B - L model: from the unification scale to the LHC

    DOE PAGES

    Ovrut, Burt A.; Purves, Austin; Spinner, Sogee

    2015-06-26

    Here, this paper introduces a random statistical scan over the high-energy initial parameter space of the minimal SUSY B - L model — denoted as the B - L MSSM. Each initial set of points is renormalization group evolved to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of initial parameters that satisfies all such constraints is presented, shown to be robust and to contain a wide range of different configurations of soft supersymmetry breaking masses. The low-energy predictions of each such “valid” point — such as the sparticle mass spectrum and, in particular, the LSP — are computed and then statistically analyzed over the full subspace of valid points. Finally, the amount of fine-tuning required is quantified and compared to that of the MSSM, computed using an identical random scan. The B - L MSSM is shown to generically require less fine-tuning.

  19. Particle acceleration at a reconnecting magnetic separator

    NASA Astrophysics Data System (ADS)

    Threlfall, J.; Neukirch, T.; Parnell, C. E.; Eradat Oskoui, S.

    2015-02-01

    Context. While the exact acceleration mechanism of energetic particles during solar flares is (as yet) unknown, magnetic reconnection plays a key role both in the release of stored magnetic energy of the solar corona and in the magnetic restructuring during a flare. Recent work has shown that special field lines, called separators, are common sites of reconnection in 3D numerical experiments. To date, 3D separator reconnection sites have received little attention as particle accelerators. Aims: We investigate the effectiveness of separator reconnection as a particle acceleration mechanism for electrons and protons. Methods: We study the particle acceleration using a relativistic guiding-centre particle code in a time-dependent kinematic model of magnetic reconnection at a separator. Results: The effects of initial position, pitch angle, and initial kinetic energy upon particle behaviour are examined in detail, both for specific (single) particle examples and for large distributions of initial conditions. The separator reconnection model contains several free parameters, and we study the effect of changing these parameters upon particle acceleration, in particular with regard to final particle energy ranges that agree with observed energy spectra.

  20. FITPOP, a heuristic simulation model of population dynamics and genetics with special reference to fisheries

    USGS Publications Warehouse

    McKenna, James E.

    2000-01-01

    Although perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of the effects of relative differences in fitness and mortality, as well as the finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It also was shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
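    The core idea — partitioning the intrinsic rate of increase of a logistic-competition model into fitness, mortality, and finite-rate components — can be sketched in a few lines. The decomposition r_i = b_i·f_i − m_i used below, and all parameter values, are illustrative assumptions, not FITPOP's actual equations.

    ```python
    import numpy as np

    # Two-stock logistic-competition sketch with a shared carrying capacity.
    # Assumed decomposition (not FITPOP's published form):
    #   r_i = b_i * f_i - m_i  (finite rate x relative fitness - mortality)
    K = 1000.0                    # shared carrying capacity
    b = np.array([0.50, 0.50])    # finite rates of increase
    f = np.array([1.00, 0.90])    # relative fitness (stock 2 slightly inferior)
    m = np.array([0.10, 0.10])    # mortality rates
    r = b * f - m

    N = np.array([250.0, 250.0])  # equal initial sizes: native vs stocked fish
    for _ in range(200):          # yearly time steps
        N = N + r * N * (1.0 - N.sum() / K)
        N = np.clip(N, 0.0, None)

    print(f"after 200 yr: native = {N[0]:.0f}, stocked = {N[1]:.0f}")
    ```

    Starting from equal sizes, the fitter stock ends up larger while the total approaches the carrying capacity, illustrating how a small fitness difference shifts the composition of a mixed population.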

  1. Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.

    PubMed

    Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders

    2013-03-12

    In this paper, a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two-step process where, initially, polymer parameters are obtained from a previously published pure polymer dissolution model. These results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of an HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the sparse amount of usable pure polymer dissolution data. In addition to the case study, a sensitivity analysis of model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Optimization of thiamethoxam adsorption parameters using multi-walled carbon nanotubes by means of fractional factorial design.

    PubMed

    Panić, Sanja; Rakić, Dušan; Guzsvány, Valéria; Kiss, Erne; Boskovic, Goran; Kónya, Zoltán; Kukovecz, Ákos

    2015-12-01

    The aim of this work was to evaluate significant factors affecting the thiamethoxam adsorption efficiency using oxidized multi-walled carbon nanotubes (MWCNTs) as adsorbents. Five factors (initial solution concentration of thiamethoxam in water, temperature, solution pH, MWCNT weight and contact time) were investigated using a 2^(5-1) resolution-V fractional factorial design. The obtained linear model was statistically tested using analysis of variance (ANOVA), and analysis of residuals was used to investigate the model validity. It was observed that the factors and their second-order interactions affecting thiamethoxam removal can be divided into three groups: very important, moderately important and insignificant ones. The initial solution concentration was found to be the parameter with the greatest influence on thiamethoxam adsorption from water. Optimization of the factor levels was carried out by minimizing those parameters which are usually critical in real life: the temperature (energy), contact time (money) and weight of MWCNTs (potential health hazard), in order to maximize the adsorbed amount of the pollutant. The results of the maximal adsorbed thiamethoxam amount in both real and optimized experiments indicate that, among the minimized parameters, the adsorption time is the one that makes the largest difference. The results of this study indicate that fractional factorial design is a very useful tool for screening a large number of parameters and reducing the number of adsorption experiments. Copyright © 2015 Elsevier Ltd. All rights reserved.
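    A 2^(5-1) half fraction runs 16 experiments instead of 32 while keeping main effects unaliased with two-factor interactions (resolution V). A minimal sketch of generating such a design is below; the factor names follow the abstract, but the generator E = ABCD and the coded −1/+1 levels are generic textbook choices, not necessarily the paper's.

    ```python
    from itertools import product

    # Factors from the abstract; coded levels are generic -1/+1 placeholders.
    factors = ["concentration", "temperature", "pH", "MWCNT_weight", "contact_time"]

    # Half fraction: enumerate 4 factors fully, set the 5th by E = ABCD
    # (defining relation I = ABCDE -> resolution V).
    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        e = a * b * c * d
        runs.append((a, b, c, d, e))

    print(f"{len(runs)} runs instead of {2**5} for the full factorial")
    for run in runs[:4]:
        print(dict(zip(factors, run)))
    ```

    Every run satisfies the defining relation (the product of all five coded levels is +1), which is exactly what makes the aliasing pattern of the design predictable.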

  3. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.

  4. Nonlinear modelling of cancer: bridging the gap between cells and tumours

    PubMed Central

    Lowengrub, J S; Frieboes, H B; Jin, F; Chuang, Y-L; Li, X; Macklin, P; Wise, S M; Cristini, V

    2010-01-01

    Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. Disease progression is complex, including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. 
Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis. PMID:20808719

  5. Nonlinear modelling of cancer: bridging the gap between cells and tumours

    NASA Astrophysics Data System (ADS)

    Lowengrub, J. S.; Frieboes, H. B.; Jin, F.; Chuang, Y.-L.; Li, X.; Macklin, P.; Wise, S. M.; Cristini, V.

    2010-01-01

    Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. Disease progression is complex, including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. 
Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis.

  6. An opinion-driven behavioral dynamics model for addictive behaviors

    DOE PAGES

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; ...

    2015-04-08

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual’s behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Additionally, individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. Furthermore, this has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
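    The key mechanism described above — a continuous opinion evolving on a directed network, mapped into a discrete, history-dependent behavior — can be sketched as follows. The network topology, thresholds, and update rule are illustrative assumptions, not the published model; the hysteresis (a behavior, once adopted, persists until opinion drops far below the adoption threshold) stands in for the "maintaining" physiological conditions.

    ```python
    import random

    # Agents with a continuous opinion in [-1, 1] on a directed ring-plus-chord
    # network (an assumed topology), relaxing toward their neighbors' opinions.
    random.seed(1)
    n = 30
    opinion = [random.uniform(-1, 1) for _ in range(n)]
    behavior = [o > 0.3 for o in opinion]            # True = behavior adopted
    neighbors = [[(i + 1) % n, (i + 7) % n] for i in range(n)]  # directed edges

    START, STOP = 0.3, -0.5   # hysteresis: quitting needs a much lower opinion
    for _ in range(100):
        new = opinion[:]
        for i in range(n):
            avg = sum(opinion[j] for j in neighbors[i]) / len(neighbors[i])
            new[i] = 0.8 * opinion[i] + 0.2 * avg    # relax toward neighbors
        opinion = new
        for i in range(n):
            if not behavior[i] and opinion[i] > START:
                behavior[i] = True                   # opinion initiates behavior
            elif behavior[i] and opinion[i] < STOP:
                behavior[i] = False                  # maintained until opinion drops far

    print(f"{sum(behavior)} of {n} agents exhibit the behavior")
    ```

    Because the behavioral map is history-dependent, the final behavior counts depend on the opinion trajectory, not just its endpoint, which is what makes the discrete clustering described in the abstract possible.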

  7. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    The events of high ozone concentrations and meteorological conditions covering the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions lower than 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct parameters that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.

  8. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well-posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. Inverse problems are often ill-posed, so a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation of terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. 
    While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
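    The well-posedness conditions above can be made concrete with a small numerical illustration: for an ill-conditioned linear model h(x) = Hx, the naive solution amplifies observation noise, while a Tikhonov penalty (the "prior as penalty term" regularization the abstract contrasts with its constraint-based approach) stabilizes it. The matrix, noise level, and penalty weight below are synthetic choices for illustration only.

    ```python
    import numpy as np

    # Build a synthetic ill-conditioned linear "model" H with singular values
    # spanning 10 decades, mimicking fast vs slow processes.
    rng = np.random.default_rng(0)
    n = 20
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0, -10, n)
    H = U @ np.diag(s) @ U.T

    x_true = rng.standard_normal(n)
    y = H @ x_true + 1e-6 * rng.standard_normal(n)   # small observation noise

    # Naive inversion: violates the stability condition, noise is amplified
    # by up to 1/s_min = 1e10.
    x_naive = np.linalg.solve(H, y)

    # Tikhonov regularization: minimize ||Hx - y||^2 + lam * ||x||^2.
    lam = 1e-4
    x_reg = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

    print("naive error:      ", np.linalg.norm(x_naive - x_true))
    print("regularized error:", np.linalg.norm(x_reg - x_true))
    ```

    The regularized error stays bounded while the naive error is dominated by amplified noise, mirroring the abstract's point that the poorly constrained (slow-process) directions are exactly where regularization matters.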

  9. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  10. Ignition and Growth Reactive Flow Modeling of Shock Initiation of PBX 9502 at -55 °C and -196 °C

    NASA Astrophysics Data System (ADS)

    Chidester, Steven; Tarver, Craig

    2015-06-01

    Recently, Gustavsen et al. and Hollowell et al. published two-stage gas gun embedded particle velocity gauge experiments on PBX 9502 (95% TATB, 5% Kel-F800) cooled to -55°C and -196°C, respectively. At -196°C, PBX 9502 was shown to be much less shock sensitive than at -55°C, but it did transition to detonation. Previous Ignition and Growth model parameters for shock initiation of PBX 9502 at -55°C are modified based on the new data, and new parameters for -196°C PBX 9502 are created to accurately simulate the measured particle velocity histories and run distances to detonation versus shock pressures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  11. Diffusion model of penetration of a chloride-containing environment in the volume of a constructive element

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, I. I.; Snezhkina, O. V.; Ovchinnikov, I. G.

    2018-06-01

    A generalized model of the diffusional penetration of a chloride-containing medium into the volume of a compressed reinforced concrete element is considered. Equations for the deformation of the reinforced concrete structure are presented, taking into account the degradation of the concrete and corrosion of the reinforcement. At the initial stage, a stress analysis of the structural element's cross section is carried out, with the mechanical properties of the material determined by the initial concentration field of the aggressive medium. Then, at each discrete moment of time, the following are determined: the distribution of the chloride concentration field corresponding to the parameters of the stress-strain state; the field of corrosion damage of the reinforcing elements; and the stress analysis of the cross section with parameters corresponding to the concentration field and the corrosion damage field.

  12. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  13. Micromechanical investigation of ductile failure in Al 5083-H116 via 3D unit cell modeling

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Warner, D. H.

    2015-01-01

    Ductile failure is governed by the evolution of micro-voids within a material. The micro-voids, which commonly initiate at second phase particles within metal alloys, grow and interact with each other until failure occurs. The evolution of the micro-voids, and therefore ductile failure, depends on many parameters (e.g., stress state, temperature, strain rate, void and particle volume fraction, etc.). In this study, the stress state dependence of the ductile failure of Al 5083-H116 is investigated by means of 3-D Finite Element (FE) periodic cell models. The cell models require only two pieces of information as inputs: (1) the initial particle volume fraction of the alloy and (2) the constitutive behavior of the matrix material. Based on this information, cell models are subjected to a given stress state, defined by the stress triaxiality and the Lode parameter. For each stress state, the cells are loaded in many loading orientations until failure. Material failure is assumed to occur in the weakest orientation, and so the orientation in which failure occurs first is considered the critical orientation. The result is a description of material failure that is derived from basic principles and requires no fitting parameters. Subsequently, the results of the simulations are used to construct a homogenized material model, which is used in a component-scale FE model. The component-scale FE model is compared to experiments and is shown to overpredict ductility. By excluding smaller nucleation events and load-path non-proportionality, it is concluded that accuracy could be gained by including more information about the true microstructure in the model, emphasizing that its incorporation into micromechanical models is critical to developing quantitatively accurate physics-based ductile failure models.

  14. Reducing streamflow forecast uncertainty: Application and qualitative assessment of the upper klamath river Basin, Oregon

    USGS Publications Warehouse

    Hay, L.E.; McCabe, G.J.; Clark, M.P.; Risley, J.C.

    2009-01-01

    The accuracy of streamflow forecasts depends on the uncertainty associated with future weather and the accuracy of the hydrologic model that is used to produce the forecasts. We present a method for streamflow forecasting where hydrologic model parameters are selected based on the climate state. Parameter sets for a hydrologic model are conditioned on an atmospheric pressure index defined using mean November through February (NDJF) 700-hPa geopotential heights over northwestern North America [Pressure Index from Geopotential heights (PIG)]. The hydrologic model is applied in the Sprague River basin (SRB), a snowmelt-dominated basin located in the Upper Klamath basin in Oregon. In the SRB, the majority of streamflow occurs during March through May (MAM). Water years (WYs) 1980-2004 were divided into three groups based on their respective PIG values (high, medium, and low PIG). Low (high) PIG years tend to have higher (lower) than average MAM streamflow. Four parameter sets were calibrated for the SRB, each using a different set of WYs. The initial set used WYs 1995-2004 and the remaining three used WYs defined as high-, medium-, and low-PIG years. Two sets of March, April, and May streamflow volume forecasts were made using Ensemble Streamflow Prediction (ESP). The first set of ESP simulations used the initial parameter set. Because the PIG is defined using NDJF pressure heights, forecasts starting in March can be made using the PIG parameter set that corresponds with the year being forecasted. The second set of ESP simulations used the parameter set associated with the given PIG year. Comparison of the ESP sets indicates that more accuracy and less variability in volume forecasts may be possible when the ESP is conditioned using the PIG. This is especially true during the high-PIG years (low-flow years). © 2009 American Water Resources Association.
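    The climate-conditioned parameter selection described above can be sketched in a few lines: water years are grouped by their PIG value and the calibrated parameter set matching the forecast year's group is retrieved. The tercile split and the dictionary layout below are illustrative assumptions; the paper does not state its exact group cutoffs.

```python
def classify_pig_years(pig_by_year, n_groups=3):
    """Split water years into low/medium/high groups by their PIG value.

    An equal-count (tercile) split is a hypothetical choice; the paper
    divides WYs 1980-2004 into three groups but does not give the cutoffs.
    """
    years = sorted(pig_by_year, key=pig_by_year.get)
    k = len(years) // n_groups
    return {
        "low": set(years[:k]),
        "medium": set(years[k:2 * k]),
        "high": set(years[2 * k:]),
    }

def select_parameter_set(year, groups, parameter_sets):
    """Pick the hydrologic-model parameter set matching the year's PIG group."""
    for label, members in groups.items():
        if year in members:
            return parameter_sets[label]
    raise KeyError(year)
```

    Because the PIG is available by the end of February, this lookup can be done before issuing a March forecast, which is what makes the conditioned ESP runs possible.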

  15. Spatial Moran models, II: cancer initiation in spatially structured tissue

    PubMed Central

    Foo, J; Leder, K

    2016-01-01

    We study the accumulation and spread of advantageous mutations in a spatial stochastic model of cancer initiation on a lattice. The parameters of this general model can be tuned to study a variety of cancer types and genetic progression pathways. This investigation contributes to an understanding of how the selective advantage of cancer cells, together with the rates of mutations driving cancer, impacts the process and timing of carcinogenesis. These results can be used to give insights into tumor heterogeneity and the “cancer field effect,” the observation that a malignancy is often surrounded by cells that have undergone premalignant transformation. PMID:26126947

  16. Modeling plasma heating by ns laser pulse

    NASA Astrophysics Data System (ADS)

    Colonna, Gianpiero; Laricchiuta, Annarita; Pietanza, Lucia Daniela

    2018-03-01

    The transition to breakdown of a weakly ionized gas, considering inverse bremsstrahlung, has been investigated using a state-to-state self-consistent model for gas discharges, mimicking a ns laser pulse. The paper focuses on the role of the initial ionization in plasma formation. The results suggest that some anomalous behaviors, such as signal enhancement by metal nanoparticles, can be attributed to this feature. This approach has been applied to hydrogen gas, regarded as a simplified model for LIBS plasmas because a full kinetic scheme is available, including the collisional-radiative model for atoms and molecules. The model allows the influence of different parameters, such as the initial electron molar fraction, on the ionization growth to be investigated.

  17. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.

  18. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle adoption. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters, and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
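    The idea of a density that minimizes total cost can be illustrated with a stylized cost function. The form below (installation cost growing with density, a user access cost shrinking with it) is an assumption for illustration only, not the ERDEC model itself; the closed-form optimum follows from setting the derivative to zero.

```python
import math

def optimal_station_density(c_station, c_access, demand):
    """Density (stations per unit area) minimizing a stylized total cost.

    total(rho) = c_station * rho + c_access * demand / rho
    The first term is installation cost per unit area; the second is a
    hypothetical access cost that falls as stations get denser. Setting
    d(total)/d(rho) = 0 gives rho* = sqrt(c_access * demand / c_station).
    This cost form is an illustrative assumption, not the paper's model.
    """
    return math.sqrt(c_access * demand / c_station)
```

    Any convex trade-off of this kind yields an interior optimum, which is why the model can map technological parameters (which shift the coefficients) directly into changes in the optimal density.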

  19. Model of succession in degraded areas based on carabid beetles (Coleoptera, Carabidae)

    PubMed Central

    Schwerk, Axel; Szyszko, Jan

    2011-01-01

    Abstract Degraded areas constitute challenging tasks with respect to sustainable management of natural resources. Maintaining or even establishing certain successional stages seems to be particularly important. This paper presents a model of the succession in five different types of degraded areas in Poland based on changes in the carabid fauna. Mean Individual Biomass of Carabidae (MIB) was used as a numerical measure for the stage of succession. The run of succession differed clearly among the different types of degraded areas. Initial conditions (origin of soil and origin of vegetation) and landscape related aspects seem to be important with respect to these differences. As characteristic phases, a ‘delay phase’, an ‘increase phase’ and a ‘stagnation phase’ were identified. In general, the runs of succession could be described by four different parameters: (1) ‘Initial degradation level’, (2) ‘delay’, (3) ‘increase rate’ and (4) ‘recovery level’. Applying the analytic solution of the logistic equation, characteristic values for the parameters were identified for each of the five area types. The model is of practical use, because it provides a possibility to compare the values of the parameters elaborated in different areas, to give hints for intervention and to provide prognoses about future succession in the areas. Furthermore, it is possible to transfer the model to other indicators of succession. PMID:21738419
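    The four parameters named in the abstract map naturally onto a delayed logistic curve. The sketch below is one plausible reading, holding MIB at the initial degradation level during the delay phase and applying the analytic solution of the logistic equation afterwards; the paper's exact formulation may differ.

```python
import math

def mib_succession(t, m0, delay, rate, k):
    """Mean Individual Biomass of Carabidae (MIB) at time t (years).

    Analytic solution of the logistic equation with the four parameters
    named in the abstract: initial degradation level m0, delay, increase
    rate, and recovery level k. Holding MIB at m0 during the delay phase
    is our reading of the 'delay phase'; the curve is continuous at t = delay.
    """
    if t <= delay:
        return m0
    a = (k - m0) / m0
    return k / (1.0 + a * math.exp(-rate * (t - delay)))
```

    Fitting (m0, delay, rate, k) per area type then allows the comparison between areas and the prognoses the abstract describes.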

  20. Investigations of respiratory control systems simulation

    NASA Technical Reports Server (NTRS)

    Gallagher, R. R.

    1973-01-01

    The Grodins' respiratory control model was investigated and it was determined that the following modifications were necessary before the model would be adaptable for current research efforts: (1) the controller equation must be modified to allow for integration of the respiratory system model with other physiological systems; (2) the system must be more closely correlated to the salient physiological functionings; (3) the respiratory frequency and the heart rate should be expanded to illustrate other physiological relationships and dependencies; and (4) the model should be adapted to particular individuals through a better defined set of initial parameter values in addition to relating these parameter values to the desired environmental conditions. Several of Milhorn's respiratory control models were also investigated in hopes of using some of their features as modifications for Grodins' model.

  1. Evaluation and linking of effective parameters in particle-based models and continuum models for mixing-limited bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo

    2013-08-01

    Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t^(-1/2) (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factor in R and Kf is of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, although the exact linkage may depend on the chemical and physical properties of the system.
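    The continuum side of the comparison reduces, in the well-mixed limit with equal initial concentrations, to dA/dt = -Kf A², whose analytic solution is A(t) = A0/(1 + Kf A0 t). The sketch below checks an explicit-Euler integration against that solution; it illustrates only this limiting case, not the particle-based interaction-radius model.

```python
def bimolecular_decay(a0, kf, dt, steps):
    """Explicit-Euler integration of dA/dt = -kf * A**2 for the irreversible
    reaction A + B -> C with equal initial concentrations (A = B).

    The continuum analytic solution is A(t) = a0 / (1 + kf * a0 * t); the
    loop below is a numerical check of that well-mixed limiting case.
    """
    a = a0
    for _ in range(steps):
        a -= kf * a * a * dt
    return a
```

    Replacing the constant kf with a time-dependent kf(t) ∝ t^(-1/2) in the same loop is how the SCG-system behavior described above would be mimicked on the continuum side.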

  2. Investigation of the SCS-CN initial abstraction ratio using a Monte Carlo simulation for the derived flood frequency curves

    NASA Astrophysics Data System (ADS)

    Caporali, E.; Chiarello, V.; Galeati, G.

    2014-12-01

    Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: an indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to derive a frequency distribution of peak runoff, using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. The Monte Carlo simulation generates a sample of runoff events from stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of rainfall storm events is assumed to follow the GP law, whose parameters are estimated through the GEV parameters of the annual maximum data. The evaluation of the initial abstraction ratio is investigated since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration-excess mechanism. To take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model the choice of the threshold is an essential issue, based mainly on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maximum discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful tool to provide more robust estimation of the results obtained by direct statistical methods.
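    The core of the indirect method is the SCS-CN transformation Q = (P - Ia)²/(P - Ia + S) with Ia = λS, sampled over random storms. The sketch below uses the standard equations; the uniform storm sampler in the usage is a placeholder for the paper's GP rainfall distribution.

```python
import random

def scs_cn_runoff(p, cn, lam=0.2):
    """SCS-CN direct runoff (mm) for storm depth p (mm).

    S = 25400/CN - 254 (mm); Ia = lam * S; Q = (P - Ia)**2 / (P - Ia + S)
    for P > Ia, else 0. lam is the initial abstraction ratio whose value
    the study re-evaluates.
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

def simulated_peak_sample(n, cn, lam, rain_sampler, seed=0):
    """Monte Carlo sample of runoff depths from a user-supplied storm-depth
    sampler (any callable taking a random.Random and returning mm of rain;
    the paper uses a Generalized Pareto model, not reproduced here)."""
    rng = random.Random(seed)
    return sorted(scs_cn_runoff(rain_sampler(rng), cn, lam) for _ in range(n))
```

    Empirical quantiles of the sorted sample give the derived flood frequency curve; re-running with different λ values exposes the sensitivity the abstract discusses.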

  3. Explosive decomposition of ethylene oxide at elevated condition: effect of ignition energy, nitrogen dilution, and turbulence.

    PubMed

    Pekalski, A A; Zevenbergen, J F; Braithwaite, M; Lemkowitz, S M; Pasman, H J

    2005-02-14

    An experimental and theoretical investigation of the explosive decomposition of ethylene oxide (EO) at fixed initial experimental parameters (T = 100 °C, P = 4 bar) in a 20-l sphere was conducted. Safety-related parameters, namely the maximum explosion pressure, the maximum rate of pressure rise, and the Kd values, were experimentally determined for pure ethylene oxide and ethylene oxide diluted with nitrogen. The influence of the ignition energy on the explosion parameters was also studied. All these dependencies are quantified in empirical formulas. Additionally, the effect of turbulence on the explosive decomposition of ethylene oxide was investigated. In contrast to previous studies, it is found that turbulence significantly influences the explosion severity parameters, mostly the rate of pressure rise. Thermodynamic models are used to calculate the maximum explosion pressure of pure and of nitrogen-diluted ethylene oxide at different initial temperatures. Soot formation was observed experimentally; the relation between the amount of soot formed and the explosion pressure was both measured and calculated.

  4. A study of application of remote sensing to river forecasting. Volume 2: Detailed technical report, NASA-IBM streamflow forecast model user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Model is described along with data preparation, determining model parameters, initializing and optimizing parameters (calibration) selecting control options and interpreting results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch processing mode on any System/360 or 370 that has the memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previous to but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.

  5. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator to evaluate the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method, and so on. Among them, the knife-edge method is quite efficient, easy to use, and recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is therefore proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Given its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with insensitivity to the initial values. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and leaning angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
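    The Fermi ESF model and a derivative-free parameter fit can be sketched as follows. The paper uses Powell's direction-set method; the multi-scale coordinate search below is a simplified stand-in sharing its derivative-free character, not the Powell algorithm itself, and the synthetic edge in the usage is illustrative.

```python
import math

def fermi_esf(x, a, b, c, d):
    """Fermi-function edge-spread model: a / (1 + exp(-(x - b)/c)) + d.

    The logistic argument is clamped to avoid overflow for extreme x."""
    z = (x - b) / c
    if z > 30.0:
        return a + d
    if z < -30.0:
        return d
    return a / (1.0 + math.exp(-z)) + d

def fit_fermi(xs, ys, p0, spans, iters=60):
    """Crude coordinate-descent least-squares fit of (a, b, c, d).

    p0 is the initial guess and spans the per-parameter search half-widths,
    shrunk each sweep. A Powell-style direction-set search would converge
    faster; this simplified version keeps the sketch dependency-free.
    """
    def sse(p):
        if p[2] <= 0:                     # edge width must stay positive
            return float("inf")
        return sum((fermi_esf(x, *p) - y) ** 2 for x, y in zip(xs, ys))

    p = list(p0)
    for _ in range(iters):
        for i in range(len(p)):
            cands = [p[i] + s for s in (-spans[i], -spans[i] / 2, 0.0,
                                        spans[i] / 2, spans[i])]
            p[i] = min(cands, key=lambda v: sse(p[:i] + [v] + p[i + 1:]))
        spans = [s * 0.7 for s in spans]
    return p
```

    Differentiating the fitted ESF gives the line-spread function, whose Fourier transform magnitude is the MTF.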

  6. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM), aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator on the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
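    The TM dynamics described above can be sketched directly: starting from the initiator set, nodes activate when the active fraction of their neighbors reaches the threshold, until no further change occurs. The tiny path graph in the test is illustrative only; the study uses large ER and other synthetic graphs.

```python
def threshold_cascade(adj, threshold, initiators):
    """Threshold Model spread on a graph given as {node: [neighbors]}.

    An inactive node activates when the fraction of its neighbors that are
    active meets the (uniform) threshold; sweeps repeat until a fixed point
    is reached. Returns the final active set.
    """
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for node, neighbors in adj.items():
            if node in active or not neighbors:
                continue
            frac = sum(n in active for n in neighbors) / len(neighbors)
            if frac >= threshold:
                active.add(node)
                changed = True
    return active
```

    Measuring len(threshold_cascade(...)) while adding initiators one by one gives the per-initiator spread contribution that the abstract analyzes near the transition point.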

  7. Research on flow behaviors of the constituent grains in ferrite-martensite dual phase steels based on nanoindentation measurements

    NASA Astrophysics Data System (ADS)

    Gou, Rui-bin; Dan, Wen-jiao; Zhang, Wei-gang; Yu, Min

    2017-07-01

    To investigate the flow properties of the constituent grains in ferrite-martensite dual-phase steel, both the flow curve of an individual grain and the differences in flow behavior among grains were investigated using a classical dislocation-based model and the nanoindentation technique. In the analysis of grain features, grain size, grain shape, and martensite proximity around a ferrite grain were parameterized by the area-equivalent circle diameter of the grain d, the grain shape coefficient λ, and the martensite proximity coefficient p, respectively. All three grain features significantly influenced the initial grain strength, which increases as the grain size d decreases and as the grain shape and martensite proximity coefficients increase. To describe the flow behavior of a single grain, both single-parameter and multi-parameter empirical formulas for the initial grain strength were proposed, with the three grain features as evaluation parameters. It was found that martensite proximity is an important determinant of ferrite initial strength, while the influence of grain size is minimal. The influence of individual grains on the overall flow curve of the steel was investigated using an improved overall-stress flow model. The predicted overall flow curve was in good agreement with the experimental one when the flow behaviors of all constituent grains in the evaluated region were fully considered.

  8. Protein folding, protein structure and the origin of life: Theoretical methods and solutions of dynamical problems

    NASA Technical Reports Server (NTRS)

    Weaver, D. L.

    1982-01-01

    Theoretical methods and solutions of the dynamics of protein folding, protein aggregation, protein structure, and the origin of life are discussed. The elements of a dynamic model representing the initial stages of protein folding are presented. The calculation and experimental determination of the model parameters are discussed. The use of computer simulation for modeling protein folding is considered.

  9. Reduction of uncertainty for estimating runoff with the NRCS CN model by the adaptation to local climatic conditions

    NASA Astrophysics Data System (ADS)

    Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.

    2016-04-01

    Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows the watershed response to be identified, forecast, and explained. For that purpose, the Natural Resources Conservation Service Curve Number method (NRCS CN) is the most widely recognized conceptual lumped model in the field of rainfall-runoff estimation. Nevertheless, there is still an ongoing discussion about the procedure to determine the portion of rainfall retained in the watershed before runoff is generated, called the initial abstraction. This quantity is computed as a ratio (λ) of the potential maximum soil retention S of the watershed. Initially, this ratio was assumed to be 0.2, but it has since been proposed to be revised to 0.05. However, the current procedures for converting NRCS CN model parameters obtained under a different hypothesis about λ do not incorporate any adaptation to the climatic conditions of each watershed. For this reason, we propose a new, simple method for computing model parameters that adapts to local conditions by taking into account regional patterns of climate. After checking the goodness of this procedure against the existing ones in 34 watersheds located in Ohio and Texas (United States), we conclude that this methodology is the most accurate and efficient alternative for refitting the initial abstraction ratio.
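    The fixed conversion the abstract argues against can be sketched alongside the runoff equation. The S-space relation below is the widely cited empirical conversion attributed to Hawkins and co-workers (S in inches); treat the constants as reported values from that literature, not from this paper, whose point is precisely that such a one-size-fits-all conversion ignores local climate.

```python
def convert_s_020_to_005(s_inches):
    """Convert maximum potential retention S fitted with lam = 0.20 to the
    equivalent value for lam = 0.05, using the commonly cited empirical
    relation S_0.05 = 1.33 * S_0.20**1.15 (S in inches). A climate-adapted
    refit, as the paper proposes, would replace these fixed constants.
    """
    return 1.33 * s_inches ** 1.15

def runoff(p, s, lam):
    """NRCS CN runoff with initial abstraction Ia = lam * S (units of p)."""
    ia = lam * s
    return 0.0 if p <= ia else (p - ia) ** 2 / (p - ia + s)
```

    Comparing runoff(p, s, 0.2) with runoff(p, convert_s_020_to_005(s), 0.05) over a storm record is the kind of check against which the climate-adapted parameters are evaluated.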

  10. Reactive flow model development for PBXW-126 using modern nonlinear optimization methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.; Simpson, R.L.; Urtiew, P.A.

    1996-05-01

    The initiation and detonation behavior of PBXW-126 has been characterized and is described. PBXW-126 is a composite explosive consisting of approximately equal amounts of RDX, AP, Al, and NTO with a polyurethane binder. The three-term ignition and growth of reaction model parameters (ignition plus two growth terms) have been found using nonlinear optimization methods to determine the "best" set of model parameters. The ignition term treats the initiation of up to 0.5% of the RDX. The first growth term in the model treats the RDX growth of reaction up to 20% reacted. The second growth term treats the subsequent growth of reaction of the remaining AP/Al/NTO. The unreacted equation of state (EOS) was determined from the wave profiles of embedded-gauge tests, while the JWL product EOS was determined from cylinder expansion test results. The nonlinear optimization code NLQPEB/GLO was used to determine the "best" set of coefficients for the three-term Lee-Tarver ignition and growth of reaction model. © 1996 American Institute of Physics.

  11. Phenol biodegradation by immobilized Pseudomonas putida FNCC-0071 cells in alginate beads

    NASA Astrophysics Data System (ADS)

    Hakim, Lukman Nul; Rochmadi, Sutijan

    2017-06-01

    Phenol is an industrial liquid waste component that is harmful to the environment, so it must be degraded. It can be degraded by immobilized Pseudomonas putida FNCC-0071 cells. Kinetics and mass-transfer data are needed to design this process; these can be estimated with the dynamic model proposed in this study, which involves simultaneous diffusion and reaction in the alginate beads and the liquid bulk. The preliminary stage of the phenol biodegradation process was cell acclimatization, in which the cells were acclimated to phenol as the carbon source (substrate). The acclimated cells were then immobilized in alginate beads by the extrusion method. The initial phenol concentration in the solution was varied from 350 to 850 ppm, and 60 g of cell-loaded alginate beads were added to the solution in a batch reactor, where biodegradation then took place. In this study, the average radius of the alginate beads was 0.152 cm. The reaction kinetics can be explained by the Blanch kinetic model: the parameter μmax' decreases with increasing initial phenol concentration, while the parameters KM, KM', and kt increase with it. The value of the parameter β is almost zero. The effective diffusivities of phenol and cells are 1.11 × 10^-5 cm^2 s^-1 (±4.5%) and 1.39 × 10^-7 cm^2 s^-1 (±0.04%). The partition coefficients of phenol and cells are 0.39 ± 15% and 2.22 ± 18%, respectively.

  12. Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin

    2015-11-01

    An important concern in hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a model to predict fracture initiation in hot compression of the burn-resistant beta-stabilized titanium alloy Ti-35V-15Cr-0.3Si-0.1C, using a combined approach of upsetting experiments, theoretical failure criteria, and finite element (FE) simulation techniques. A series of isothermal compression experiments on cylindrical specimens was first conducted in the temperature range 900-1150 °C and strain-rate range 0.01-10 s^-1 to obtain fracture samples and primary reduction data. On that basis, eight commonly used theoretical failure criteria were compared; the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By correlating the threshold values with the deformation parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential-decay relationship between the threshold values and the Zener-Hollomon parameter (Z), and the relative error of the model is less than 15%. The model was then applied successfully to the cogging of Ti-35V-15Cr-0.3Si-0.1C billet.
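    The Zener-Hollomon parameter and a threshold law decaying with it can be sketched as follows. The abstract reports an exponential-decay relationship but not its coefficients or exact form, so the power-law-in-Z form (equivalently, exponential decay in ln Z) and all constants below are placeholders, including the activation energy in the usage.

```python
import math

R_GAS = 8.314  # universal gas constant, J / (mol K)

def zener_hollomon(strain_rate, temp_k, q_act):
    """Z = strain_rate * exp(Q / (R T)): the temperature-compensated
    strain rate. q_act is the deformation activation energy in J/mol."""
    return strain_rate * math.exp(q_act / (R_GAS * temp_k))

def fracture_threshold(z, c0, k):
    """Illustrative decaying threshold C(Z) = c0 * exp(-k * ln Z) = c0 * Z**(-k).

    c0 and k are hypothetical fitting constants; the paper determines the
    actual Oh-criterion threshold values from FE-simulated upsetting tests.
    """
    return c0 * math.exp(-k * math.log(z))
```

    The qualitative behavior matches the abstract: hotter, slower deformation (lower Z) tolerates more damage accumulation before fracture.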

  13. Assessment of compressive failure process of cortical bone materials using damage-based model.

    PubMed

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main failure factors of cortical bone are aging or osteoporosis, accidents and high-energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur, in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model is validated comparatively using the predicted and measured structural response, as load versus compressive displacement, in simulation and experiment. FE results indicate that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed; this coincided with the degradation of structural compressive stiffness, followed by a large amount of strain-energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, is examined for different rates. Results show that setting this rate close to the initial slope of the experimentally determined damage parameter gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2001-01-01

    An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single-mode delamination and in mixed-mode bending tests.
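    The power-law propagation criterion mentioned above has the standard form (GI/GIc)^α + (GII/GIIc)^α ≥ 1 on the energy release rates; a minimal check of that inequality is sketched below (2-D mode mix only; the three-parameter criterion of the paper is not reproduced).

```python
def power_law_failure_index(g_i, g_ii, g_ic, g_iic, alpha):
    """Power-law mixed-mode index: delamination is predicted to grow when
    (GI/GIc)**alpha + (GII/GIIc)**alpha reaches 1. GIc, GIIc are the
    critical energy release rates in modes I and II."""
    return (g_i / g_ic) ** alpha + (g_ii / g_iic) ** alpha

def delaminates(g_i, g_ii, g_ic, g_iic, alpha=1.0):
    """True when the power-law criterion is met (alpha = 1 is the linear
    interaction commonly used as a baseline)."""
    return power_law_failure_index(g_i, g_ii, g_ic, g_iic, alpha) >= 1.0
```

    In the decohesion element, the same index is evaluated from the energies dissipated by the interfacial tractions to decide when the softening law completes.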

  15. Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

    EPA Science Inventory

    Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

  16. Influence of different computational approaches for stent deployment on cerebral aneurysm haemodynamics

    PubMed Central

    Bernardini, Annarita; Larrabide, Ignacio; Morales, Hernán G.; Pennati, Giancarlo; Petrini, Lorenza; Cito, Salvatore; Frangi, Alejandro F.

    2011-01-01

    Cerebral aneurysms are abnormal focal dilatations of artery walls. The interest in virtual tools to help clinicians assess the effectiveness of different procedures for cerebral aneurysm treatment is constantly growing. This study is focused on the analysis of the influence of different stent deployment approaches on intra-aneurysmal haemodynamics using computational fluid dynamics (CFD). A self-expanding stent was deployed in an idealized aneurysmatic cerebral vessel in two initial positions. Different cases characterized by a progression of simplifications on stent modelling (geometry and material) and vessel material properties were set up, using finite element and fast virtual stenting methods. Then, CFD analysis was performed for untreated and stented vessels. Haemodynamic parameters were analysed qualitatively and quantitatively, comparing the cases and the two initial positions. All the cases predicted a reduction of average wall shear stress and average velocity of almost 50 per cent after stent deployment for both initial positions. Results highlighted that, although some differences in calculated parameters existed across the cases based on the modelling simplifications, all the approaches described the most important effects on intra-aneurysmal haemodynamics. Hence, simpler and faster modelling approaches could be included in the clinical workflow and, despite the adopted simplifications, support clinicians in treatment planning. PMID:22670204

  17. Computational Modeling of the Dielectric Barrier Discharge (DBD) Device for Aeronautical Applications

    DTIC Science & Technology

    2006-06-01

    electron energy equation are solved semi-implicitly in a sequential manner. Each of the governing equations is solved by casting them onto a tridiagonal ...actuator for different device configurations and operating parameters. This will provide the Air Force with a low cost, quick turn around...Atmosphere (ATM) (20:8). Initially, the applied potential difference on the electrodes must be great enough to initiate gas breakdown. While

  18. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    PubMed

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results for the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two segmentation steps into one. The method has low computational complexity for different kinds of medical images and high segmentation precision. It comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves a comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with an overall metric UM of 0.9845, CM of 0.8142, and TM of 0.0726. The algorithm has great potential for carrying out the pre-processing and initial segmentation steps in various medical images, which is a prerequisite for assisting physicians to detect and diagnose clinical cases.
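The firing mechanism behind such a simplified PCNN can be illustrated with a minimal sketch: each pixel fires when its intensity exceeds a decaying dynamic threshold, and firing boosts the threshold again so a pixel does not immediately refire. The decay rate `alpha_e` and refractory magnitude `V` below are arbitrary illustrative values, not the adaptively estimated parameters of the paper:

```python
import numpy as np

def spcnn_segment(img, alpha_e=0.5, V=20.0, steps=10):
    """Return, per pixel, the iteration at which it first fires (-1 if never).

    img: intensities normalized to [0, 1]; brighter pixels fire earlier,
    which groups pixels of similar intensity into segmentation layers.
    """
    E = np.ones_like(img)                 # dynamic threshold, starts above all intensities
    fired = np.full(img.shape, -1)        # first-firing iteration per pixel
    for t in range(steps):
        Y = (img > E).astype(float)       # pulse output: fire where intensity beats threshold
        newly = (Y > 0) & (fired < 0)
        fired[newly] = t
        E = E * np.exp(-alpha_e) + V * Y  # exponential decay plus refractory boost for fired pixels
    return fired

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.2, 0.1],
                [0.1, 0.1, 0.1]])
labels = spcnn_segment(img)               # bright region fires first, darker regions later
```

The first-firing times act as segmentation labels: all pixels of similar intensity fire in the same iteration.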

  19. Optimization of γ-aminobutyric acid production by Lactobacillus plantarum Taj-Apis362 from honeybees.

    PubMed

    Tajabadi, Naser; Ebrahimpour, Afshin; Baradaran, Ali; Rahim, Raha Abdul; Mahyudin, Nor Ainy; Manap, Mohd Yazid Abdul; Bakar, Fatimah Abu; Saari, Nazamid

    2015-04-15

    Dominant strains of lactic acid bacteria (LAB) isolated from honey bees were evaluated for their γ-aminobutyric acid (GABA)-producing ability. Out of 24 strains, strain Taj-Apis362 showed the highest GABA-producing ability (1.76 mM) in MRS broth containing 50 mM initial glutamic acid cultured for 60 h. Effects of fermentation parameters, including initial glutamic acid level, culture temperature, initial pH and incubation time on GABA production were investigated via a single parameter optimization strategy. The optimal fermentation condition for GABA production was modeled using response surface methodology (RSM). The results showed that the culture temperature was the most significant factor for GABA production. The optimum conditions for maximum GABA production by Lactobacillus plantarum Taj-Apis362 were an initial glutamic acid concentration of 497.97 mM, culture temperature of 36 °C, initial pH of 5.31 and incubation time of 60 h, which produced 7.15 mM of GABA. The value is comparable with the predicted value of 7.21 mM.
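A single-factor slice of such an RSM fit can be sketched by regressing yield on a quadratic in temperature and locating the stationary point; the data points below are hypothetical, not the study's measurements:

```python
import numpy as np

# hypothetical GABA yields (mM) at five culture temperatures (degrees C)
temps = np.array([26., 30., 34., 38., 42.])
gaba = np.array([3.1, 5.2, 6.9, 6.4, 4.0])

# fit the quadratic response surface y = b0 + b1*T + b2*T^2 by least squares
X = np.column_stack([np.ones_like(temps), temps, temps**2])
b, *_ = np.linalg.lstsq(X, gaba, rcond=None)

# stationary point of the fitted parabola = predicted optimal temperature
t_opt = -b[1] / (2 * b[2])
```

In the multi-factor case the same idea applies with cross terms (e.g. temperature x pH), and the optimum is found by solving the gradient system rather than a single vertex formula.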

  20. History dependent quantum walk on the cycle with an unbalanced coin

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2015-06-01

    Recently, a new model of quantum walk, utilizing recycled coins, was introduced; however, little is yet known about its properties. In this paper, we study its behavior on the cycle graph. In particular, we consider its time-averaged distribution and how it is affected by the walk's "memory parameter", a real parameter between zero and eight which affects the walk's coin flip operator. Despite the infinite number of possible parameter values, our analysis provides evidence that only a few produce non-uniform behavior. Our analysis also shows that the initial state and the cycle size modulo four both affect the behavior of this walk. We also prove an interesting relationship between the recycled coin model and a different memory-based quantum walk recently proposed.
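For reference, a standard coined quantum walk on a cycle (with a balanced Hadamard coin rather than the paper's parameterized recycled-coin operator) and its time-averaged distribution can be computed directly:

```python
import numpy as np

N = 8                                        # cycle size
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # balanced coin; the paper's coin is parameterized

# state psi[coin, position], walker starts localized at position 0
psi = np.zeros((2, N), dtype=complex)
psi[0, 0] = 1.0

T = 100
avg = np.zeros(N)                            # time-averaged position distribution
for t in range(T):
    psi = np.einsum('ab,bn->an', H, psi)     # apply coin to the coin index
    shifted = np.empty_like(psi)
    shifted[0] = np.roll(psi[0], 1)          # coin state 0 steps clockwise
    shifted[1] = np.roll(psi[1], -1)         # coin state 1 steps counterclockwise
    psi = shifted
    avg += (np.abs(psi) ** 2).sum(axis=0)    # accumulate position probabilities
avg /= T
```

Replacing `H` with a parameter-dependent coin (and, for the recycled-coin model, enlarging the state to carry the previous coin) lets one scan how the time-averaged distribution departs from uniform.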

  1. A Kinematic Calibration Process for Flight Robotic Arms

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
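The standard differential formulation mentioned above reduces calibration to a linear least-squares problem in the kinematic parameter corrections. A toy sketch for a planar two-link arm (hypothetical link lengths and poses, far simpler than the MSL arm's full kinematic and deflection model) is:

```python
import numpy as np

def fk(L, q):
    """Forward kinematics of a planar two-link arm: tool position for link lengths L, joint angles q."""
    return np.array([L[0] * np.cos(q[0]) + L[1] * np.cos(q[0] + q[1]),
                     L[0] * np.sin(q[0]) + L[1] * np.sin(q[0] + q[1])])

true_L = np.array([1.02, 0.97])   # actual link lengths (unknown in practice)
nom_L = np.array([1.00, 1.00])    # nominal model values
qs = [np.array([0.1 * i, 0.2 * i]) for i in range(1, 9)]   # calibration poses
meas = [fk(true_L, q) for q in qs]                         # metrology measurements

# differential formulation: measured - nominal position = J_L @ dL, solved linearly
rows, rhs = [], []
for q, m in zip(qs, meas):
    J = np.array([[np.cos(q[0]), np.cos(q[0] + q[1])],     # Jacobian of position w.r.t. link lengths
                  [np.sin(q[0]), np.sin(q[0] + q[1])]])
    rows.append(J)
    rhs.append(m - fk(nom_L, q))

dL, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
cal_L = nom_L + dL                                         # calibrated link lengths
```

In a full calibration the parameter vector would also include joint offsets, stiffness, and thermal terms, and the Jacobian would be evaluated numerically, but the stacked linear solve is the same.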

  2. Multiscale Informatics for Low-Temperature Propane Oxidation: Further Complexities in Studies of Complex Reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Michael P.; Goldsmith, C. Franklin; Klippenstein, Stephen J.

    2015-07-16

    We have developed a multi-scale approach (Burke, M. P.; Klippenstein, S. J.; Harding, L. B. Proc. Combust. Inst. 2013, 34, 547–555.) to kinetic model formulation that directly incorporates elementary kinetic theories as a means to provide reliable, physics-based extrapolation to unexplored conditions. Here, we extend and generalize the multi-scale modeling strategy to treat systems of considerable complexity – involving multi-well reactions, potentially missing reactions, non-statistical product branching ratios, and non-Boltzmann (i.e. non-thermal) reactant distributions. The methodology is demonstrated here for a subsystem of low-temperature propane oxidation, as a representative system for low-temperature fuel oxidation. A multi-scale model is assembled and informed by a wide variety of targets that include ab initio calculations of molecular properties, rate constant measurements of isolated reactions, and complex systems measurements. Active model parameters are chosen to accommodate both “parametric” and “structural” uncertainties. Theoretical parameters (e.g. barrier heights) are included as active model parameters to account for parametric uncertainties in the theoretical treatment; experimental parameters (e.g. initial temperatures) are included to account for parametric uncertainties in the physical models of the experiments. RMG software is used to assess potential structural uncertainties due to missing reactions. Additionally, branching ratios among product channels are included as active model parameters to account for structural uncertainties related to difficulties in modeling sequences of multiple chemically activated steps. The approach is demonstrated here for interpreting time-resolved measurements of OH, HO2, n-propyl, i-propyl, propene, oxetane, and methyloxirane from photolysis-initiated low-temperature oxidation of propane at pressures from 4 to 60 Torr and temperatures from 300 to 700 K.
In particular, the multi-scale informed model provides a consistent quantitative explanation of both ab initio calculations and time-resolved species measurements. The present results show that interpretations of OH measurements are significantly more complicated than previously thought – in addition to barrier heights for key transition states considered previously, OH profiles also depend on additional theoretical parameters for R + O2 reactions, secondary reactions, QOOH + O2 reactions, and treatment of non-Boltzmann reaction sequences. Extraction of physically rigorous information from those measurements may require more sophisticated treatment of all of those model aspects, as well as additional experimental data under more conditions, to discriminate among possible interpretations and ensure model reliability. Keywords: Optimization, Uncertainty quantification, Chemical mechanism, Low-Temperature Oxidation, Non-Boltzmann

  3. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters in the package. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
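One-step global regression can be sketched as fitting the primary and secondary models jointly against all isothermal curves at once, with a single residual vector. The Ratkowsky-type secondary model and synthetic data below are illustrative assumptions, not IPMP-Global Fit's actual model library:

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic isothermal growth curves: log10 count vs time (h) at three temperatures
t = np.linspace(0, 10, 11)

def growth(p, T, t):
    b, Tmin, N0 = p
    mu = (b * (T - Tmin)) ** 2    # secondary (Ratkowsky-type) model for the growth rate
    return N0 + mu * t            # primary (linear-phase) model for the curve itself

true_p = (0.08, 5.0, 3.0)
data = [(T, t, growth(true_p, T, t)) for T in (15.0, 25.0, 35.0)]

# one-step global fit: a single residual vector spanning every curve, so the
# kinetic parameters are estimated directly from all raw data simultaneously
def residuals(p):
    return np.concatenate([growth(p, T, tt) - y for T, tt, y in data])

fit = least_squares(residuals, x0=[0.05, 2.0, 2.0])
```

This is the structural difference from the classical two-step approach, where each isothermal curve is fitted separately and the secondary model is then fitted to the per-curve rate estimates.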

  4. Vacuum-induced Berry phases in single-mode Jaynes-Cummings models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yu; Wei, L. F.; Jia, W. Z.

    2010-10-15

    Motivated by work [Phys. Rev. Lett. 89, 220404 (2002)] on detecting vacuum-induced Berry phases with two-mode Jaynes-Cummings models (JCMs), we show here that, for a parameter-dependent single-mode JCM, certain atom-field states also acquire photon-number-dependent Berry phases after the parameter is slowly changed and eventually returned to its initial value. This geometric effect, related to the field quantization, persists even if the field is kept in its vacuum state. Specifically, a feasible Ramsey interference experiment with a cavity quantum electrodynamics system is designed to detect the vacuum-induced Berry phase.

  5. Monte Carlo exploration of Mikheyev-Smirnov-Wolfenstein solutions to the solar neutrino problem

    NASA Technical Reports Server (NTRS)

    Shi, X.; Schramm, D. N.; Bahcall, J. N.

    1992-01-01

    The paper explores the impact of astrophysical uncertainties on the Mikheyev-Smirnov-Wolfenstein (MSW) solution by calculating the allowed MSW solutions for 1000 different solar models with a Monte Carlo selection of solar model input parameters, assuming a full three-family MSW mixing. Applications are made to the chlorine, gallium, Kamiokande, and Borexino experiments. The initial GALLEX result limits the mixing parameters to the upper diagonal and the vertical regions of the MSW triangle. The expected event rates in the Borexino experiment are also calculated, assuming the MSW solutions implied by GALLEX.

  6. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which are in turn hampered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
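The local-in-global idea (an outer global search over the nonlinear in-stream parameters, with a nested linear solve for the shape-free weights) can be sketched on a toy problem. Plain random search stands in for the MCMC outer loop and ordinary least squares for the Gauss-Newton inner step; all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)

def basis(t, v):
    """Toy travel-time basis functions whose time scales depend on a nonlinear parameter v."""
    taus = np.array([1.0, 2.0, 4.0]) * v
    return np.exp(-t[:, None] / taus)

# synthetic "breakthrough curve": nonlinear parameter 1.5, shape-free weights w
true_v, true_w = 1.5, np.array([0.5, 0.3, 0.2])
obs = basis(t, true_v) @ true_w

best = (np.inf, None, None)
for _ in range(200):                                  # outer global search (MCMC stand-in)
    v = rng.uniform(0.5, 3.0)
    B = basis(t, v)
    w, *_ = np.linalg.lstsq(B, obs, rcond=None)       # inner local solve for shape-free weights
    sse = ((B @ w - obs) ** 2).sum()
    if sse < best[0]:
        best = (sse, v, w)
```

Because the objective is linear in the weights once the nonlinear parameter is fixed, the inner solve is cheap and exact, and the global search only has to explore the low-dimensional nonlinear parameter space.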

  7. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
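The Monte Carlo parameter sweep can be sketched generically: draw candidate muscle-parameter sets, score each against target activations, and keep the best. The `simulate` function below is a made-up stand-in for the musculoskeletal solver, and the bounds, reference parameters, and target activations are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(p):
    """Stand-in for the musculoskeletal simulation: maps a (force scale,
    fiber-length scale) pair to peak activations of two muscles. Entirely made up."""
    f, l = p
    return np.array([0.9 * f / (f + l), 0.5 * l / (0.3 + f)])

ref = np.array([1.3, 0.8])           # "physiological" parameters used to build a target
target = simulate(ref)               # observed activation pattern to match

best_err, best_p = np.inf, None
for _ in range(5000):                # Monte Carlo sweep of the muscle-parameter space
    p = rng.uniform([0.5, 0.5], [2.0, 2.0])
    err = np.abs(simulate(p) - target).max()   # worst-case activation mismatch
    if err < best_err:
        best_err, best_p = err, p
```

The combinatorial-reduction step in the study corresponds to pruning parameter dimensions that the sweep shows have little effect on the mismatch, before refining the remaining ones.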

  8. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (i.e., remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding of, and recommendations on, the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
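The weighting procedure can be sketched as follows: score each prior parameter sample by the coefficient of determination between its simulated state/flux and the remote-sensing series, then resample by those weights to obtain an a posteriori distribution. The toy model and series below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(0, 3, 30)
obs = np.sin(x)                      # stand-in remote-sensing time series (e.g. snow cover)

def model_flux(k):
    return np.sin(x * k)             # toy model state driven by a single parameter k

priors = rng.uniform(0.5, 1.5, 500)  # prior parameter samples
weights = []
for k in priors:
    sim = model_flux(k)
    r = np.corrcoef(sim, obs)[0, 1]
    weights.append(max(r, 0.0) ** 2) # coefficient of determination as the weight
weights = np.array(weights) / np.sum(weights)

# weighted resampling yields the a posteriori parameter distribution
posterior = rng.choice(priors, size=2000, p=weights)
```

In the multi-product case, each product contributes its own weight and the conditional probabilities are combined multiplicatively before resampling.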

  9. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996--Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.

    2001-08-29

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements to the conceptual model framework are required. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology used to develop this zonation is discussed.

  10. Simulation of a Radio-Frequency Photogun for the Generation of Ultrashort Beams

    NASA Astrophysics Data System (ADS)

    Nikiforov, D. A.; Levichev, A. E.; Barnyakov, A. M.; Andrianov, A. V.; Samoilov, S. L.

    2018-04-01

    A radio-frequency photogun for the generation of ultrashort electron beams, to be used in fast electron diffraction, wakefield acceleration experiments, and the design of accelerating structures of the millimeter range, is modeled. The beam parameters at the photogun output needed for each type of experiment are determined. The general layout of the photogun is given, its electrodynamic parameters are calculated, and the accelerating field distribution is obtained. The particle dynamics is analyzed in the context of the required output beam parameters, and the optimal initial beam characteristics and field amplitudes are chosen. Conclusions are drawn regarding the obtained beam parameters.

  11. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  12. The methodology of choice Cam-Clay model parameters for loess subsoil

    NASA Astrophysics Data System (ADS)

    Nepelski, Krzysztof; Błazik-Borowa, Ewa

    2018-01-01

    The paper deals with the calibration of an FEM subsoil model described by the constitutive Cam-Clay model. A four-storey residential building and the solid substrate are modelled. Identification of the substrate is made using research drilling, CPT static tests, the DMT Marchetti dilatometer, and laboratory tests. The latter are performed on intact soil specimens taken from a wide planning trench at the depth of the foundation. The real building settlements were measured as the vertical displacement of benchmarks; these measurements were carried out periodically during the erection of the building and its operation. Initially, the Cam-Clay model parameters were determined on the basis of the laboratory tests; later, they were corrected by taking into consideration the results of numerical analyses (of the whole building and its parts) and the real building settlements.

  13. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three part study, the experimental results presented in part 2 are compared against 3-dimensional MHD simulations. This improved experimental capability, along with advanced simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  14. High fidelity studies of exploding foil initiator bridges, Part 2: Experimental results

    NASA Astrophysics Data System (ADS)

    Neal, William; Bowden, Mike

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA MHD, it is now possible to simulate these components in three dimensions and predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this second paper of a three part study, data is presented from a flexible foil EFI header experiment. This study has shown that there is significant bridge expansion before the time of peak voltage and that heating within the bridge material is spatially affected by the microstructure of the metal foil.

  15. Performance of the Extravehicular Mobility Unit (EMU) Airlock Coolant Loop Remediation (A/L CLR) Hardware - Final

    NASA Technical Reports Server (NTRS)

    Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John

    2011-01-01

    An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.

  16. [Model and analysis of spectropolarimetric BRDF of painted target based on GA-LM method].

    PubMed

    Chen, Chao; Zhao, Yong-Qiang; Luo, Li; Pan, Quan; Cheng, Yong-Mei; Wang, Kai

    2010-03-01

    Models based on microfacet theory were used to describe the spectropolarimetric BRDF (bidirectional reflectance distribution function) with experimental data. The spectropolarimetric BRDF values of targets were measured by comparison to a standard whiteboard, which was treated as Lambertian with a uniform reflectance of up to 98% at an arbitrary angle of view. The relationships between the measured spectropolarimetric BRDF values and the angles of view, as well as wavelengths in the range of 400-720 nm, were then analyzed in detail. The initial value that must be supplied to the LM optimization method is difficult to obtain and greatly impacts the results. Therefore, an optimization approach combining a genetic algorithm (GA) and Levenberg-Marquardt (LM) was utilized to retrieve the parameters of the nonlinear models, with the initial values obtained using the GA stage. Simulated experiments were used to test the efficiency of the adopted optimization method and confirmed that it performs well and can retrieve the parameters of the nonlinear model efficiently. The correctness of the models was validated with real outdoor sampled data. The parameter retrieved from the DoP model is the refractive index of the measured targets, and the refractive index of targets painted the same color but made of different materials was also obtained. It is concluded that the refractive indices of these two targets are very close, and that this slight difference can be attributed to differences in the condition of the painted targets' surfaces rather than in the targets' materials.
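The two-stage GA-then-LM strategy can be sketched with SciPy, using differential evolution (a GA-like global optimizer) to supply the initial value for a Levenberg-Marquardt refinement. The DoP-like model form, parameter values, and data below are made up for illustration, not the paper's microfacet model:

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

x = np.linspace(40, 80, 30)      # hypothetical viewing angles (degrees)

def dop_model(p, x):
    """Made-up nonlinear DoP-like curve with amplitude a and index-like parameter n."""
    a, n = p
    th = np.radians(x)
    return a * np.sin(th) ** 2 / (n + np.cos(th))

true_p = np.array([1.4, 1.6])
y = dop_model(true_p, x)         # noise-free synthetic measurements

# stage 1 (GA-like global search): find a good initial value within bounds
res_global = differential_evolution(
    lambda p: np.sum((dop_model(p, x) - y) ** 2),
    bounds=[(0.1, 5.0), (1.0, 3.0)], seed=3, tol=1e-10)

# stage 2 (local Levenberg-Marquardt): refine from the global stage's result
res_local = least_squares(lambda p: dop_model(p, x) - y, res_global.x, method='lm')
```

The division of labor matches the abstract: the global stage removes the sensitivity to the initial guess, and LM supplies fast local convergence to the nonlinear least-squares optimum.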

  17. Theory of the lattice Boltzmann Method: Dispersion, Dissipation, Isotropy, Galilean Invariance, and Stability

    NASA Technical Reports Server (NTRS)

    Lallemand, Pierre; Luo, Li-Shi

    2000-01-01

    The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically, by using a perturbation technique, or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long wavelength limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.
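The numerical branch of such a dispersion analysis can be sketched for the simplest case: linearize a D1Q3 BGK model about a resting uniform state and scan the eigenvalue magnitudes of the Fourier-space evolution operator (streaming times linearized collision) over the wave vector k. This is a far simpler model than the paper's generalized moment-space LBE, but the stability criterion (all eigenvalue magnitudes at most one) is the same:

```python
import numpy as np

c = np.array([0, 1, -1])             # D1Q3 discrete velocities
w = np.array([2/3, 1/6, 1/6])        # lattice weights
# Jacobian of the equilibrium about a resting uniform state:
# f_eq_i = w_i * (rho + 3 c_i j)  =>  d f_eq_i / d f_j = w_i (1 + 3 c_i c_j)
J = w[:, None] * (1 + 3 * np.outer(c, c))

def max_growth(tau, nk=64):
    """Largest eigenvalue magnitude of the linearized LBE over a grid of wave vectors."""
    C = np.eye(3) - (np.eye(3) - J) / tau        # linearized BGK collision operator
    worst = 0.0
    for k in np.linspace(0, 2 * np.pi, nk, endpoint=False):
        S = np.diag(np.exp(-1j * k * c))         # streaming becomes diagonal in Fourier space
        lam = np.linalg.eigvals(S @ C)
        worst = max(worst, np.abs(lam).max())
    return worst
```

At rest the BGK model is linearly stable for tau > 1/2 (all growth factors bounded by one), while for tau < 1/2 the non-hydrodynamic "ghost" mode is over-relaxed and amplified; with a nonzero mean streaming velocity, the same scan reproduces the velocity-dependent stability boundaries discussed in the abstract.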

  18. On rate-state and Coulomb failure models

    USGS Publications Warehouse

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. 
In this case, a rate-state model behaves like a modified Coulomb failure model in which the failure stress threshold is lowered due to weakening, increasing the clock advance. The deviation from a Coulomb response also depends on the loading rate, elastic stiffness, initial conditions, and assumptions about how state evolves.
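The Coulomb clock advance and the rate-state seismicity-rate response contrasted above can be sketched numerically. This is a toy illustration with arbitrary assumed parameter values, not the paper's code; the rate expression is the standard rate-state result of Dieterich (1994):

```python
import math

def coulomb_clock_advance(stress_step, stressing_rate):
    """Coulomb model: a stress step simply advances failure by dtau/taudot,
    independent of when in the loading cycle it is applied."""
    return stress_step / stressing_rate

def dieterich_rate_ratio(t, stress_step, a_sigma, stressing_rate):
    """Seismicity rate r(t)/r0 after a stress step, following the standard
    rate-state result (Dieterich, 1994). t_a = a*sigma/taudot is the
    aftershock duration; the rate decays Omori-like back to background."""
    t_a = a_sigma / stressing_rate
    return 1.0 / ((math.exp(-stress_step / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0)

# A 2 MPa step under 0.5 MPa/yr tectonic loading advances Coulomb failure by 4 yr.
print(coulomb_clock_advance(2.0, 0.5))          # -> 4.0
# Immediately after the step the rate jumps by a factor exp(dtau/(a*sigma)) ...
print(dieterich_rate_ratio(0.0, 0.1, 0.05, 0.01))
# ... and returns to the background rate (ratio -> 1) at long times, as the
# abstract requires of any model consistent with aftershock sequences.
print(dieterich_rate_ratio(1000.0, 0.1, 0.05, 0.01))
```

The Coulomb limit corresponds to an instantaneous, infinite rate spike of zero duration; the rate-state expression spreads the same clock advance over a finite aftershock duration.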

  19. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. 
E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. 
A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any candidate detection event. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination among models describing the underlying physics are complicated by artifacts in the data and by uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or in software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We exemplify the ability to extract information about the source physics from signals that cover the neutron-star and black-hole binary parameter space over the component mass range 1M⊙-25M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
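As a minimal illustration of the mass parameters being inferred: the best-measured mass combination in gravitational-wave phasing is the chirp mass. The sketch below uses only the standard textbook definition (it is not code from the analysis) over the quoted 1-25 M⊙ range:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5): the combination of
    component masses that the waveform phase constrains most tightly."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Equal-mass 1.4 + 1.4 Msun binary (canonical double neutron star);
# for equal masses M_c reduces to m * 2**(-1/5).
print(round(chirp_mass(1.4, 1.4), 3))
# Asymmetric neutron-star--black-hole binary near the top of the studied range:
print(round(chirp_mass(1.4, 25.0), 3))
```

Individual component masses are much more weakly constrained than this combination, which is one reason full Bayesian parameter estimation over the joint parameter space is needed.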

  20. Modeling landslide runout dynamics and hazards: crucial effects of initial conditions

    NASA Astrophysics Data System (ADS)

    Iverson, R. M.; George, D. L.

    2016-12-01

    Physically based numerical models can provide useful tools for forecasting landslide runout and associated hazards, but only if the models employ initial conditions and parameter values that faithfully represent the states of geological materials on slopes. Many models assume that a landslide begins from a heap of granular material poised on a slope and held in check by an imaginary dam. A computer instruction instantaneously removes the dam, unleashing a modeled landslide that accelerates under the influence of a large force imbalance. Thus, an unrealistically large initial acceleration influences all subsequent modeled motion. By contrast, most natural landslides are triggered by small perturbations of statically balanced effective stress states, which are commonly caused by rainfall, snowmelt, or earthquakes. Landslide motion begins with an infinitesimal force imbalance and commensurately small acceleration. However, a small initial force imbalance can evolve into a much larger imbalance if feedback causes a reduction in resisting forces. A well-documented source of such feedback involves dilatancy coupled to pore-pressure evolution, which may either increase or decrease effective Coulomb friction—contingent on initial conditions. Landslide dynamics models that account for this feedback include our D-Claw model (Proc. Roy. Soc. Lon., Ser. A, 2014, doi: 10.1098/rspa.2013.0819 and doi:10.1098/rspa.2013.0820) and a similar model presented by Bouchut et al. (J. Fluid Mech., 2016, doi:10.1017/jfm.2016.417). We illustrate the crucial effects of initial conditions and dilatancy coupled to pore-pressure feedback by using D-Claw to perform simple test calculations and also by computing alternative behaviors of the well-documented Oso, Washington, and West Salt Creek, Colorado, landslides of 2014. We conclude that realistic initial conditions and feedbacks are essential elements in numerical models used to forecast landslide runout dynamics and hazards.
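The contrast between the two initializations can be illustrated with a toy rigid-block calculation (assumed friction values and slope; this is not D-Claw, just a sketch of the force-imbalance argument):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def initial_acceleration(slope_deg, friction_coeff):
    """Net downslope acceleration of a rigid block: g*(sin(theta) - mu*cos(theta)),
    clipped to zero when friction exceeds the driving stress (block stays put)."""
    th = math.radians(slope_deg)
    return max(0.0, G * (math.sin(th) - friction_coeff * math.cos(th)))

# 'Dam-break' style initialization: material released against an assumed low
# dynamic friction (mu = 0.3) on a 30 degree slope -> large initial acceleration
# that colors all subsequent modeled motion.
print(initial_acceleration(30.0, 0.3))

# Statically balanced start: friction initially equals tan(theta), so the
# initial force imbalance and acceleration are essentially zero; motion can
# only grow through feedbacks such as dilatancy-coupled pore-pressure change.
print(initial_acceleration(30.0, math.tan(math.radians(30.0))))
```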

  1. Baby Skyrme models without a potential term

    NASA Astrophysics Data System (ADS)

    Ashcroft, Jennifer; Haberichter, Mareike; Krusch, Steffen

    2015-05-01

    We develop a one-parameter family of static baby Skyrme models that do not require a potential term to admit topological solitons. This is a novel property, as the standard baby Skyrme model must contain a potential term in order to have stable soliton solutions, even though the Skyrme model does not require one. Our new models satisfy an energy bound that is linear in the topological charge and can be saturated in an extreme limit. They also satisfy a virial theorem that is shared by the Skyrme model. We calculate the solitons of our new models numerically and observe that their form depends significantly on the choice of parameter. At one extreme we find compactons, while at the other there is a scale-invariant model in which solitons can be obtained exactly as solutions of a Bogomolny equation. We provide an initial investigation of these solitons and compare them with the baby Skyrmions of other models.
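Schematically, the energy bound and its saturation described above take the usual Bogomolny form. The notation below is generic (standard degree formula for a planar field, with an unspecified constant C), not the paper's exact functional:

```latex
% Topological charge (degree) of the field \phi : \mathbb{R}^2 \to S^2:
B = \frac{1}{4\pi}\int_{\mathbb{R}^2}
    \phi\cdot\left(\partial_1\phi \times \partial_2\phi\right)\, d^2x ,
% and an energy bound linear in B, saturated only by solutions of a
% first-order Bogomolny equation rather than the full second-order
% field equations:
E \;\ge\; C\,\lvert B\rvert .
```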

  2. The strength study of the rotating device driver indexing spatial mechanism

    NASA Astrophysics Data System (ADS)

    Zakharenkov, N. V.; Kvasov, I. N.

    2018-04-01

    Indexing spatial mechanisms are widely used in automatic machines. The maximum load-bearing capacity of such a mechanism can be determined from tests of both physical and numerical models. This paper presents a numerical model of a spatial cam mechanism that indexes a driven disk at constant angular cam velocity. The kinematic and geometric parameters of the mechanism and its finite element model are analyzed in the SolidWorks design environment. The initial data for the calculation were identified, and the missing parameters were determined from an analysis of the structure. The structural and kinematic analysis revealed possible causes of mechanism failure. Numerical results showing the performance of the structure under contact and bending stresses are presented.

  3. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
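The trajectory-under-drag part of the model can be sketched with a one-dimensional toy integration. All constants and the simple Euler stepping are assumptions for illustration, not the paper's formulation:

```python
import math

def fall_speed(mass=0.05, drag_c=0.01, g=9.81, v0=30.0, dt=1e-3, t_end=20.0):
    """Vertical speed of a launched fragment under gravity and quadratic air
    drag: m dv/dt = -m*g - c*v*|v| (upward positive). Forward-Euler integration;
    returns the speed at t_end."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (-g - (drag_c / mass) * v * abs(v))
    return v

# The fragment rises, falls back, and asymptotes to the terminal speed at
# which drag balances gravity, v_t = sqrt(m*g/c):
v_final = fall_speed()
v_terminal = math.sqrt(0.05 * 9.81 / 0.01)
print(v_final, -v_terminal)
```

The terminal-speed balance is one of the key parameter groupings such an analytical solution exposes: the blast envelope radius is controlled by how quickly drag erases the initial kinetic energy.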

  4. CMB constraints on the inflaton couplings and reheating temperature in α-attractor inflation

    NASA Astrophysics Data System (ADS)

    Drewes, Marco; Kang, Jin U.; Mun, Ui Ri

    2017-11-01

    We study reheating in α-attractor models of inflation in which the inflaton couples to other scalars or fermions. We show that the parameter space contains viable regions in which the inflaton couplings to radiation can be determined from the properties of CMB temperature fluctuations, in particular the spectral index. This may be the only way to measure these fundamental microphysical parameters, which shaped the universe by setting the initial temperature of the hot big bang and contain important information about the embedding of a given model of inflation into a more fundamental theory of physics. The method can be applied to other models of single field inflation.

  6. Information system of forest growth and productivity by site quality type and elements of forest

    NASA Astrophysics Data System (ADS)

    Khlyustov, V.

    2012-04-01

    Information system of forest growth and productivity by site quality type and elements of forest. V.K. Khlustov, Head of the Forestry Department of the Russian State Agrarian University named after K.A. Timiryazev, Doctor of Agricultural Sciences, Professor. The efficiency of forest management can be improved substantially by developing and introducing fundamentally new models of forest growth and productivity dynamics based on regionalized, site-specific parameters. An innovative information system was therefore developed. It describes the current state of forest stands and forecasts their parameters (growth, structure, and commercial and biological productivity) depending on site quality type. In contrast to existing yield tables, the new system has an environmental basis: the site quality type. The information system contains a set of multivariate statistical models and can operate at the level of individual trees or at the stand level. It provides graphical visualization as well as export of simulation results. The system can compute a detailed description of any forest stand from five initial indicators: site quality type, site index, stocking, composition, and tree age by elements of the forest. The model outputs the following parameters: average diameter and height, top height, number of trees, basal area, growing stock (total, and commercial with distribution by size, firewood, and residuals), and live biomass (stem, bark, branches, foliage). The system also provides the distribution of the above stand parameters by tree diameter classes. To predict future stand dynamics, the system additionally requires only the time horizon, after which the full set of forest parameters listed above is provided. The most conservative initial parameters (site quality type and site index) can be stored as georeferenced polygons. In that case the system needs only three dynamic initial parameters (stocking, composition, and age) to simulate forest parameters and their dynamics. The system can replace traditional processing of forest inventory field data, provide users with detailed information on the current state of the forest, and generate predictions. Implementing the proposed system in combination with high-resolution remote sensing can significantly increase the quality of forest inventory while reducing costs. The system is a contribution to site-oriented forest management. The system was registered in the Russian State Register of Computer Programs on 12.07.2011, No. 2011615418.
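The system's interface as described above (five initial indicators in, a set of stand parameters out) can be sketched as follows. All names and the placeholder height-age relationship are illustrative assumptions; the real system evaluates its own calibrated multivariate statistical models:

```python
import math

def describe_stand(site_quality_type, site_index, stocking, composition, age):
    """Hypothetical stand-level query. A real implementation would evaluate
    the system's regionalized statistical models; here an assumed
    Mitscherlich-type height-age curve stands in for them."""
    top_height = site_index * (1.0 - math.exp(-0.03 * age))  # illustrative only
    return {
        "site_quality_type": site_quality_type,
        "top_height_m": round(top_height, 1),
        "stocking": stocking,
        "composition": composition,
        "age_yr": age,
    }

# Query a hypothetical 80-year-old stand with site index 30:
print(describe_stand("C2", 30.0, 0.8, "10P", 80))
```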

  7. A simplified model of precipitation enhancement over a heterogeneous surface

    NASA Astrophysics Data System (ADS)

    Cioni, Guido; Hohenegger, Cathy

    2018-06-01

    Soil moisture heterogeneities influence the onset of convection and the subsequent evolution of precipitating systems through the triggering of mesoscale circulations. However, local evaporation also plays a role in determining precipitation amounts. Here we aim to disentangle the effects of advection and evaporation on precipitation over the course of a diurnal cycle by formulating a simple conceptual model. The derivation of the model is inspired by the results of simulations performed with a high-resolution (250 m) large-eddy simulation model over a surface with varying degrees of heterogeneity. A key element of the conceptual model is the representation of precipitation as a weighted sum of advection and evaporation, each weighted by its own efficiency. The model is then used to isolate the main parameters that control precipitation variations over a spatially drier patch. It is found that, surprisingly, these changes do not depend on soil moisture itself but purely on parameters that describe the initial atmospheric state. The likelihood of enhanced precipitation over drier soils is discussed on the basis of these parameters. Additional experiments are used to test the validity of the model.
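The conceptual model's central closure, precipitation as an efficiency-weighted sum of the two moisture sources, can be written down directly. The efficiency and flux values below are placeholders for illustration, not those diagnosed from the simulations:

```python
def precipitation(advection, evaporation, eff_adv, eff_evap):
    """P = eps_adv * A + eps_evap * E: each moisture source contributes in
    proportion to its own conversion efficiency."""
    return eff_adv * advection + eff_evap * evaporation

# Over a dry patch, reduced local evaporation can be more than offset by
# stronger mesoscale advection (all numbers illustrative, in mm/day):
p_wet = precipitation(advection=2.0, evaporation=4.0, eff_adv=0.4, eff_evap=0.5)
p_dry = precipitation(advection=5.0, evaporation=2.0, eff_adv=0.4, eff_evap=0.5)
print(p_wet, p_dry)
```

Whether the advective gain wins over the evaporative loss is set by the two efficiencies, which is why the atmospheric initial state rather than soil moisture itself controls the outcome.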

  8. Applying Dynamic Energy Budget (DEB) theory to simulate growth and bio-energetics of blue mussels under low seston conditions

    NASA Astrophysics Data System (ADS)

    Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.

    2009-08-01

    A Dynamic Energy Budget (DEB) model for simulating the growth and bioenergetics of blue mussels (Mytilus edulis) has been tested at three low-seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used to estimate biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half-saturation coefficient, and somatic maintenance rate coefficient) were estimated from the experimental data, and the estimated parameters were complemented with parameter values from the literature to establish a basic parameter set. Model simulations based on the basic parameter set and site-specific environmental forcing matched observations fairly well, but the model did not successfully simulate growth under the extremely low seston regimes in the laboratory experiments, in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameters and initial conditions. The results show the robust properties of the DEB model, as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate the limitations of Chl a as a food proxy for blue mussels and the limitations of the DEB model in simulating long-term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be extended to allow shrinkage of structural tissue in order to produce more realistic growth simulations during long periods of starvation.
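Under constant food, standard DEB theory reduces the growth of structural length to a von Bertalanffy curve, which gives a feel for what the calibrated parameters control (the shape coefficient converts structural to measured length; the values below are illustrative, not the paper's estimates):

```python
import math

def von_bertalanffy(t, L_inf, rB, L0):
    """L(t) = L_inf - (L_inf - L0) * exp(-rB * t): length relaxes toward the
    food-dependent ultimate length L_inf at the von Bertalanffy rate rB.
    In DEB theory, L_inf scales with the functional response f = X/(X+K),
    where K is the half-saturation coefficient being calibrated."""
    return L_inf - (L_inf - L0) * math.exp(-rB * t)

# Illustrative mussel shell growth over three years (cm; rB in 1/day):
for day in (0, 365, 730, 1095):
    print(day, round(von_bertalanffy(day, L_inf=6.0, rB=0.002, L0=1.0), 2))
```

This also hints at the reported failure mode: the standard equations only let length approach L_inf from below, so prolonged starvation (shrinking) falls outside what this formulation can represent.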

  9. Bianchi type string cosmological models in f(R,T) gravity

    NASA Astrophysics Data System (ADS)

    Sahoo, P. K.; Mishra, B.; Sahoo, Parbati; Pacif, S. K. J.

    2016-09-01

    In this work we have studied Bianchi-III and -VI0 cosmological models with a string fluid source in f(R,T) gravity (T. Harko et al., Phys. Rev. D 84, 024020 (2011)), where R is the Ricci scalar and T the trace of the stress-energy-momentum tensor, in the context of the late-time accelerating expansion of the universe suggested by present observations. The exact solutions of the field equations are obtained by using a time-varying deceleration parameter. The universe is anisotropic and free from an initial singularity. Our model initially shows acceleration for a certain period of time and subsequently decelerates. Several dynamical and physical behaviors of the model are also discussed in detail.

  10. Intervertebral disc response to cyclic loading--an animal model.

    PubMed

    Ekström, L; Kaigle, A; Hult, E; Holm, S; Rostedt, M; Hansson, T

    1996-01-01

    The viscoelastic response of a lumbar motion segment loaded in cyclic compression was studied in an in vivo porcine model (N = 7). Using surgical techniques, a miniaturized servohydraulic exciter was attached to the L2-L3 motion segment via pedicle fixation. A dynamic loading scheme was implemented, which consisted of one hour of sinusoidal vibration at 5 Hz, 50 N peak load, followed by one hour of restitution at zero load and one hour of sinusoidal vibration at 5 Hz, 100 N peak load. The force and displacement responses of the motion segment were sampled at 25 Hz. The experimental data were used to evaluate the parameters of two viscoelastic models: a standard linear solid model (three-parameter) and a linear Burgers fluid model (four-parameter). In this study, the creep behaviour under sinusoidal vibration at 5 Hz closely resembled the creep behaviour under static loading observed in previous studies. Expanding the three-parameter solid model into a four-parameter fluid model made it possible to separate out a progressive linear displacement term. This deformation was not fully recovered during restitution and is therefore an indication of a specific effect caused by the cyclic loading. High variability was observed in the parameters determined from the 50 N experimental data, particularly for the elastic modulus E1. However, at the 100 N load level, significant differences between the models were found. Both models accurately predicted the creep response over the first 800 s of 100 N loading, with mean absolute errors between calculated and experimental deformation of 1.26 and 0.97 per cent for the solid and fluid models, respectively. The linear Burgers fluid model, however, yielded superior predictions, particularly for the initial elastic response.
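Both creep fits have closed forms under constant load, which shows why the four-parameter Burgers model can separate out the unrecovered linear term. Symbols and the numerical values below are generic illustrations, not the paper's calibrated parameters:

```python
import math

def sls_creep(t, sigma, E1, E2, eta):
    """Standard linear solid (three-parameter): instantaneous elastic strain
    plus a single bounded exponential creep term."""
    return sigma / E1 + (sigma / E2) * (1.0 - math.exp(-E2 * t / eta))

def burgers_creep(t, sigma, E1, E2, eta1, eta2):
    """Burgers fluid (four-parameter): the same bounded terms plus
    sigma*t/eta1, a progressive linear displacement that is never
    recovered on unloading."""
    return (sigma / E1 + sigma * t / eta1
            + (sigma / E2) * (1.0 - math.exp(-E2 * t / eta2)))

# Late in the creep phase the two models diverge by exactly the
# viscous-flow term sigma*t/eta1 (illustrative units):
t = 800.0
print(burgers_creep(t, 100.0, 5e3, 2e3, 4e6, 1e5)
      - sls_creep(t, 100.0, 5e3, 2e3, 1e5))
```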

  11. A Mathematical Model of Neutral Lipid Content in terms of Initial Nitrogen Concentration and Validation in Coelastrum sp. HA-1 and Application in Chlorella sorokiniana

    PubMed Central

    Zhao, Yue; Liu, Zhiyong; Liu, Chenfeng; Hu, Zhipeng

    2017-01-01

    Microalgae are considered a potential major biomass feedstock for biofuel due to their high lipid content. However, no simple correlation equations as a function of initial nitrogen concentration have been developed to predict lipid production and optimize the lipid production process. In this study, a lipid accumulation model with simple parameters was developed, based on the assumption that protein synthesis shifts to lipid synthesis as a linear function of the nitrogen quota. The model predictions fitted the growth, lipid content, and nitrogen consumption of Coelastrum sp. HA-1 well under various initial nitrogen concentrations. The model was then applied successfully to Chlorella sorokiniana to predict the lipid content under different light intensities. The quantitative relationship between initial nitrogen concentration and final lipid content, together with a sensitivity analysis of the model, is also discussed. According to the model results, the conversion efficiency from protein synthesis to lipid synthesis increases steadily as nitrogen decreases during microalgal metabolism, while the carbohydrate content remains essentially unchanged in both HA-1 and C. sorokiniana. PMID:28194424
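The core assumption, allocation shifting linearly from protein to lipid synthesis as the internal nitrogen quota falls, can be sketched as follows. The functional form and quota bounds are illustrative of the assumption only, not the fitted model:

```python
def lipid_allocation(q, q_min=0.02, q_max=0.10):
    """Fraction of fixed carbon routed to lipid rather than protein synthesis,
    assumed to rise linearly as the nitrogen quota q (g N / g biomass) drops
    from q_max (N-replete) to q_min (N-starved); clamped to [0, 1]."""
    frac = (q_max - q) / (q_max - q_min)
    return min(1.0, max(0.0, frac))

print(lipid_allocation(0.10))  # N-replete: carbon goes to protein/growth
print(lipid_allocation(0.06))  # partway through nitrogen drawdown
print(lipid_allocation(0.02))  # N-starved: allocation fully shifted to lipid
```

Coupling this allocation rule to a standard quota-based growth model is what lets initial nitrogen concentration alone predict the final lipid content.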

  12. Axi-symmetric generalized thermoelastic diffusion problem with two-temperature and initial stress under fractional order heat conduction

    NASA Astrophysics Data System (ADS)

    Deswal, Sunita; Kalkal, Kapil Kumar; Sheoran, Sandeep Singh

    2016-09-01

    A mathematical model of fractional-order two-temperature generalized thermoelasticity with diffusion and initial stress is proposed to analyze the transient wave phenomenon in an infinite thermoelastic half-space. The governing equations are derived in cylindrical coordinates for a two-dimensional axi-symmetric problem. The analytical solution is obtained by employing the Laplace and Hankel transforms for the time and space variables, respectively. The solutions are investigated in detail for a time-dependent heat source. By using a numerical inversion method for the integral transforms, we obtain the solutions for the displacement, stress, temperature and diffusion fields in the physical domain. Computations are carried out for a copper material and displayed graphically. The effects of the fractional order parameter, two-temperature parameter, diffusion, initial stress and time on the different thermoelastic and diffusion fields are analyzed on the basis of the analytical and numerical results. Some special cases have also been deduced from the present investigation.
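The numerical Laplace inversion step in such transform-domain solutions is often done with the Gaver-Stehfest algorithm; a self-contained sketch is below. The abstract does not name the specific inversion method used, so this is one common choice, verified here against a known transform pair:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at
    time t > 0 (N must be even). Widely used to bring transform-domain
    solutions of thermoelastic and diffusion problems back to the
    physical (time) domain."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V *= (-1) ** (k + N // 2)
        total += V * F(k * ln2 / t)
    return total * ln2 / t

# Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t):
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ exp(-1)
```

The method only needs real-valued samples of F(s), which suits solutions assembled term by term in the transform domain, but it is sensitive to non-smooth f(t).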

  13. Vehicle Mobility or Firing Stability. A Delicate Balance,

    DTIC Science & Technology

    1980-06-01

    parameters with respect to a vehicle's cross country ride performance and to the firing stability of an initially stationary vehicle. It is...model described in the previous section, with the addition of the necessary roll related parameters, trunnion position data, and the firing reaction...mode of operation to a vehicle weapon system. Obviously the horizontal acceleration at the gunner's eyepiece also has an

  14. On the temperature independence of statistical model parameters for cleavage fracture in ferritic steels

    NASA Astrophysics Data System (ADS)

    Qian, Guian; Lei, Wei-Sheng; Niffenegger, M.; González-Albuixech, V. F.

    2018-04-01

    This work concerns the effect of temperature on the model parameters in local approaches (LAs) to cleavage fracture. According to a recently developed LA model, the physical consensus that plastic deformation is a prerequisite to cleavage fracture requires any LA model to treat initial yielding of a volume element as the threshold stress state for incurring cleavage fracture, in addition to the conventional practice of confining the fracture process zone within the plastic deformation zone. The physical consistency of the new LA model with the basic LA methodology, and the differences between the new LA model and other existing models, are interpreted. This new LA model is then adopted to investigate the temperature dependence of LA model parameters using circumferentially notched round tensile specimens. With published strength data as input, finite element (FE) calculations are conducted for elastic-perfectly plastic deformation and for realistic elastic-plastic hardening, respectively, to provide stress distributions for model calibration. The calibration results in temperature-independent model parameters. This leads to the establishment of a 'master curve' that synchronises the correlation between the nominal strength and the corresponding cleavage fracture probability at different temperatures. This 'master curve' behaviour is verified by strength data from three different steels, providing a new path to calculate cleavage fracture probability with significantly reduced FE effort.

  15. Initial sediment transport model of the mining-affected Aries River Basin, Romania

    USGS Publications Warehouse

    Friedel, Michael J.; Linard, Joshua I.

    2008-01-01

    The Romanian government is interested in understanding the effects of existing and future mining activities on the long-term dispersal, storage, and remobilization of sediment-associated metals. An initial Soil and Water Assessment Tool (SWAT) model was prepared using available data to evaluate a hypothetical failure of the Valea Sesei tailings dam at the Rosia Poieni mine in the Aries River basin. Using the available data, the initial Aries River basin SWAT model could not be manually calibrated to accurately reproduce the monthly streamflow values observed at the Turda gage station. The poor simulation of monthly streamflow is attributed to spatially limited soil and precipitation data, limited constraint information due to spatially and temporally limited streamflow measurements, and the inability to obtain optimal parameter values using a manual calibration process. Suggestions to improve the Aries River basin sediment transport model include accounting for heterogeneity in model input, a two-tier nonlinear calibration strategy, and analysis of uncertainty in predictions.

  16. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
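
    For instance, the composite scaled sensitivity for each parameter is the root-mean-square of the dimensionless scaled sensitivities over all observations. A minimal numpy sketch with a made-up Jacobian and parameter vector (not the DVRFS model):

```python
import numpy as np

def composite_scaled_sensitivity(J, theta, weights=None):
    """Composite scaled sensitivities (CSS).

    J       : (n_obs, n_par) Jacobian dy_i/dtheta_j of simulated
              equivalents of observations with respect to parameters
    theta   : (n_par,) parameter values used for scaling
    weights : (n_obs,) observation weights (default 1)

    Returns css (n_par,): RMS of the dimensionless scaled sensitivities,
    a measure of how much information the observations jointly carry
    about each parameter.
    """
    J = np.asarray(J, dtype=float)
    w = np.ones(J.shape[0]) if weights is None else np.asarray(weights, float)
    dss = J * theta[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]  # scaled sens.
    return np.sqrt(np.mean(dss**2, axis=0))

# Tiny example: two parameters, three observations; the second parameter
# barely influences any observation, so its CSS is much smaller.
J = np.array([[2.0, 0.01],
              [1.5, 0.02],
              [1.0, 0.00]])
css = composite_scaled_sensitivity(J, theta=np.array([1.0, 1.0]))
```

    A small CSS relative to other parameters signals that the observations provide little information about that parameter, which is exactly how the measure guides calibration and data-collection decisions.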

  17. Predicting chemical degradation during storage from two successive concentration ratios: Theoretical investigation.

    PubMed

    Peleg, Micha; Normand, Mark D

    2015-09-01

    When a vitamin's, pigment's, or other food component's chemical degradation follows known fixed-order kinetics, and its rate constant's temperature-dependence follows a two-parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and to predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature-dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs, too, were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
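
    The two-ratio idea can be sketched numerically with scipy. This is an independent illustration, not the authors' posted freeware; the first-order kinetics, the oscillating temperature profile, and all parameter values are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

T_REF = 25.0

def temp(t):                         # an assumed storage temperature history
    return 20.0 + 10.0 * np.sin(0.1 * t)

def ratio(t_end, k_ref, c, order=1):
    """Concentration ratio C(t_end)/C0 for fixed-order degradation with
    the exponential model k(T) = k_ref * exp(c * (T - T_REF))."""
    def rhs(t, y):
        return [-k_ref * np.exp(c * (temp(t) - T_REF)) * y[0] ** order]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0], rtol=1e-8, atol=1e-10)
    return float(sol.y[0, -1])

# Two "experimental" ratios, generated here with known parameters
k_true, c_true = 0.02, 0.08
r1, r2 = ratio(10.0, k_true, c_true), ratio(20.0, k_true, c_true)

# Solve the two simultaneous equations for the two parameters; as the
# abstract notes, close initial guesses are needed.
def residuals(p):
    return [ratio(10.0, p[0], p[1]) - r1, ratio(20.0, p[0], p[1]) - r2]

k_fit, c_fit = fsolve(residuals, x0=[0.03, 0.05])
```

    With the recovered parameters, `ratio` can then be evaluated against any other temperature history to predict its degradation curve, which is the practical payoff the abstract describes.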

  18. Understanding Climate Uncertainty with an Ocean Focus

    NASA Astrophysics Data System (ADS)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from several aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g., ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions about the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g., Rougier, 2007].
The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith L., 2002, What might we learn from climate forecasts? Proc. Nat’l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492 doi:10.1073/pnas.012580599.

  19. Modeling the Impact of Simulated Educational Interventions on the Use and Abuse of Pharmaceutical Opioids in the United States: A Report on Initial Efforts

    ERIC Educational Resources Information Center

    Wakeland, Wayne; Nielsen, Alexandra; Schmidt, Teresa D.; McCarty, Dennis; Webster, Lynn R.; Fitzgerald, John; Haddox, J. David

    2013-01-01

    Three educational interventions were simulated in a system dynamics model of the medical use, trafficking, and nonmedical use of pharmaceutical opioids. The study relied on secondary data obtained in the literature for the period of 1995 to 2008 as well as expert panel recommendations regarding model parameters and structure. The behavior of the…

  20. Baryon isocurvature scenario in inflationary cosmology - A particle physics model and its astrophysical implications

    NASA Technical Reports Server (NTRS)

    Yokoyama, Jun'ichi; Suto, Yasushi

    1991-01-01

    A phenomenological model to produce isocurvature baryon-number fluctuations is proposed in the framework of inflationary cosmology. The resulting spectrum of density fluctuation is very different from the conventional Harrison-Zel'dovich shape. The model, with the parameters satisfying several requirements from particle physics and cosmology, provides an appropriate initial condition for the minimal baryon isocurvature scenario of galaxy formation discussed by Peebles.

  1. Injection-Sensitive Mechanics of Hydraulic Fracture Interaction with Discontinuities

    NASA Astrophysics Data System (ADS)

    Chuprakov, D.; Melchaeva, O.; Prioul, R.

    2014-09-01

    We develop a new analytical model, called OpenT, that solves the elasticity problem of a hydraulic fracture (HF) in contact with a pre-existing natural fracture (NF) discontinuity, together with the condition for HF re-initiation at the NF. The model also accounts for fluid penetration into permeable NFs. For any angle of fracture intersection, the elastic problem of a blunted dislocation discontinuity is solved for the opening and sliding generated at the discontinuity. The sites and orientations of new tensile crack nucleation are determined from a mixed stress- and energy-based criterion. In the case of tilted fracture intersection, the finite offset of the new crack initiation point along the discontinuity is computed. We show that, aside from known controlling parameters such as stress contrast, the cohesive and frictional properties of the NFs, and the angle of intersection, fluid injection parameters such as the injection rate and fluid viscosity are first-order controls on crossing behavior. The model is compared to three independent laboratory experiments, to the analytical criteria of Blanton and extended Renshaw-Pollard, and to fully coupled numerical simulations. The computational efficiency of the OpenT model relative to numerical models makes it attractive for implementation in modern engineering tools that simulate hydraulic fracture propagation in naturally fractured environments.

  2. Modeling ultrashort electromagnetic pulses with a generalized Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Hofstrand, A.; Moloney, J. V.

    2018-03-01

    In this paper we derive a properly scaled model for the nonlinear propagation of intense, ultrashort, mid-infrared electromagnetic pulses (10-100 femtoseconds) through an arbitrary dispersive medium. The derivation results in a generalized Kadomtsev-Petviashvili (gKP) equation. In contrast to envelope-based models such as the nonlinear Schrödinger (NLS) equation, the gKP equation describes the dynamics of the field's actual carrier wave. It is important to resolve these dynamics when modeling ultrashort pulses. We proceed by giving an original proof of sufficient conditions on the initial pulse for a singularity to form in the field after a finite propagation distance. The model is then numerically simulated in 2D using a spectral solver, with initial data and physical parameters chosen to highlight our theoretical results.

  3. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient-reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored, and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV% ranging from approximately 20% to 60%, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled, at around 10% on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
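
    The underlying setup can be sketched as follows. The one-compartment superposition formula is standard; the dosing schedule, the error pattern, and all parameter values are illustrative assumptions, not the study's actual design:

```python
import numpy as np

def conc_1cmt(t, dose_times, dose=100.0, v=50.0, ke=0.1, ka=1.5):
    """One-compartment model with first-order absorption, superposed over
    multiple doses. ke = 0.1/h gives a terminal half-life of ~6.9 h,
    longer than the 6 h dosing interval, matching the scenario studied.
    All values are illustrative."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    c = np.zeros_like(t)
    for td in dose_times:
        dt = t - td
        m = dt > 0
        c[m] += (dose * ka / (v * (ka - ke))) * (
            np.exp(-ke * dt[m]) - np.exp(-ka * dt[m]))
    return c

true_times = np.arange(0.0, 72.0, 6.0)          # q6h dosing for three days
offsets = 0.5 * np.where(np.arange(true_times.size) % 2 == 0, 1.0, -1.0)
reported = true_times + offsets                 # stylized +/-30 min errors

t_obs = np.array([68.0, 70.0, 71.9])            # sparse sampling window
c_true = conc_1cmt(t_obs, true_times)
c_rep = conc_1cmt(t_obs, reported)
rel_err = np.abs(c_rep - c_true) / c_true       # prediction discrepancy
```

    In this sketch the stylized reporting errors perturb the predicted concentrations by only a few percent at these sampling times, which hints at why moderate dosing-time error can remain a bounded contributor to estimation inaccuracy.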

  4. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification, from the fields of pattern recognition and machine learning, to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
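
    The classification step can be sketched with scikit-learn on synthetic data. The failure rule, the parameter ranges, and the ensemble sizes below are made up for illustration; they are not the POP2 ensemble:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Stand-in for the UQ ensemble: 18 scaled ocean parameters per run, with
# a synthetic failure rule ("runs crash when two viscosity-like
# parameters are jointly small") giving a failure rate of ~9%, close to
# the 8.5% reported. The rule and all values are illustrative.
X = rng.uniform(0.0, 1.0, size=(2000, 18))
y = ((X[:, 0] < 0.3) & (X[:, 1] < 0.3)).astype(int)   # 1 = run failed

clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
clf.fit(X[:1500], y[:1500])                 # training ensemble

acc = clf.score(X[1500:], y[1500:])         # independent validation runs
p_fail = clf.predict_proba(X[1500:])[:, 1]  # predicted failure probability
```

    A fitted failure-probability surface like `p_fail` is what then enables steering future ensemble members away from crash-prone parameter combinations.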

  5. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty imperative to improve the reliability of modeling results. Uncertainty analysis must overcome difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in central Vietnam. The sensitivity of the parameters was analyzed, and the model was calibrated. The uncertainty analysis was conducted using four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol), and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R²), the Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS). The results showed the high performance of SUFI and PSO with P-factor>0.83, R-factor <0.56 and R²>0.91, NSE>0.89, and 0.18
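
    The P-factor/R-factor pair used to compare the algorithms can be computed from any simulation ensemble. A sketch with synthetic data; the band definitions follow common SUFI-2 usage, and the discharge series is made up:

```python
import numpy as np

def p_and_r_factor(ens, obs):
    """95PPU metrics in the SUFI-2 style: P-factor is the fraction of
    observations falling inside the 2.5-97.5 percentile band of the
    simulation ensemble; R-factor is the mean band width relative to the
    standard deviation of the observations."""
    lo = np.percentile(ens, 2.5, axis=0)
    hi = np.percentile(ens, 97.5, axis=0)
    p_factor = float(np.mean((obs >= lo) & (obs <= hi)))
    r_factor = float(np.mean(hi - lo) / np.std(obs))
    return p_factor, r_factor

# Synthetic "observed" discharge series and a 300-member ensemble
rng = np.random.default_rng(1)
obs = 5.0 + 2.0 * np.sin(np.linspace(0.0, 6.0, 200))
ens = obs[np.newaxis, :] + rng.normal(0.0, 0.5, size=(300, 200))
p, r = p_and_r_factor(ens, obs)
```

    A good uncertainty analysis seeks a P-factor near 1 (most observations bracketed) while keeping the R-factor small (a narrow band), which is the trade-off behind the thresholds reported above.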

  6. Predicting Ice Sheet and Climate Evolution at Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heimbach, Patrick

    2016-02-06

    A main research objective of PISCEES is the development of formal methods for quantifying uncertainties in ice sheet modeling. Uncertainties in simulating and projecting mass loss from the polar ice sheets arise primarily from initial conditions, surface and basal boundary conditions, and model parameters. In general terms, two main chains of uncertainty propagation may be identified: 1. inverse propagation of observation and/or prior uncertainties onto posterior control variable uncertainties; 2. forward propagation of prior or posterior control variable uncertainties onto those of target output quantities of interest (e.g., climate indices or ice sheet mass loss). A related goal is the development of computationally efficient methods for producing initial conditions for an ice sheet that are close to available present-day observations and essentially free of artificial model drift, which is required for the results to be useful for model projections (the "initialization problem"). To be of maximum value, such optimal initial states should be accompanied by useful uncertainty estimates that account for the different sources of uncertainty, as well as the degree to which the optimum state is constrained by available observations. The PISCEES proposal outlined two approaches for quantifying uncertainties. The first targets the full exploration of the uncertainty in model projections with sampling-based methods and a workflow managed by DAKOTA (the main delivery vehicle for software developed under QUEST). This is feasible for low-dimensional problems, e.g., those with a handful of global parameters to be inferred. This approach can benefit from derivative/adjoint information, but does not require it, which is why it is often referred to as "non-intrusive". The second approach makes heavy use of derivative information from model adjoints to quantify uncertainty in high dimensions (e.g., basal boundary conditions in ice sheet models).
    The use of local gradient or Hessian information (i.e., second derivatives of the cost function) requires additional code development and implementation, and is thus often referred to as an "intrusive" approach. Within PISCEES, MIT has been tasked with developing methods for derivative-based UQ, the "intrusive" approach discussed above. These methods rely on the availability of first-derivative (adjoint) and second-derivative (Hessian) code, developed through intrusive methods such as algorithmic differentiation (AD). While representing a significant burden in terms of code development, derivative-based UQ can cope with very high-dimensional uncertainty spaces. That is, unlike sampling methods (all variations of Monte Carlo), the computational burden is independent of the dimension of the uncertainty space. This is a significant advantage for spatially distributed uncertainty fields, such as three-dimensional initial conditions, three-dimensional parameter fields, or two-dimensional surface and basal boundary conditions. Importantly, uncertainty fields for ice sheet models generally fall into this category.

  7. Clinical and Genetic Determinants of Warfarin Pharmacokinetics and Pharmacodynamics during Treatment Initiation

    PubMed Central

    Gong, Inna Y.; Schwarz, Ute I.; Crown, Natalie; Dresser, George K.; Lazo-Langner, Alejandro; Zou, GuangYong; Roden, Dan M.; Stein, C. Michael; Rodger, Marc; Wells, Philip S.; Kim, Richard B.; Tirona, Rommel G.

    2011-01-01

    Variable warfarin response during treatment initiation poses a significant challenge to providing optimal anticoagulation therapy. We investigated the determinants of initial warfarin response in a cohort of 167 patients. During the first nine days of treatment with pharmacogenetics-guided dosing, S-warfarin plasma levels and international normalized ratio were obtained to serve as inputs to a pharmacokinetic-pharmacodynamic (PK-PD) model. Individual PK (S-warfarin clearance) and PD (Imax) parameter values were estimated. Regression analysis demonstrated that CYP2C9 genotype, kidney function, and gender were independent determinants of S-warfarin clearance. The values for Imax were dependent on VKORC1 and CYP4F2 genotypes, vitamin K status (as measured by plasma concentrations of proteins induced by vitamin K absence, PIVKA-II) and weight. Importantly, indication for warfarin was a major independent determinant of Imax during initiation, where PD sensitivity was greater in atrial fibrillation than venous thromboembolism. To demonstrate the utility of the global PK-PD model, we compared the predicted initial anticoagulation responses with previously established warfarin dosing algorithms. These insights and modeling approaches have application to personalized warfarin therapy. PMID:22114699
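
    PK-PD couplings of this kind are commonly handled with an indirect-response structure, in which drug concentration inhibits clotting-factor synthesis via an Imax function. The sketch below is a generic stand-in under assumed parameter values, not the paper's fitted warfarin model (in which INR would be derived from factor levels):

```python
import numpy as np

def indirect_response(c_of_t, times, imax=0.9, ic50=0.5, kin=0.1, kout=0.1):
    """Indirect-response PD sketch: drug concentration inhibits
    clotting-factor synthesis through an Imax function, and the factor
    level f relaxes from its baseline of 1. Euler integration; all
    parameter values are illustrative assumptions."""
    f, out, t_prev = 1.0, [], times[0]
    for t in times:
        dt = t - t_prev
        c = c_of_t(t)
        inhib = 1.0 - imax * c / (ic50 + c)   # fractional synthesis rate
        f += (kin * inhib - kout * f) * dt
        out.append(f)
        t_prev = t
    return np.array(out)

times = np.linspace(0.0, 216.0, 2001)            # first nine days, in hours
s_warfarin = lambda t: 1.0 - np.exp(-0.03 * t)   # toy accumulation profile
f = indirect_response(s_warfarin, times)         # clotting-factor level
```

    In this structure, a larger Imax (as estimated for atrial fibrillation patients in the abstract) drives the factor level to a lower plateau for the same concentration profile, i.e., greater PD sensitivity.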

  8. Intake flow modeling in a four stroke diesel using KIVA3

    NASA Technical Reports Server (NTRS)

    Hessel, R. P.; Rutland, C. J.

    1993-01-01

    Intake flow for a dual intake valved diesel engine is modeled using moving valves and realistic geometries. The objectives are to obtain accurate initial conditions for combustion calculations and to provide a tool for studying intake processes. Global simulation parameters are compared with experimental results and show good agreement. The intake process shows a 30 percent difference in mass flows and average swirl in opposite directions across the two intake valves. The effect of the intake process on the flow field at the end of compression is examined. Modeling the intake flow results in swirl and turbulence characteristics that are quite different from those obtained by conventional methods in which compression stroke initial conditions are assumed.

  9. Consequence of reputation in the Sznajd consensus model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Forgerini, Fabricio L.

    2010-07-01

    In this work we study a modified version of the Sznajd sociophysics model. In particular, we introduce reputation, a mechanism that limits the agents' capacity of persuasion. The reputation is introduced as a time-dependent score, and its introduction avoids dictatorship (all spins parallel) for a wide range of parameters. The relaxation time follows a log-normal-like distribution. In addition, we show that the usual phase transition also occurs, as in the standard model, and that it depends on the initial concentration of individuals following an opinion, occurring at an initial density of up spins greater than 1/2. The transition point is determined by means of a finite-size scaling analysis.
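
    A minimal 1D simulation conveys the mechanism. The update rule below is a simplified illustration of reputation-limited persuasion, not the paper's exact rule, and all numbers are assumptions:

```python
import numpy as np

def sznajd_reputation(n=100, up_frac=0.6, steps=100000, seed=3):
    """1D Sznajd dynamics with a reputation score limiting persuasion:
    an agreeing pair convinces a disagreeing neighbor only if the pair's
    mean reputation exceeds the neighbor's, and each success raises the
    pair's reputation (making the score time-dependent). Simplified
    illustration of the mechanism, not the paper's exact rule."""
    rng = np.random.default_rng(seed)
    s = np.where(rng.random(n) < up_frac, 1, -1)   # initial opinions
    rep = rng.random(n)                            # initial reputations
    for _ in range(steps):
        i = int(rng.integers(1, n - 2))
        if s[i] == s[i + 1]:                       # agreeing pair
            pair_rep = 0.5 * (rep[i] + rep[i + 1])
            for j in (i - 1, i + 2):
                if s[j] != s[i] and pair_rep > rep[j]:
                    s[j] = s[i]                    # neighbor persuaded
                    rep[i] += 0.1
                    rep[i + 1] += 0.1
    return s.mean()

m = sznajd_reputation()   # magnetization; |m| = 1 would mean dictatorship
```

    Because low-reputation pairs cannot convert high-reputation neighbors, pockets of the minority opinion can survive, which is the qualitative route by which reputation avoids dictatorship.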

  10. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    NASA Astrophysics Data System (ADS)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during wave impact were first obtained from statistical analysis of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the ranges of and relationships among the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions is presented comprehensively; the Weibull parameter values for the wave-slamming pressure peaks differed among the test models and were found to decrease with increasing stiffness of the elastic support. A damage criterion for the structural model under wave impact is discussed initially: the model was destroyed when the average slamming duration exceeded a certain value during the wave impact. The conclusions of the experimental study are then described.
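
    The 36.79% figure follows directly from the three-parameter Weibull exceedance function whenever D + L = 1 and peaks are normalized by the sample average. A minimal check (the particular D, L, and k values are arbitrary, subject only to D + L = 1):

```python
import math

def weibull_exceedance(x, shape_k, scale_l, loc_d):
    """P(X > x) for the three-parameter Weibull distribution."""
    if x <= loc_d:
        return 1.0
    return math.exp(-(((x - loc_d) / scale_l) ** shape_k))

# With D + L = 1, a peak equal to the sample average (x = 1) has
# exceedance exp(-((1 - D)/L)^k) = exp(-1) ~ 36.79% for ANY shape k.
ps = [weibull_exceedance(1.0, shape_k=k, scale_l=0.7, loc_d=0.3)
      for k in (0.8, 1.5, 2.4)]
```

    The shape parameter k drops out at x = 1 because (1 - D)/L = 1, which is why the abstract can quote a single exceedance value despite different test models having different parameter values.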

  11. Dynamics of a quasiparticle in the α-T3 model: Role of pseudospin polarization and transverse magnetic field on zitterbewegung.

    PubMed

    Biswas, Tutul; Ghosh, Tarun Kanti

    2018-01-09

    We consider the α-T3 model, which provides a smooth crossover between the honeycomb lattice with pseudospin 1/2 and the dice lattice with pseudospin 1 through the variation of a parameter α. We study the dynamics of a wave packet representing a quasiparticle in the α-T3 model with zero and finite transverse magnetic field. For zero field, it is shown that the wave packet undergoes a transient zitterbewegung (ZB). Various features of ZB depending on the initial pseudospin polarization of the wave packet are revealed. For an intermediate value of the parameter α, i.e., for 0 < α < 1, the resulting ZB consists of two distinct frequencies when the wave packet is initially located on a rim site. However, the wave packet exhibits single-frequency ZB for α = 0 and α = 1. It is also found that the ZB frequency for α = 1 is exactly half of that for the α = 0 case. On the other hand, when the initial wave packet is on a hub site, the ZB consists of only one frequency for all values of α. Using the stationary phase approximation, we find an analytical expression for the velocity average, which can be used to extract the timescale over which the transient ZB persists. In contrast, the wave packet undergoes permanent ZB in the presence of a transverse magnetic field. Due to the presence of a large number of Landau energy levels, the oscillations in ZB become much more complicated. The oscillation pattern depends significantly on the initial pseudospin polarization of the wave packet. Furthermore, the number of frequency components involved in ZB depends on the parameter α. © 2018 IOP Publishing Ltd.

  12. Determination of enzyme thermal parameters for rational enzyme engineering and environmental/evolutionary studies.

    PubMed

    Lee, Charles K; Monk, Colin R; Daniel, Roy M

    2013-01-01

    Of the two independent processes by which enzymes lose activity with increasing temperature, irreversible thermal inactivation and rapid reversible equilibration with an inactive form, the latter is only describable by the Equilibrium Model. Any investigation of the effect of temperature upon enzymes, a mandatory step in rational enzyme engineering and study of enzyme temperature adaptation, thus requires determining the enzymes' thermodynamic parameters as defined by the Equilibrium Model. The necessary data for this procedure can be collected by carrying out multiple isothermal enzyme assays at 3-5°C intervals over a suitable temperature range. If the collected data meet requirements for Vmax determination (i.e., if the enzyme kinetics are "ideal"), then the enzyme's Equilibrium Model parameters (ΔHeq, Teq, ΔG‡cat, and ΔG‡inact) can be determined using a freely available iterative model-fitting software package designed for this purpose. Although "ideal" enzyme reactions are required for determination of all four Equilibrium Model parameters, ΔHeq, Teq, and ΔG‡cat can be determined from initial (zero-time) rates for most nonideal enzyme reactions, with substrate saturation being the only requirement.
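
    The zero-time rate in the Equilibrium Model combines a transition-state-theory kcat with the active fraction set by the active/inactive equilibrium, Keq = exp((ΔHeq/R)(1/Teq - 1/T)). A minimal sketch with illustrative parameter values (not fitted to any real enzyme):

```python
import math

R = 8.314            # gas constant, J/(mol K)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s

def v0(T, dG_cat=80e3, dH_eq=150e3, T_eq=320.0, e_tot=1e-6):
    """Zero-time rate in the Equilibrium Model: transition-state-theory
    kcat times the active fraction 1/(1 + Keq), where
    Keq = [Einact]/[Eact] equals 1 at T = Teq. Parameter values are
    illustrative, not fitted to an actual enzyme."""
    k_cat = (KB * T / H) * math.exp(-dG_cat / (R * T))
    K_eq = math.exp((dH_eq / R) * (1.0 / T_eq - 1.0 / T))
    return k_cat * e_tot / (1.0 + K_eq)

# Initial rates rise with temperature, then collapse above Teq as the
# inactive form dominates, giving the characteristic activity optimum.
rates = {T: v0(T) for T in (300.0, 310.0, 320.0, 330.0, 340.0)}
```

    Because this expression uses only zero-time rates, it is consistent with the abstract's point that ΔHeq, Teq, and ΔG‡cat can be estimated even for nonideal reactions, while ΔG‡inact additionally requires following the time course of irreversible inactivation.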

  13. Resuspension and redistribution of radionuclides during grassland and forest fires in the Chernobyl exclusion zone: part II. Modeling the transport process.

    PubMed

    Yoschenko, V I; Kashparov, V A; Levchuk, S E; Glukhovskiy, A S; Khomutinin, Yu V; Protsak, V P; Lundin, S M; Tschiersch, J

    2006-01-01

    To predict parameters of radionuclide resuspension, transport, and deposition during forest and grassland fires, several model modules were developed and adapted. Experimental data from controlled burning of prepared experimental plots in the Chernobyl exclusion zone were used to evaluate the prognostic power of the models. The predicted trajectories and elevations of the plume match those visually observed during the fire experiments at the grassland and forest sites. Experimentally determined parameters could be used successfully to calculate the initial plume parameters, which provides the tools for describing various fire scenarios and enables prognostic calculations. In summary, the model predicts a release of a few parts per thousand of the radionuclide inventory of the fuel material by grassland fires. During a forest fire, up to 4% of the (137)Cs and (90)Sr and up to 1% of the Pu isotopes can be released from the forest litter according to the model calculations. However, these results depend on the parameters of the fire events. In general, the modeling results are in good accordance with the experimental data. Therefore, the considered models were successfully validated and can be recommended for assessing the resuspension and redistribution of radionuclides during grassland and forest fires in contaminated territories.

  14. A kinematic hardening constitutive model for the uniaxial cyclic stress-strain response of magnesium sheet alloys at room temperature

    NASA Astrophysics Data System (ADS)

    He, Zhitao; Chen, Wufan; Wang, Fenghua; Feng, Miaolin

    2017-11-01

    A kinematic hardening constitutive model is presented, in which a modified form of von Mises yield function is adopted, and the initial asymmetric tension and compression yield stresses of magnesium (Mg) alloys at room temperature (RT) are considered. The hardening behavior was classified into slip, twinning, and untwinning deformation modes, and these were described by two forms of back stress to capture the mechanical response of Mg sheet alloys under cyclic loading tests at RT. Experimental values were obtained for AZ31B-O and AZ31B sheet alloys under both tension-compression-tension (T-C-T) and compression-tension (C-T) loadings to calibrate the parameters of back stresses in the proposed model. The predicted parameters of back stresses in the twinning and untwinning modes were expressed as a cubic polynomial. The predicted curves based on these parameters showed good agreement with the tests.
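
    The two back-stress forms in the paper are specific to the Mg slip/twinning/untwinning modes; as a generic point of reference, a single Armstrong-Frederick back stress under uniaxial cycling can be sketched as follows. The evolution rule and the constants are illustrative textbook choices, not the model proposed above:

```python
def af_backstress(strain_increments, C=20000.0, gamma=200.0):
    """Uniaxial Armstrong-Frederick back-stress evolution,
        d(alpha) = C*d(ep) - gamma*alpha*|d(ep)|,
    integrated with an explicit update. C and gamma (MPa and
    dimensionless) are illustrative; saturation level is C/gamma."""
    alpha, history = 0.0, []
    for dep in strain_increments:
        alpha += C * dep - gamma * alpha * abs(dep)
        history.append(alpha)
    return history

# One tension-compression-tension cycle of plastic strain increments.
path = [0.0005] * 20 + [-0.0005] * 40 + [0.0005] * 20
hist = af_backstress(path)
```

    The back stress saturates toward C/gamma on loading and reverses sign on unloading, which is the qualitative behavior the paper's polynomial calibration refines for twinning and untwinning.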

  15. Properties of radar backscatter of forests measured with a multifrequency polarimetric SAR

    NASA Technical Reports Server (NTRS)

    Amar, F.; Karam, M. A.; Fung, A. K.; De Grandi, G.; Lavalle, C.; Sieber, A.

    1992-01-01

    Fully polarimetric airborne synthetic aperture radar (AIRSAR) data, collected in Germany during the MAC Europe campaign, are calibrated using software packages developed at the Joint Research Center (JRC) in Italy for both L- and C-bands. During the period of the overflight dates, extensive ground truth was collected in order to describe the physical and statistical parameters of the canopy, the understory, and the soil. These parameters are compiled and converted into electromagnetic parameters suitable for input to the new polarimetric three-layer canopy model developed at the Wave Scattering Research Center (WSRC) at the University of Texas at Arlington. Comparisons between the theoretical predictions from the model and the calibrated data are carried out. Initial results reveal that the trend of the average phase difference can be predicted by the model, and that the backscattering ratio σhh/σvv is sensitive to the distribution of the primary branches.

  16. The importance of the initial water depth in basin modelling: the example of the Venetian foredeep (NE Italy)

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Mancin, N.

    2003-04-01

    The Tertiary evolution of the Venetian area (NE Italy) led to the superposition of three overlapping foreland systems, different in both age and polarity, as a consequence of the main orogenic phases of the Dinarides, to the north-east, the Southern Alps, to the north, and the Apennines, to the south-west, respectively. The aim of this work is to quantify the flexural effect produced by the main Southalpine orogenic phases (Serravallian-Early Pliocene) in the Venetian foredeep, and particularly to evaluate the importance of a constrained initial water depth for correctly evaluating the contribution of surface loads to flexure. To this end, 2-D flexural modelling has been applied along a N-S trending industrial seismic line (courtesy of ENI-AGIP) extending from the Northern Alps to the Adriatic Sea. Once interpreted and depth migrated, the geometries of the sedimentary bodies have been studied and the base of the foredeep wedge, Serravallian-Tortonian in age and related to the Southern Alps load, has been recognized. Water depth variations during Miocene time have been constrained at three wells located along this section. According to bathymetric reconstructions, based on the quantitative study of foraminiferal assemblages, an overall neritic environment (0-200 m), developed during Langhian time, was followed by a fast deepening to bathyal conditions (200-600 m) to the north, toward the Southern Alps, during Serravallian-Tortonian time, whereas neritic conditions still persisted to the south. According to these constraints, a best-fit model was obtained for an effective elastic thickness of about 20 km and a belt topography equal to the present-day one. The extremely good fit of the model to reality highlights that, in the studied region, flexure related to the Southern Alps is fully accounted for by surface loads (topographic load and initial water depth), and no subloads are required to improve the fit, unlike in a previously proposed model.
Such a difference can be due to both the better constraining of the bathymetric parameter and the improvement of geophysical and geological data. A test was also performed to evaluate the actual influence of the bathymetric parameter on the flexural response of the crust by modelling conditions with maximum, minimum and zero initial water depth, respectively. Results show that this parameter can contribute up to 50% of the total flexure in the studied region.
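
    As a point of reference for the modelling described above, the classical analytic solution for a thin elastic plate flexed by a line load can be sketched as follows. This is a strong simplification of a real 2-D foredeep model; all values, including the load and densities, are illustrative, with only the ~20 km effective elastic thickness taken from the abstract:

```python
import math

def flexural_deflection(x, load, Te, E=70e9, nu=0.25,
                        rho_m=3300.0, rho_fill=2300.0, g=9.81):
    """Deflection w(x), in m, of a thin elastic plate under a line load
    (classic 2-D flexure solution):
      D     = E*Te^3 / (12*(1 - nu^2))                 flexural rigidity
      alpha = (4*D / ((rho_m - rho_fill)*g))**0.25     flexural parameter
      w(x)  = load*alpha^3/(8*D) * exp(-|x|/alpha)
              * (cos(|x|/alpha) + sin(|x|/alpha))
    """
    D = E * Te**3 / (12.0 * (1.0 - nu**2))
    alpha = (4.0 * D / ((rho_m - rho_fill) * g)) ** 0.25
    xa = abs(x) / alpha
    return load * alpha**3 / (8.0 * D) * math.exp(-xa) * (math.cos(xa) + math.sin(xa))

# Te ~ 20 km as in the best-fit model above; the line load is invented.
w0 = flexural_deflection(0.0, load=1e12, Te=20e3)       # beneath the load
w_bulge = flexural_deflection(200e3, load=1e12, Te=20e3)  # forebulge region
```

    The sign change of w(x) away from the load is the forebulge; the initial water depth enters a real model through the density of the material filling the deflection.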

  17. Extension of the PC version of VEPFIT with input and output routines running under Windows

    NASA Astrophysics Data System (ADS)

    Schut, H.; van Veen, A.

    1995-01-01

    The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.

  18. An adaptive tracking observer for failure-detection systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1982-01-01

    The design problem of adaptive observers applied to linear multi-input, multi-output systems with constant or time-varying parameters is considered. It is shown that, in order to keep the observer's (or Kalman filter's) false-alarm rate (FAR) under a certain specified value, it is necessary to have acceptably close matching between the observer (or KF) model and the system parameters. An adaptive observer algorithm is introduced in order to maintain the desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator or sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both deterministic and stochastic cases.
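
    The parameter-identification side of the problem can be illustrated in a toy scalar setting. The sketch below uses a least-squares estimate of an unknown system parameter from input-output data, then runs a simple output-error observer built on the identified model; this is a stand-in illustration, not the paper's adaptive law, and all values are hypothetical:

```python
def identify_and_observe(a_true=0.8, b=1.0, L=0.4, steps=60):
    """Toy system x[k+1] = a*x[k] + b*u[k] with unknown a and measured
    state. Estimate a by least squares, then track the state with an
    observer x_hat[k+1] = a_hat*x_hat[k] + b*u[k] + L*(y[k] - x_hat[k])."""
    # Generate input-output data from the "true" system.
    xs, us = [1.0], []
    for k in range(steps):
        u = 1.0 if (k // 10) % 2 == 0 else -1.0   # persistently exciting input
        us.append(u)
        xs.append(a_true * xs[-1] + b * u)
    # Least-squares estimate of a (exact here, since the data are noiseless).
    num = sum(xs[k] * (xs[k + 1] - b * us[k]) for k in range(steps))
    den = sum(xs[k] ** 2 for k in range(steps))
    a_hat = num / den
    # Observer with the identified parameter and output-error correction.
    x_hat = 0.0
    for k in range(steps):
        x_hat = a_hat * x_hat + b * us[k] + L * (xs[k] - x_hat)
    return a_hat, x_hat, xs[-1]

a_hat, x_hat, x_true = identify_and_observe()
```

    With a matched parameter, the observer error contracts by the factor (a - L) each step, which is the matching requirement the abstract describes for keeping the false-alarm rate low.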

  19. Determining the Stellar Initial Mass by Means of the 17O/18O Ratio on the AGB

    NASA Astrophysics Data System (ADS)

    De Nutte, Rutger; Decin, Leen; Olofsson, Hans; de Koter, Alex; Karakas, Amanda; Lombaert, Robin; Milam, Stefanie; Ramstedt, Sofia; Stancliffe, Richard; Homan, Ward; Van de Sande, Marie

    2016-07-01

    This poster presents newly obtained circumstellar 12C17O and 12C18O line observations, from which the line intensities are related directly to the 17O/18O surface abundance ratio for a sample of nine AGB stars covering the three spectral types. These ratios are evaluated in relation to a fundamental stellar evolution parameter: the stellar initial mass. The 17O/18O ratio is shown to function as an effective method of determining the initial stellar mass. Through comparison with predictions by stellar evolution models, accurate initial mass estimates are calculated for all nine sources.

  20. An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.

    PubMed

    Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong

    2017-06-01

    Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
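
    A minimal sketch of the kind of pharmacokinetic/pharmacodynamic link computation involved is shown below, assuming a one-compartment oral model with first-order absorption and a direct Emax link. The model structure and every parameter value here are illustrative, not taken from the R package or the clinical example:

```python
import math

def conc_oral_1cpt(t, dose, ka, ke, V, F=1.0):
    """One-compartment oral model with first-order absorption (ka != ke):
       C(t) = F*dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))"""
    return F * dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def emax_effect(c, emax, ec50):
    """Direct-link Emax pharmacodynamic model."""
    return emax * c / (ec50 + c)

# Illustrative parameter values (h^-1, L, mg).
ka, ke, V, dose = 1.2, 0.2, 40.0, 500.0
tmax = math.log(ka / ke) / (ka - ke)     # time of peak concentration
cmax = conc_oral_1cpt(tmax, dose, ka, ke, V)
effect_at_peak = emax_effect(cmax, emax=100.0, ec50=5.0)
```

    Fitting would adjust (ka, ke, V, Emax, EC50) to observed concentration-effect data; the closed-form tmax is a convenient check on any fitted profile.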

  1. The shadow map: a general contact definition for capturing the dynamics of biomolecular folding and function.

    PubMed

    Noel, Jeffrey K; Whitford, Paul C; Onuchic, José N

    2012-07-26

    Structure-based models (SBMs) are simplified models of the biomolecular dynamics that arise from funneled energy landscapes. We recently introduced an all-atom SBM that explicitly represents the atomic geometry of a biomolecule. While this initial study showed the robustness of the all-atom SBM Hamiltonian to changes in many of the energetic parameters, an important aspect, which has not been explored previously, is the definition of native interactions. In this study, we propose a general definition for generating atomically grained contact maps called "Shadow". The Shadow algorithm initially considers all atoms within a cutoff distance and then, controlled by a screening parameter, discards the occluded contacts. We show that this choice of contact map is not only well behaved for protein folding, since it produces consistently cooperative folding behavior in SBMs, but also desirable for exploring the dynamics of macromolecular assemblies, since it distributes energy similarly between RNAs and proteins despite their disparate internal packing. All-atom structure-based models employing Shadow contact maps provide a general framework for exploring the geometrical features of biomolecules, especially the connections between folding and function.
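
    The screening idea can be made concrete with a toy geometric version of the algorithm: keep atom pairs within a cutoff, then discard a pair when a third atom sits within a screening radius of the line of sight between them. This is a simplified sketch, not the published implementation (which casts shadows using an atom radius from a light source at each atom), and the coordinates are invented:

```python
import math

def dist_point_segment(p, a, b):
    """Distance from 3-D point p to the segment from a to b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    t = sum(ap[i] * ab[i] for i in range(3)) / sum(c * c for c in ab)
    t = max(0.0, min(1.0, t))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def shadow_contacts(coords, cutoff=6.0, screen=1.0):
    """Toy shadowed contact map: pairs within `cutoff` are contacts unless
    a third atom lies within `screen` of the line of sight between them."""
    contacts, n = set(), len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) > cutoff:
                continue
            occluded = any(
                k not in (i, j)
                and dist_point_segment(coords[k], coords[i], coords[j]) < screen
                for k in range(n))
            if not occluded:
                contacts.add((i, j))
    return contacts

# Three collinear atoms: the middle one shadows the outer pair.
atoms = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0), (5.0, 0.0, 0.0)]
cmap = shadow_contacts(atoms)
```

    A plain cutoff map would include the outer pair as a contact; the screening step removes exactly such occluded pairs, which is what keeps the energy distribution comparable across differently packed molecules.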

  2. To Spray or Not to Spray: A Decision Analysis of Coffee Berry Borer in Hawaii

    PubMed Central

    2017-01-01

    Integrated pest management strategies were adopted to combat the coffee berry borer (CBB) after its arrival in Hawaii in 2010. A decision tree framework is used to model the CBB integrated pest management recommendations, for potential use by growers and to assist in developing and evaluating management strategies and policies. The model focuses on pesticide spraying (spray/no spray) as the most significant pest management decision within each period over the entire crop season. The main result from the analysis suggests that the most important factor in maximizing net benefit is ensuring a low initial infestation level. A second result examines the impact of a subsidy for the cost of pesticides and shows that a typical farmer receives a positive net benefit of $947.17. Sensitivity analysis of the parameters checks the robustness of the model and further confirms the importance of a low initial infestation level vis-a-vis any level of subsidy. The use of a decision tree is shown to be an effective method for understanding integrated pest management strategies and solutions. PMID:29065464
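
    The spray/no-spray structure can be illustrated with a tiny season model: enumerate all spray plans, propagate infestation through the periods, and keep the plan with the highest net benefit. Every number below is invented for illustration, not the paper's calibration; the point is only the shape of the decision problem:

```python
from itertools import product

def expected_net_benefit(initial_infestation, spray_plan,
                         revenue=10000.0, spray_cost=300.0,
                         growth=1.8, kill=0.6):
    """Toy decision-tree evaluation: the infestation fraction grows each
    period, spraying removes a fraction `kill` at cost `spray_cost`, and
    yield loss at harvest is proportional to the final infestation."""
    p, cost = initial_infestation, 0.0
    for spray in spray_plan:
        p = min(1.0, p * growth)        # within-season CBB growth
        if spray:
            p *= (1.0 - kill)
            cost += spray_cost
    return revenue * (1.0 - p) - cost

# Enumerate all three-period spray plans and keep the best one.
plans = list(product((0, 1), repeat=3))
best = max(((plan, expected_net_benefit(0.02, plan)) for plan in plans),
           key=lambda t: t[1])
```

    Even in this toy version, lowering the initial infestation raises the no-spray benefit more than any spraying plan can recover, mirroring the paper's main result.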

  3. Application of Acoustic Emission on the Characterization of Fracture in Textile Reinforced Cement Laminates

    PubMed Central

    Blom, J.; Wastiels, J.; Aggelis, D. G.

    2014-01-01

    This work studies the acoustic emission (AE) behavior of textile reinforced cementitious (TRC) composites under flexural loading. The main objective is to link specific AE parameters to the fracture mechanisms that are successively dominating the failure of this laminated material. At relatively low load, fracture is initiated by matrix cracking while, at the moment of peak load and thereafter, the fiber pull-out stage is reached. Stress modeling of the material under bending reveals that initiation of shear phenomena can also be activated depending on the shape (curvature) of the plate specimens. Preliminary results show that AE waveform parameters like frequency and energy are changing during loading, following the shift of fracturing mechanisms. Additionally, the AE behavior of specimens with different curvature is very indicative of the stress mode confirming the results of modeling. Moreover, AE source location shows the extent of the fracture process zone and its development in relation to the load. It is seen that AE monitoring yields valuable real time information on the fracture of the material and at the same time supplies valuable feedback to the stress modeling. PMID:24605050

  4. Application of acoustic emission on the characterization of fracture in textile reinforced cement laminates.

    PubMed

    Blom, J; Wastiels, J; Aggelis, D G

    2014-01-01

    This work studies the acoustic emission (AE) behavior of textile reinforced cementitious (TRC) composites under flexural loading. The main objective is to link specific AE parameters to the fracture mechanisms that are successively dominating the failure of this laminated material. At relatively low load, fracture is initiated by matrix cracking while, at the moment of peak load and thereafter, the fiber pull-out stage is reached. Stress modeling of the material under bending reveals that initiation of shear phenomena can also be activated depending on the shape (curvature) of the plate specimens. Preliminary results show that AE waveform parameters like frequency and energy are changing during loading, following the shift of fracturing mechanisms. Additionally, the AE behavior of specimens with different curvature is very indicative of the stress mode confirming the results of modeling. Moreover, AE source location shows the extent of the fracture process zone and its development in relation to the load. It is seen that AE monitoring yields valuable real time information on the fracture of the material and at the same time supplies valuable feedback to the stress modeling.

  5. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
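
    The iterative matching loop can be sketched with a one-parameter toy problem: a cheap stand-in forward model generates a deposit profile, and a derivative-free 1-D search (golden-section here, standing in for the surrogate-based optimizer) recovers the parameter used in a reference run. Both the forward model and the parameter are invented for illustration:

```python
import math

def forward_deposit(u0, xs):
    """Toy forward model: deposit thickness from a current with initial
    velocity u0, decaying downstream (stand-in for a DNS of the lock
    exchange)."""
    return [u0 * math.exp(-x / (2.0 + u0)) for x in xs]

def misfit(u0, xs, obs):
    """Sum-of-squares mismatch against 'well-log' reference data."""
    return sum((m - o) ** 2 for m, o in zip(forward_deposit(u0, xs), obs))

def golden_section(f, lo, hi, tol=1e-5):
    """Derivative-free 1-D minimization of a unimodal function."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

xs = [0.5 * i for i in range(10)]
obs = forward_deposit(1.3, xs)           # "reference" run with u0 = 1.3
u0_rec = golden_section(lambda u: misfit(u, xs, obs), 0.1, 3.0)
```

    In the real problem each misfit evaluation costs a full simulation, which is exactly why the paper uses a surrogate model rather than a direct search like this one.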

  6. Model behavior and sensitivity in an application of the cohesive bed component of the community sediment transport modeling system for the York River estuary, VA, USA

    USGS Publications Warehouse

    Fall, Kelsey A.; Harris, Courtney K.; Friedrichs, Carl T.; Rinehimer, J. Paul; Sherwood, Christopher R.

    2014-01-01

    The Community Sediment Transport Modeling System (CSTMS) cohesive bed sub-model that accounts for erosion, deposition, consolidation, and swelling was implemented in a three-dimensional domain to represent the York River estuary, Virginia. The objectives of this paper are to (1) describe the application of the three-dimensional hydrodynamic York Cohesive Bed Model, (2) compare calculations to observations, and (3) investigate sensitivities of the cohesive bed sub-model to user-defined parameters. Model results for summer 2007 showed good agreement with tidal-phase averaged estimates of sediment concentration, bed stress, and current velocity derived from Acoustic Doppler Velocimeter (ADV) field measurements. An important step in implementing the cohesive bed model was specification of both the initial and equilibrium critical shear stress profiles, in addition to choosing other parameters like the consolidation and swelling timescales. This model promises to be a useful tool for investigating the fundamental controls on bed erodibility and settling velocity in the York River, a classical muddy estuary, provided that appropriate data exists to inform the choice of model parameters.
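
    The role of the critical shear stress profile can be illustrated with the common linear excess-shear-stress erosion closure, a generic cohesive-bed law rather than the CSTMS formulation itself; the rate constant M and the profile values below are illustrative, not the York River calibration:

```python
def erosion_flux(tau_b, tau_c, M=1e-4):
    """Linear excess-shear-stress erosion law:
    E = M * (tau_b - tau_c) for tau_b > tau_c, else 0 (kg m^-2 s^-1)."""
    return M * max(0.0, tau_b - tau_c)

# Critical stress increasing with depth in the bed (consolidation), Pa.
tau_c_profile = [0.05, 0.10, 0.20, 0.40]
fluxes = [erosion_flux(0.25, tc) for tc in tau_c_profile]
```

    A depth-increasing tau_c limits how deep a given bed stress can erode, which is why specifying the initial and equilibrium critical stress profiles matters so much in the model above.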

  7. An easy-to-use tool for the evaluation of leachate production at landfill sites.

    PubMed

    Grugnaletti, Matteo; Pantini, Sara; Verginelli, Iason; Lombardi, Francesco

    2016-09-01

    A simulation program for the evaluation of leachate generation at landfill sites is herein presented. The developed tool is based on a water balance model that accounts for all the key processes influencing leachate generation through analytical and empirical equations. After a short description of the tool, different simulations on four Italian landfill sites are shown. The obtained results revealed that when literature values were assumed for the unknown input parameters, the model provided a rough estimation of the leachate production measured in the field. In this case, indeed, the deviations between observed and predicted data appeared, in some cases, significant. Conversely, by performing a preliminary calibration for some of the unknown input parameters (e.g. initial moisture content of wastes, compression index), in nearly all cases the model performance significantly improved. These results, although showing the potential capability of a water balance model to estimate leachate production at landfill sites, also highlighted the intrinsic limitation of a deterministic approach in accurately forecasting leachate production over time. Indeed, parameters such as the initial water content of incoming waste and the compression index, which have a great influence on leachate production, may exhibit temporal variation due to seasonal changes in weather conditions (e.g. rainfall, air humidity) as well as seasonal variability in the amount and type of specific waste fractions produced (e.g. yard waste, food, plastics), which makes their prediction quite complicated. In this sense, we believe that a tool such as the one proposed in this work, which requires a limited number of unknown parameters, can be more easily used to quantify the uncertainties. Copyright © 2016 Elsevier Ltd. All rights reserved.
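
    The water-balance idea can be sketched in a few lines, far simpler than the tool's formulation (no runoff, cover, or compression terms); all numbers below, including the initial moisture and field capacity, are illustrative:

```python
def leachate_series(precip, pet, capacity, s0=0.0):
    """Minimal monthly water balance: infiltration fills waste storage up
    to a field capacity; any surplus drains as leachate (all in mm)."""
    s, out = s0, []
    for p, e in zip(precip, pet):
        s = max(0.0, s + p - e)       # wetting minus evaporative loss
        q = max(0.0, s - capacity)    # drainage above field capacity
        s -= q
        out.append(q)
    return out

precip = [80, 60, 40, 20, 10, 30, 90, 110]   # mm/month, illustrative
pet    = [20, 30, 50, 60, 70, 40, 20, 15]
q = leachate_series(precip, pet, capacity=150.0, s0=100.0)
```

    Note how the initial storage s0 controls when drainage first occurs, the toy analogue of the sensitivity to initial waste moisture discussed above.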

  8. Support Vector Machine Based Monitoring of Cardio-Cerebrovascular Reserve during Simulated Hemorrhage.

    PubMed

    van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J

    2017-01-01

    Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive to early detection of blood loss. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 (27 female) young [mean (sd): 24 (4) years], healthy subjects, central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1), or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasive hemodynamic derived signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics. Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The highest combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans, these parameters provide the best indication of the progression of central hypovolemia.
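
    The three-class monitoring task can be sketched on synthetic data. Below, a nearest-centroid classifier stands in for the support vector machine (to keep the sketch dependency-free), and the two features loosely mimic volumetric parameters falling with central blood volume; the class means and spreads are entirely invented:

```python
import random

def make_samples(n, cls, rng):
    """Synthetic two-feature samples, e.g. stroke volume (ml) and
    end-tidal CO2 (mmHg), both decreasing with CBV reduction."""
    mean = {0: (80.0, 40.0), 1: (65.0, 36.0), 2: (45.0, 30.0)}[cls]
    return [(rng.gauss(mean[0], 3.0), rng.gauss(mean[1], 1.5), cls)
            for _ in range(n)]

def nearest_centroid(train):
    """Fit per-class centroids; classify by nearest centroid (a simple
    stand-in for the SVM used in the study)."""
    cents = {}
    for c in (0, 1, 2):
        pts = [(x, y) for x, y, lab in train if lab == c]
        cents[c] = (sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts))
    def predict(x, y):
        return min(cents, key=lambda c: (x - cents[c][0]) ** 2
                                        + (y - cents[c][1]) ** 2)
    return predict

rng = random.Random(42)
train = sum((make_samples(60, c, rng) for c in (0, 1, 2)), [])
test = sum((make_samples(40, c, rng) for c in (0, 1, 2)), [])
predict = nearest_centroid(train)
accuracy = sum(predict(x, y) == lab for x, y, lab in test) / len(test)
```

    With real data the class overlap is far larger, which is why the study compares nine feature sets and reports per-class sensitivity and specificity rather than a single accuracy.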

  9. Sensitivity analysis of horizontal heat and vapor transfer coefficients for a cloud-topped marine boundary layer during cold-air outbreaks. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chang, Y. V.

    1986-01-01

    The effects of external parameters on the surface heat and vapor fluxes into the marine atmospheric boundary layer (MABL) during cold-air outbreaks are investigated using the numerical model of Stage and Businger (1981a). These fluxes are nondimensionalized using the horizontal heat (g1) and vapor (g2) transfer coefficient method first suggested by Chou and Atlas (1982) and further formulated by Stage (1983a). In order to simplify the problem, the boundary layer is assumed to be well mixed and horizontally homogeneous, and to have linear shoreline soundings of equivalent potential temperature and mixing ratio. Modifications of initial surface flux estimates, time step limitation, and termination conditions are made to the MABL model to obtain accurate computations. The dependence of g1 and g2 in the cloud topped boundary layer on the external parameters (wind speed, divergence, sea surface temperature, radiative sky temperature, cloud top radiation cooling, and initial shoreline soundings of temperature, and mixing ratio) is studied by a sensitivity analysis, which shows that the uncertainties of horizontal transfer coefficients caused by changes in the parameters are reasonably small.
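
    The surface fluxes whose nondimensional forms g1 and g2 describe can be illustrated with standard bulk-aerodynamic formulas. These are generic textbook expressions, not the Stage and Businger model; the transfer coefficient and the sounding values are illustrative:

```python
def surface_fluxes(U, T_sea, T_air, q_sea, q_air,
                   rho=1.2, cp=1004.0, Lv=2.5e6, C=1.3e-3):
    """Bulk-aerodynamic sensible and latent heat fluxes (W m^-2):
       H  = rho*cp*C*U*(T_sea - T_air)
       LE = rho*Lv*C*U*(q_sea - q_air)
    C is an illustrative neutral bulk transfer coefficient."""
    H = rho * cp * C * U * (T_sea - T_air)
    LE = rho * Lv * C * U * (q_sea - q_air)
    return H, LE

# Cold-air outbreak: cold, dry air advected over warmer water
# (temperatures in K, specific humidities in kg/kg; values illustrative).
H, LE = surface_fluxes(U=15.0, T_sea=288.0, T_air=278.0,
                       q_sea=0.0107, q_air=0.0030)
```

    The large air-sea temperature and humidity contrasts typical of outbreaks drive fluxes of hundreds of W m^-2, which is the regime the horizontal transfer coefficients summarize.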

  10. Ribosome biogenesis in replicating cells: Integration of experiment and theory.

    PubMed

    Earnest, Tyler M; Cole, John A; Peterson, Joseph R; Hallock, Michael J; Kuhlman, Thomas E; Luthey-Schulten, Zaida

    2016-10-01

    Ribosomes-the primary macromolecular machines responsible for translating the genetic code into proteins-are complexes of precisely folded RNA and proteins. The ways in which their production and assembly are managed by the living cell are of deep biological importance. Here we extend a recent spatially resolved whole-cell model of ribosome biogenesis in a fixed volume [Earnest et al., Biophys J 2015, 109, 1117-1135] to include the effects of growth, DNA replication, and cell division. All biological processes are described in terms of reaction-diffusion master equations and solved stochastically using the Lattice Microbes simulation software. In order to determine the replication parameters, we construct and analyze a series of Escherichia coli strains with fluorescently labeled genes distributed evenly throughout their chromosomes. By measuring these cells' lengths and numbers of gene copies at the single-cell level, we could fit a statistical model of the initiation and duration of chromosome replication. We found that for our slow-growing (120 min doubling time) E. coli cells, replication was initiated 42 min into the cell cycle and completed after an additional 42 min. While simulations of the biogenesis model produce the correct ribosome and mRNA counts over the cell cycle, the kinetic parameters for transcription and degradation are lower than anticipated from a recent analytical time-dependent model of in vivo mRNA production. Describing expression in terms of a simple chemical master equation, we show that the discrepancies are due to the lack of nonribosomal genes in the extended biogenesis model, which affects the competition of mRNAs for ribosome binding, and suggest corrections to the parameters to be used in the whole-cell model when modeling expression of the entire transcriptome. © 2016 Wiley Periodicals, Inc. Biopolymers 105: 735-751, 2016.
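
    The "simple chemical master equation" view can be illustrated with a Gillespie simulation of the most reduced transcript birth-death process, whose stationary mean is k_tx/k_deg. This is a textbook reduction, not the whole-cell model, and the rates are illustrative:

```python
import random

def ssa_mean_mrna(k_tx, k_deg, t_end, rng, burn_in=0.25):
    """Gillespie (SSA) simulation of mRNA copy number m with production
    at rate k_tx and first-order degradation at rate k_deg*m. Returns
    the time-weighted mean of m after a burn-in fraction of the run."""
    t, m = 0.0, 0
    t_burn = burn_in * t_end
    acc = tot = 0.0
    while t < t_end:
        a0 = k_tx + k_deg * m              # total propensity
        dt = rng.expovariate(a0)           # time to the next event
        if t > t_burn:                     # accumulate the time average
            acc += m * dt
            tot += dt
        t += dt
        m += 1 if rng.random() < k_tx / a0 else -1
    return acc / tot

rng = random.Random(7)
mean_m = ssa_mean_mrna(k_tx=2.0, k_deg=0.1, t_end=2000.0, rng=rng)
# The stationary mean is k_tx/k_deg = 20 copies; mean_m should be close.
```

    In the full model, many such genes compete for the same ribosome pool, which is exactly the coupling whose absence the abstract identifies as the source of the parameter discrepancy.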

  11. Seasonal thermal energy storage in aquifers: Mathematical modeling studies in 1979

    NASA Technical Reports Server (NTRS)

    Tsang, C. F.

    1980-01-01

    A numerical model of water and heat flow in geologic media was developed, verified, and tested. The hydraulic parameters (transmissivity and storativity) and the location of a linear hydrologic barrier were simulated and compared with results from field experiments involving two injection-storage-recovery cycles. For both cycles, the initial simulated and observed temperatures agree (55°C).

  12. VizieR Online Data Catalog: Comparison of evolutionary tracks (Martins+, 2013)

    NASA Astrophysics Data System (ADS)

    Martins, F.; Palacios, A.

    2013-11-01

    Tables of evolutionary models for massive stars. The files m*_stol.dat correspond to models computed with the code STAREVOL. The files m*_mesa.dat correspond to models computed with the code MESA. For each code, models with initial masses equal to 7, 9, 15, 20, 25, 40 and 60M⊙ are provided. No rotation is included. The overshooting parameter f is equal to 0.01. The metallicity is solar. (14 data files).

  13. Preparation of char from lotus seed biomass and the exploration of its dye removal capacity through batch and column adsorption studies.

    PubMed

    Nethaji, S; Sivasamy, A; Kumar, R Vimal; Mandal, A B

    2013-06-01

    Char was obtained from lotus seed biomass by a simple single-step acid treatment process. It was used as an adsorbent for the removal of malachite green dye (MG) from simulated dye bath effluent. The adsorbent was characterized for its surface morphology, surface functionalities, and zero point charge. Batch studies were carried out by varying parameters such as initial aqueous pH, adsorbent dosage, adsorbent particle size, and initial adsorbate concentration. Langmuir and Freundlich isotherms were used to test the isotherm data, and the Freundlich isotherm best fitted the data. Thermodynamic studies were carried out and the thermodynamic parameters ∆G, ∆H, and ∆S were evaluated. Adsorption kinetics were studied and the data were tested with the pseudo-first-order model, pseudo-second-order model, and intraparticle diffusion model. Adsorption of MG was not solely by intraparticle diffusion; film diffusion also played a major role. Continuous column experiments were also conducted using a microcolumn, and the spent adsorbent was regenerated using ethanol and repeatedly used for three cycles in the column to determine the reusability of the regenerated adsorbent. The column data were modeled with the Adam-Bohart model, Bed Depth Service Time (BDST) model, and Yoon-Nelson model for all three cycles.
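
    The isotherm-fitting step can be sketched with the usual linearization of the Freundlich model. This is the generic procedure, not the paper's data; the points below are synthetic, generated from known constants so the fit can be checked:

```python
import math

def fit_freundlich(ce, qe):
    """Linearized Freundlich fit: ln(qe) = ln(KF) + (1/n)*ln(Ce).
    Returns (KF, n) by ordinary least squares on the log-transformed data."""
    xs = [math.log(c) for c in ce]
    ys = [math.log(q) for q in qe]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    kf = math.exp(my - slope * mx)
    return kf, 1.0 / slope

# Synthetic equilibrium data generated with KF = 2.0, n = 2.5 (illustrative).
ce = [1.0, 5.0, 10.0, 25.0, 50.0]       # equilibrium concentrations, mg/L
qe = [2.0 * c ** (1.0 / 2.5) for c in ce]  # adsorbed amounts, mg/g
kf, n = fit_freundlich(ce, qe)
```

    With real batch data the log-log plot would scatter, and the correlation coefficient of this regression is what decides between the Langmuir and Freundlich fits.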

  14. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
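
    The CTMI itself is compact enough to state directly. The sketch below evaluates the Rosso et al. (1993) expression with illustrative cardinal temperatures for a mesophile, not fitted values from this study:

```python
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993):
    maximum specific growth rate as a function of temperature, zero
    outside the interval (Tmin, Tmax); mu(Topt) = mu_opt."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return mu_opt * num / den

# Illustrative cardinal temperatures (degrees C) and optimum rate (1/h).
params = dict(Tmin=5.0, Topt=37.0, Tmax=45.0, mu_opt=2.0)
rates = {T: ctmi(T, **params) for T in (10.0, 25.0, 37.0, 44.0)}
```

    The four parameters enter the curve very unevenly (the shape is much more sensitive near Tmax than near Tmin), which is one reason the choice of experiment-design strategy matters for estimation accuracy.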

  15. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
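
A minimal sketch of the idea (not the authors' implementation): a one-variable S-system-type equation with a simple power-law efflux term is fitted to synthetic noisy metabolite data by stepwise grid refinement, and the smoothing curve is the integral of the fitted equation, so it satisfies the mass balance by construction. The kinetic order g is held fixed for simplicity; all values are illustrative.

```python
import random

# One-variable S-system-type smoothing model: dX/dt = alpha - beta * X**g
# (power-law efflux). Parameters alpha and beta are estimated stepwise.
random.seed(1)
alpha_true, beta_true, g_true, x0 = 2.0, 0.5, 1.0, 0.2
ts = [0.25 * i for i in range(21)]

def integrate(alpha, beta, g, x0, ts, dt=0.005):
    """Euler integration of dX/dt = alpha - beta*X**g, sampled at ts."""
    x, t, out = x0, 0.0, []
    for target in ts:
        while t < target - 1e-9:
            x += dt * (alpha - beta * x ** g)
            t += dt
        out.append(x)
    return out

truth = integrate(alpha_true, beta_true, g_true, x0, ts)
data = [v + random.gauss(0, 0.05) for v in truth]   # synthetic noisy data

def sse(alpha, beta):
    model = integrate(alpha, beta, g_true, x0, ts)
    return sum((m - d) ** 2 for m, d in zip(model, data))

# Stepwise refinement: scan alpha with beta fixed, then beta with alpha fixed.
alpha, beta = 1.0, 1.0
for _ in range(3):
    alpha = min((0.05 * k for k in range(1, 80)), key=lambda a: sse(a, beta))
    beta = min((0.05 * k for k in range(1, 40)), key=lambda b: sse(alpha, b))

smooth = integrate(alpha, beta, g_true, x0, ts)     # mass-balanced smoothing curve
```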

  16. Constraining Controls on the Emplacement of Long Lava Flows on Earth and Mars Through Modeling in ArcGIS

    NASA Astrophysics Data System (ADS)

    Golder, K.; Burr, D. M.; Tran, L.

    2017-12-01

    Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameter(s) may control the development of long lava flows on Mars, we are applying a computational numerical model to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars. 
We then plan to convert the model into Python, for easy modification and portability within the community.

  17. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data is conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. These algorithms require sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models and with user experience about the behavior of fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, a procedure is needed that treats FCS data fitting as a black box and generates accurate, reliable fit parameters for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits including a triplet state. The statistical study and the goodness of fit criterion for PGSL are also presented. The robustness of PGSL on noisy experimental data for parameter estimation is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
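
A hedged sketch of the black-box idea (a generic shrink-and-resample global search standing in for the actual PGSL algorithm): parameters are found by uniform sampling over broad ranges with no initial guesses, here fitting a one-component 3D-free-diffusion FCS model G(tau) = (1/N)/(1 + tau/tauD) to synthetic data.

```python
import random

# Simple shrink-and-resample global search (a stand-in for PGSL, not the
# real algorithm): sample uniformly over a box, then repeatedly re-centre
# and shrink the box around the best point found so far.
random.seed(0)
taus = [1e-6 * 2 ** k for k in range(18)]

def model(tau, N, tauD):
    return (1.0 / N) / (1.0 + tau / tauD)

N_true, tauD_true = 5.0, 1e-4
data = [model(t, N_true, tauD_true) * (1 + random.gauss(0, 0.01)) for t in taus]

def mse(theta):
    N, tauD = theta[0], 10.0 ** theta[1]     # tauD searched on a log scale
    return sum((model(t, N, tauD) - d) ** 2 for t, d in zip(taus, data)) / len(taus)

lo, hi = [0.1, -6.0], [100.0, -2.0]          # broad ranges, no initial guess
best, best_val = None, float("inf")
for _ in range(40):                          # shrink-and-resample cycles
    for _ in range(50):
        cand = [random.uniform(lo[i], hi[i]) for i in range(2)]
        val = mse(cand)
        if val < best_val:
            best, best_val = cand, val
    for i in range(2):                       # focus the box on the best point
        width = (hi[i] - lo[i]) * 0.7
        lo[i] = max(lo[i], best[i] - width / 2)
        hi[i] = min(hi[i], best[i] + width / 2)

N_fit, tauD_fit = best[0], 10.0 ** best[1]
```

In the hybrid scheme described above, `(N_fit, tauD_fit)` would then seed a local Marquardt-Levenberg refinement.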

  18. Ignition and Growth Modeling of Shock Initiation of Different Particle Size Formulations of PBXC03 Explosive

    NASA Astrophysics Data System (ADS)

    Hussain, Tariq; Liu, Yan; Huang, Fenglei; Duan, Zhuoping

    2016-01-01

    The change in shock sensitivity of explosives having various explosive grain sizes is discussed. Along with other parameters, explosive grain size is one of the key parameters controlling the macroscopic behavior of shocked pressed explosives. Ignition and growth reactive flow modeling is performed for the shock initiation experiments carried out by using the in situ manganin piezoresistive pressure gauge technique to investigate the influence of the octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) particle size on the shock initiation and the subsequent detonation growth process for three explosive formulations of pressed PBXC03 (87% HMX, 7% 1,3,5-trichloro-2,4,6-trinitrobenzene (TATB), 6% Viton by weight). All of the formulations studied had the same density but different explosive grain sizes. A set of ignition and growth parameters was obtained for all three formulations. Only the coefficient G1 of the first growth term in the reaction rate equation was varied with the grain size; all other parameters were kept the same for all formulations. It was found that G1 decreases almost linearly with HMX particle size for PBXC03. However, the equation of state (EOS) for the solid explosive had to be adjusted to fit the experimental data. Both experimental and numerical simulation results show that the shock sensitivity of PBXC03 decreases with increasing HMX particle size for the sustained pressure pulses (around 4 GPa) obtained in the experiment. This result is in accordance with results reported elsewhere in the literature. For future work, a better approach may be to find a standard solid Grüneisen EOS and product Jones-Wilkins-Lee (JWL) EOS for each formulation for the best fit to the experimental data.

  19. Synchronization transition of a coupled system composed of neurons with coexisting behaviors near a Hopf bifurcation

    NASA Astrophysics Data System (ADS)

    Jia, Bing

    2014-05-01

    The coexistence of a resting condition and period-1 firing near a subcritical Hopf bifurcation point, lying between the monostable resting condition and period-1 firing, is often observed in neurons of the central nervous system. Near such a bifurcation point in the Morris-Lecar (ML) model, the attraction domain of the resting condition decreases while that of the coexisting period-1 firing increases as the bifurcation parameter value increases. As the coupling strength increases, parameter- and initial-value-dependent synchronization transitions from non-synchronization to complete synchronization are simulated in two coupled ML neurons with coexisting behaviors: one neuron is chosen as the resting condition and the other as the coexisting period-1 firing. The complete synchronization is either a resting condition or period-1 firing, depending on the initial values of the period-1 firing neuron, when the bifurcation parameter value is small or intermediate, and is period-1 firing when the parameter value is large. As the bifurcation parameter value increases, the probability of initial values of the period-1 firing neuron that lead to complete synchronization of period-1 firing increases, while that leading to complete synchronization of the resting condition decreases. This shows that the larger the attraction domain of a coexisting behavior, the higher the probability that initial values lead to complete synchronization of that behavior. The bifurcations of the coupled system are investigated and discussed. The results reveal the complex synchronization dynamics of a coupled system composed of neurons with the coexisting resting condition and period-1 firing, and are helpful for further identifying the dynamics of the spatiotemporal behaviors of the central nervous system.
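
For concreteness, a generic two-neuron Morris-Lecar simulation with diffusive coupling can be sketched as below. The parameters are standard textbook Hopf-case values, not the paper's bifurcation setting, and the coupling strengths are illustrative.

```python
import math

# Standard Morris-Lecar parameters (Hopf case, textbook values; NOT the
# paper's setting). Two neurons are coupled diffusively with strength gc.
C, gL, VL = 20.0, 2.0, -60.0
gCa, VCa = 4.4, 120.0
gK, VK = 8.0, -84.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi, I_ext = 0.04, 90.0

def derivs(V, w, V_other, gc):
    m_inf = 0.5 * (1 + math.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + math.tanh((V - V3) / V4))
    dV = (I_ext - gL * (V - VL) - gCa * m_inf * (V - VCa)
          - gK * w * (V - VK) + gc * (V_other - V)) / C
    dw = phi * (w_inf - w) * math.cosh((V - V3) / (2 * V4))
    return dV, dw

def simulate(gc, T=1000.0, dt=0.05):
    """Euler integration; returns the voltage-difference trace |Va - Vb|."""
    Va, wa, Vb, wb = -20.0, 0.1, -40.0, 0.0   # different initial values
    diff = []
    for _ in range(int(round(T / dt))):
        dVa, dwa = derivs(Va, wa, Vb, gc)
        dVb, dwb = derivs(Vb, wb, Va, gc)
        Va, wa = Va + dt * dVa, wa + dt * dwa
        Vb, wb = Vb + dt * dVb, wb + dt * dwb
        diff.append(abs(Va - Vb))
    return diff

weak, strong = simulate(0.1), simulate(2.0)   # illustrative coupling strengths
```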

  20. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters (and the predictions that depend on them) arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  1. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
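
A toy sketch of sequential Bayesian experimental design under strong simplifying assumptions (scalar linear low-fidelity model, Gaussian prior and noise; the high-fidelity "code" is a stand-in function, and all numbers are invented):

```python
import math

# Low-fidelity model: y = theta * x, with a Gaussian prior on theta.
# For this linear-Gaussian case, the expected information gain of evaluating
# the high-fidelity code at x is 0.5 * log(1 + x**2 * var / var_noise).

def high_fidelity(x):              # stand-in for the expensive code
    return 2.0 * x + 0.05 * x ** 2

mu, var = 0.0, 10.0                # prior mean and variance of theta
var_noise = 0.5                    # assumed discrepancy/observation variance
candidates = [0.5, 1.0, 2.0, 4.0]

for _ in range(3):                 # three sequential high-fidelity runs
    x = max(candidates,
            key=lambda c: 0.5 * math.log(1 + c * c * var / var_noise))
    y = high_fidelity(x)
    # Conjugate Gaussian update of the posterior over theta
    prec = 1.0 / var + x * x / var_noise
    mu = (mu / var + x * y / var_noise) / prec
    var = 1.0 / prec
```

For this scalar linear model the largest |x| is always most informative, so the design degenerates to repeating one point; with multiple parameters or nonlinear models the chosen points spread out, which is where such sequential designs pay off.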

  2. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
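
The Hooke and Jeeves search at the heart of the optimization step can be sketched as follows (a minimal textbook version, not NASA's exact variant): exploratory moves along each coordinate, followed by a pattern move through the improved point, with the step size shrunk when no move helps.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=500):
    """Basic Hooke and Jeeves pattern search (sketch of the classic method)."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):                  # try +/- step on each axis
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol and max_iter > 0:
        max_iter -= 1
        nx = explore(x, step)
        if f(nx) < f(x):
            # pattern move: jump through the improved point, then re-explore
            pattern = [2 * n - o for n, o in zip(nx, x)]
            px = explore(pattern, step)
            x = px if f(px) < f(nx) else nx
        else:
            step *= shrink                     # no improvement: refine the mesh
    return x

# Example: tune two "control parameters" against a measured cost surface.
# No model of the cost is needed, only the ability to evaluate it.
cost = lambda p: (p[0] - 1.0) ** 2 + 10 * (p[1] + 2.0) ** 2
opt = hooke_jeeves(cost, [0.0, 0.0])
```

This derivative-free structure is why the method suits systems where performance can only be measured via simulation or experiment.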

  3. Adjoint tomography and centroid-moment tensor inversion of the Kanto region, Japan

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.

    2017-12-01

    A three-dimensional seismic wave speed model in the Kanto region of Japan was developed using adjoint tomography based on large-scale computing. Starting with a model based on previous travel time tomographic results, we inverted the waveforms obtained at seismic broadband stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds Vp and Vs. The synthetic displacements were calculated using the spectral element method (SEM; e.g. Komatitsch and Tromp 1999; Peter et al. 2011), in which the Kanto region was parameterized using 16 million grid points. The model parameters Vp and Vs were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. The proposed model reveals several anomalous areas with extremely low Vs values in comparison with those of the initial model. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes fit the observed waveforms better than those of the initial model in different period ranges within 5-30 s. In the present study, all centroid times of the source solutions were determined beforehand using time shifts based on cross correlation, to limit the computational cost of the structural inversion. Additionally, the parameters of the centroid-moment solutions were fully determined using the SEM assuming the 3D structure (e.g. Liu et al. 2004). As a preliminary result, the new solutions were basically the same as their initial solutions. This may indicate that the 3D structure has little effect on the source estimation. Acknowledgements: This study was supported by JSPS KAKENHI Grant Number 16K21699.

  4. Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions

    NASA Astrophysics Data System (ADS)

    Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.

    2017-12-01

    Ice sheet and glacier model outputs require inputs from uncertainly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore both the sensitivities of the Greenland Ice Sheet's entire and regional volumes to: initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.
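
A toy illustration of why the adjoint route is cheaper (a two-variable linear stand-in, not SICOPOLIS): for a step x_{k+1} = A x_k + b and objective J = c . x_N, a single backward sweep with A^T yields the sensitivity of J to every component of the initial state, whereas forward perturbation needs one model run per component.

```python
# Two-variable linear stand-in for a time-stepping model (illustrative only).
A = [[0.9, 0.1], [0.05, 0.9]]
b = [0.01, 0.02]
c = [1.0, 1.0]          # e.g., total "volume" = sum of thickness components
N = 50

def step(x):
    return [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1]]

def run(x0):
    x = list(x0)
    for _ in range(N):
        x = step(x)
    return c[0] * x[0] + c[1] * x[1]

# Adjoint (reverse) sweep: ONE pass gives dJ/dx0 for all components.
lam = list(c)
for _ in range(N):
    lam = [A[0][0] * lam[0] + A[1][0] * lam[1],
           A[0][1] * lam[0] + A[1][1] * lam[1]]   # i.e. lam <- A^T lam

# Costly alternative: one perturbed forward run per state component.
x0, h = [1.0, 2.0], 1e-6
base = run(x0)
fd = [(run([x0[0] + h, x0[1]]) - base) / h,
      (run([x0[0], x0[1] + h]) - base) / h]
```

For an ice sheet model with millions of state variables, the finite-difference column would require millions of forward runs; the adjoint sweep stays at roughly the cost of one.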

  5. Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.

    2012-12-01

    Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption. Specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance σ_d of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
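
A minimal sketch of the process class involved (illustrative only, not the reversible-jump sampler): an ARFIMA(0, d, 0) series can be generated as a truncated MA(infinity) with weights psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.

```python
import random

# Truncated-MA(infinity) simulation of an ARFIMA(0, d, 0) process.
def arfima_weights(d, K):
    psi = [1.0]
    for k in range(1, K):
        psi.append(psi[-1] * (k - 1 + d) / k)
    return psi

random.seed(42)
d, K, n = 0.3, 300, 1500
psi = arfima_weights(d, K)
eps = [random.gauss(0, 1) for _ in range(n + K)]
x = [sum(psi[k] * eps[K + t - k] for k in range(K)) for t in range(n)]

# The lag-1 sample autocorrelation should be near d / (1 - d) ~ 0.43,
# and the autocorrelation decays slowly: the signature of long memory.
mean = sum(x) / n
var = sum((v - mean) ** 2 for v in x) / n
acf1 = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1)) / (n * var)
```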

  6. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
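
The acceptance-rate dependence of the tuning parameters can be illustrated with a generic adaptive random-walk Metropolis sketch (standard MCMC practice, not the paper's exact scheme): the proposal scale is adjusted during sampling so the acceptance rate approaches a common 1-D target of about 0.44.

```python
import math, random

# Adaptive random-walk Metropolis on a standard normal target (illustrative).
random.seed(7)
log_target = lambda x: -0.5 * x * x

def metropolis(n, scale, adapt=False, target=0.44):
    x, accepted = 0.0, 0
    for i in range(1, n + 1):
        prop = x + random.gauss(0, scale)
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x, accepted = prop, accepted + 1
        if adapt and i % 100 == 0:
            rate_so_far = accepted / i
            # Enlarge the proposal if accepting too often, shrink otherwise.
            scale *= math.exp(rate_so_far - target)
    return scale, accepted / n

scale, rate = metropolis(20000, 10.0, adapt=True)   # deliberately poor start
```

Starting from a badly mistuned scale, the adaptation drives the acceptance rate toward the target, which is exactly why the tuning parameters "depend on the acceptance rate".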

  7. Applying the relaxation model of interfacial heat transfer to calculate the liquid outflow with supercritical initial parameters

    NASA Astrophysics Data System (ADS)

    Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.

    2017-09-01

    A comparative numerical simulation of supercritical fluid outflow using thermodynamic-equilibrium and non-equilibrium relaxation models of the phase transition, for different relaxation times, has been performed. The model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model with a dynamically changing relaxation time, calculated by formula (7) and depending on local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium during outflow. The application of the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved on the basis of processing experimental data on the distribution of the droplet sizes formed during the break-up of the liquid jet.

  8. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
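
A toy sketch of the flagging logic (the conditional probabilities below are invented; in the paper such tables are learned from clinical data with the Hugin Expert software): a plan parameter whose conditional probability given the clinical context falls below a threshold is flagged for review.

```python
# Hypothetical conditional probability table linking a clinical variable
# ("site") to a plan parameter ("technique"); numbers are invented.
p_technique_given_site = {
    "lung":   {"VMAT": 0.55, "3D-CRT": 0.40, "electron": 0.05},
    "breast": {"VMAT": 0.15, "3D-CRT": 0.60, "electron": 0.25},
}

def flag(site, technique, threshold=0.10):
    """Flag a plan whose parameter is improbable given the clinical context."""
    prob = p_technique_given_site[site].get(technique, 0.0)
    return prob < threshold, prob

suspicious, p = flag("lung", "electron")   # an improbable combination
```

A full network would propagate several such tables jointly (prescription dose, fractionation, technique, ...) rather than thresholding one table at a time.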

  9. Comparative estimation and assessment of initial soil moisture conditions for Flash Flood warning in Saxony

    NASA Astrophysics Data System (ADS)

    Luong, Thanh Thi; Kronenberg, Rico; Bernhofer, Christian; Janabi, Firas Al; Schütze, Niels

    2017-04-01

    Flash Floods are known as highly destructive natural hazards due to their sudden appearance and severe consequences. In Saxony/Germany flash floods occur in small and medium catchments of low mountain ranges which are typically ungauged. Besides rainfall and orography, pre-event moisture is decisive, as it determines the available natural retention in the catchment. The Flash Flood Guidance concept according to WMO and Prof. Marco Borga (University of Padua) will be adapted to incorporate pre-event moisture in real-time flood forecasts within the ESF EXTRUSO project (SAB-Nr. 100270097). To arrive at pre-event moisture for the complete area of the low mountain range with flash flood potential, a widely applicable, accurate but yet simple approach is needed. Here, we use radar precipitation as input time series, detailed orographic, land-use and soil information and a lumped parameter model to estimate the overall catchment soil moisture and potential retention. When combined with rainfall forecasts and their intrinsic uncertainty, the approach allows us to find the point in time when precipitation exceeds the retention potential of the catchment. Then, spatially distributed and complex hydrological modeling and additional measurements can be initiated. Assuming reasonable rainfall forecasts of 24 to 48 hrs, this part can start up to two days in advance of the actual event. The lumped-parameter model BROOK90 is used and tested for well-observed catchments. First, physically meaningful parameters (like albedo or soil porosity) are set according to standards; second, "free" parameters (like the percentage of lateral flow) are calibrated objectively by PEST (Model-Independent Parameter Estimation and Uncertainty Analysis), targeting evapotranspiration and soil moisture, both of which have been measured at the study site Anchor Station Tharandt in Saxony/Germany. 
Finally, first results are presented for the Wernersbach catchment in Tharandt forest for main flood events in the 50-year gauging period since 1968.

  10. Chimera patterns in two-dimensional networks of coupled neurons.

    PubMed

    Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp

    2017-03-01

    We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.

  11. Determination of deuterium–tritium critical burn-up parameter by four temperature theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazirzadeh, M.; Ghasemizad, A.; Khanbabei, B.

    Conditions for thermonuclear burn-up of an equimolar mixture of deuterium-tritium in non-equilibrium plasma have been investigated by four temperature theory. The photon distribution shape significantly affects the nature of thermonuclear burn. In the three temperature model the photon distribution is Planckian, whereas in four temperature theory it has a pure Planck form below a certain cut-off energy and, above this cut-off, makes a transition to a Bose-Einstein distribution with a finite chemical potential. The objective was to develop four temperature theory in a plasma to calculate the critical burn-up parameter, which depends upon the initial density, the initial temperatures of the plasma components, and the hot spot size. All results obtained from the four temperature theory model are compared with the three temperature model. It is shown that the values of the critical burn-up parameter calculated by four temperature theory are smaller than those of the three temperature model.

  12. Chimera patterns in two-dimensional networks of coupled neurons

    NASA Astrophysics Data System (ADS)

    Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp

    2017-03-01

    We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.

  13. Optimization of the structural and control system for LSS with reduced-order model

    NASA Technical Reports Server (NTRS)

    Khot, N. S.

    1989-01-01

    The objective is the simultaneous design of the structural and control systems for space structures. The minimum weight of the structure is the objective function, and constraints are placed on the closed-loop distribution of the frequencies and the damping parameters. The control approach used is a linear quadratic regulator with constant feedback. A reduced-order control system is used. The effect of uncontrolled modes is taken into consideration by the model error sensitivity suppression (MESS) technique, which modifies the weighting parameters for the control forces. For illustration, an ACOSS-FOUR structure is designed for different numbers of controlled modes with specified values for the closed-loop damping parameters and frequencies. The dynamic response of the optimum designs to an initial disturbance is compared.

  14. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when the constraining flux records are shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
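
    The record-length effect can be illustrated with a toy least-squares fit on synthetic data (this is purely a sketch of the statistical mechanism, not the LoTEC model):

```python
import numpy as np

rng = np.random.default_rng(42)

def param_std(n_obs, true_k=0.3, noise=0.5, trials=200):
    """Spread of a fitted rate parameter across repeated synthetic
    'flux records' of length n_obs: longer records pin it down better."""
    estimates = []
    for _ in range(trials):
        x = rng.uniform(0, 10, n_obs)                 # synthetic driver
        y = true_k * x + rng.normal(0, noise, n_obs)  # noisy synthetic flux
        estimates.append(np.sum(x * y) / np.sum(x * x))  # least-squares slope
    return float(np.std(estimates))

short = param_std(50)    # short constraining record
long_ = param_std(500)   # ten times the data: smaller parameter uncertainty
```

    For independent noise the spread shrinks roughly as the square root of the record length; in real model-data fusion the gain is slower because flux errors are autocorrelated and model structural error does not average away.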

  15. Creep strain and creep-life prediction for alloy 718 using the omega method

    NASA Astrophysics Data System (ADS)

    Yeom, Jong-Taek; Kim, Jong-Yup; Na, Young-Sang; Park, Nho-Kwang

    2003-12-01

    The creep behavior of Alloy 718 was investigated in relation to the MPC omega (Ω) method. To evaluate the creep model and determine its material parameters, constant-load creep tests were performed at different initial stresses in a temperature range between 550°C and 700°C. The imaginary initial strain rate ε̇₀ and omega (Ω), considered to be the important variables in the model, were expressed as functions of initial stress and temperature. For these variables, power-law and hyperbolic sine-law equations were used as constitutive equations for the creep of Alloy 718. To account for the coarsening of γ″, which leads to a sharp drop in tensile strength and creep strength at temperatures above 650°C, different material constants were applied above 650°C. The reliability of the models was assessed against the measured creep curves and creep lives.
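
    The central relations of the omega method can be sketched directly: the strain rate accelerates exponentially with accumulated strain, which yields a closed-form creep curve and life. The parameter values below are illustrative, not the fitted Alloy 718 constants:

```python
import numpy as np

# Omega method: eps_dot = eps_dot0 * exp(Omega * eps) integrates to
#   eps(t) = -ln(1 - Omega * eps_dot0 * t) / Omega
# and predicts rupture (strain rate diverges) at t_r = 1/(Omega * eps_dot0).
eps_dot0 = 1e-6   # imaginary initial strain rate, 1/h (illustrative)
omega = 50.0      # omega damage parameter (illustrative)

t_rupture = 1.0 / (omega * eps_dot0)   # predicted creep life, h

def creep_strain(t):
    return -np.log(1.0 - omega * eps_dot0 * t) / omega

eps_90 = creep_strain(0.9 * t_rupture)   # strain accumulated at 90% of life
```

    In practice ε̇₀ and Ω are each expressed as power-law or hyperbolic-sine functions of stress and temperature, fitted to the constant-load test data.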

  16. Characterization of Ice Roughness From Simulated Icing Encounters

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Shin, Jaiwon

    1997-01-01

    Detailed measurements of the size of roughness elements on ice accreted on models in the NASA Lewis Icing Research Tunnel (IRT) were made in a previous study. Only limited data from that study have been published, but included were the roughness element height, diameter and spacing. In the present study, the height and spacing data were found to correlate with the element diameter, and the diameter was found to be a function primarily of the non-dimensional parameters freezing fraction and accumulation parameter. The width of the smooth zone which forms at the leading edge of the model was found to decrease with increasing accumulation parameter. Although preliminary, the success of these correlations suggests that it may be possible to develop simple relationships between ice roughness and icing conditions for use in ice-accretion-prediction codes. These codes now require an ice-roughness estimate to determine convective heat transfer. Studies using a 7.6-cm-diameter cylinder and a 53.3-cm-chord NACA 0012 airfoil were also performed in which a 1/2-min icing spray at an initial set of conditions was followed by a 9-1/2-min spray at a second set of conditions. The resulting ice shape was compared with that from a full 10-min spray at the second set of conditions. The initial ice accumulation appeared to have no effect on the final ice shape. From this result, it would appear the accreting ice is affected very little by the initial roughness or shape features.

  17. Effects of Microstructural Parameters on Creep of Nickel-Base Superalloy Single Crystals

    NASA Technical Reports Server (NTRS)

    MacKay, Rebecca A.; Gabb, Timothy P.; Nathal, Michael V.

    2013-01-01

    Microstructure-sensitive creep models have been developed for Ni-base superalloy single crystals. Creep rupture testing was conducted on fourteen single crystal alloys at two applied stress levels at each of two temperatures, 982 and 1093 C. The variation in creep lives among the different alloys could be explained with regression models containing relatively few microstructural parameters. At 982 C, gamma-gamma prime lattice mismatch, gamma prime volume fraction, and initial gamma prime size were statistically significant in explaining the creep rupture lives. At 1093 C, only lattice mismatch and gamma prime volume fraction were significant. These models could explain from 84 to 94 percent of the variation in creep lives, depending on test condition. Longer creep lives were associated with alloys having more negative lattice mismatch, lower gamma prime volume fractions, and finer gamma prime sizes. The gamma-gamma prime lattice mismatch exhibited the strongest influence of all the microstructural parameters at both temperatures. Although a majority of the alloys in this study were stable with respect to topologically close packed (TCP) phases, it appeared that up to approximately 2 vol% TCP phase did not affect the 1093 C creep lives under applied stresses that produced lives of approximately 200 to 300 h. In contrast, TCP phase contents of approximately 2 vol% were detrimental at lower applied stresses where creep lives were longer. A regression model was also developed for the as-heat treated initial gamma prime size; this model showed that gamma prime solvus temperature, gamma-gamma prime lattice mismatch, and bulk Re content were all statistically significant.

  18. Effect of Bearing Housings on Centrifugal Pump Rotor Dynamics

    NASA Astrophysics Data System (ADS)

    Yashchenko, A. S.; Rudenko, A. A.; Simonovskiy, V. I.; Kozlov, O. M.

    2017-08-01

    The article deals with the effect of a bearing housing on the rotor dynamics of a barrel-casing centrifugal boiler feed pump rotor. A calculation of the rotor model including the bearing housing has been performed by the method of initial parameters, and a calculation of a solid rotor model including the bearing housing has been performed by the finite element method. The results of both calculations highlight the need to include bearing housings in dynamic analyses of the pump rotor. Calculation with modern finite-element software packages is more time-consuming, but it is preferable because a graphical editor is used to create the numerical model. When many variants of the design parameters must be examined, beam-modeling programs should be used.

  19. Seasonal and spatial variation in broadleaf forest model parameters

    NASA Astrophysics Data System (ADS)

    Groenendijk, M.; van der Molen, M. K.; Dolman, A. J.

    2009-04-01

    Process-based, coupled ecosystem carbon, energy and water cycle models are used with the ultimate goal of projecting the effect of future climate change on the terrestrial carbon cycle. A typical dilemma in such exercises is how much detail the model must be given to describe the observations reasonably realistically while remaining general. We use a simple vegetation model (5PM) with five model parameters to study the variability of the parameters. These parameters are derived from the observed carbon and water fluxes in the FLUXNET database. For 15 broadleaf forests the model parameters were derived at different time resolutions. In general, for all forests, the correlation coefficient between observed and simulated carbon and water fluxes improves with higher parameter time resolution; a two-day time resolution yields the best results. This shows that annual parameters are not capable of properly describing weather effects on ecosystem fluxes. A first indication of the climate constraints can be found in the seasonal variation of the covariance between Jm, which describes the maximum electron transport rate for photosynthesis, and climate variables. A general seasonality we found is that during winter the covariance with all climate variables is zero. Jm increases rapidly after initial spring warming, resulting in a large covariance with air temperature and global radiation. During summer Jm is less variable, but co-varies negatively with air temperature and vapour pressure deficit and positively with soil water content; the photosynthesis parameters are then constrained by water availability, i.e. soil water content and vapour pressure deficit. A temperature response appears during spring and autumn for broadleaf forests. This shows that an annual model parameter cannot be representative for the entire year, and that relations with mean annual temperature are not possible.

  20. Calibration strategies for a groundwater model in a highly dynamic alpine floodplain

    USGS Publications Warehouse

    Foglia, L.; Burlando, P.; Hill, Mary C.; Mehl, S.

    2004-01-01

    Most surface flows to the 20-km-long Maggia Valley in Southern Switzerland are impounded, and the valley is being investigated to determine environmental flow requirements. The aim of the investigation is the development of a modelling framework that simulates the dynamics of the groundwater, hydrologic, and ecologic systems. Because of the multi-scale nature of the modelling framework, large-scale models are first developed to provide the boundary conditions for more detailed models of reaches that are of ecological importance. We describe here the initial (large-scale) groundwater/surface-water model and its calibration in relation to initial and boundary conditions. A MODFLOW-2000 model was constructed to simulate the interaction of groundwater and surface water and was developed parsimoniously to avoid modelling artefacts and parameter inconsistencies. Model calibration includes two steady-state conditions, with and without recharge to the aquifer from the adjoining hillslopes. Parameters are defined to represent areal recharge, hydraulic conductivity of the aquifer (up to 5 classes), and streambed hydraulic conductivity. Model performance was investigated for two system representations. The first representation assumed unknown flow input at the northern end of the groundwater domain and unknown lateral inflow. The second representation used simulations of the lateral flow obtained by means of a raster-based, physically oriented rainfall-runoff (R-R) model that is continuous in time. Results based on these two representations are compared and discussed.

  1. Real-time prediction of rain-triggered lahars: incorporating seasonality and catchment recovery

    NASA Astrophysics Data System (ADS)

    Jones, Robbie; Manville, Vern; Peakall, Jeff; Froude, Melanie J.; Odbert, Henry M.

    2017-12-01

    Rain-triggered lahars are a significant secondary hydrological and geomorphic hazard at volcanoes where unconsolidated pyroclastic material produced by explosive eruptions is exposed to intense rainfall, often occurring for years to decades after the initial eruptive activity. Previous studies have shown that secondary lahar initiation is a function of rainfall parameters, source material characteristics and time since eruptive activity. In this study, probabilistic rain-triggered lahar forecasting models are developed using the lahar occurrence and rainfall record of the Belham River valley at the Soufrière Hills volcano (SHV), Montserrat, collected between April 2010 and April 2012. In addition to the use of peak rainfall intensity (PRI) as a base forecasting parameter, considerations for the effects of rainfall seasonality and catchment evolution upon the initiation of rain-triggered lahars and the predictability of lahar generation are also incorporated into these models. Lahar probability increases with peak 1 h rainfall intensity throughout the 2-year dataset and is higher under given rainfall conditions in year 1 than year 2. The probability of lahars is also enhanced during the wet season, when large-scale synoptic weather systems (including tropical cyclones) are more common and antecedent rainfall and thus levels of deposit saturation are typically increased. The incorporation of antecedent conditions and catchment evolution into logistic-regression-based rain-triggered lahar probability estimation models is shown to enhance model performance and displays the potential for successful real-time prediction of lahars, even in areas featuring strongly seasonal climates and temporal catchment recovery.
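
    A logistic-regression trigger model of the kind described can be sketched as follows. The predictors, coefficients, and data are synthetic stand-ins, not the fitted Belham valley model:

```python
import numpy as np

# Synthetic "lahar record": peak rainfall intensity (PRI, mm/h) and a
# wet-season indicator, with occurrence generated from a known logistic
# relation so the fit has something to recover. Illustrative only.
rng = np.random.default_rng(1)
n = 500
pri = rng.exponential(10.0, n)
wet = rng.integers(0, 2, n).astype(float)   # 1 = wet season
true_logit = -4.0 + 0.25 * pri + 1.0 * wet
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# plain gradient ascent on the logistic log-likelihood
X = np.column_stack([np.ones(n), pri, wet])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.001 * X.T @ (y - p) / n

# estimated lahar probability for a 30 mm/h storm in the wet season
p_wet = float(1.0 / (1.0 + np.exp(-(beta @ np.array([1.0, 30.0, 1.0])))))
```

    Catchment recovery can be represented the same way, e.g. by adding time-since-eruption or antecedent-rainfall columns to the predictor matrix, which is essentially how the abstract's year-1 versus year-2 contrast enters the model.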

  2. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in comparing distributions of the small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and of Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters, such as the initial period and magnetic field distributions and the radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program, and the NASA Fermi Guest Investigator Program.
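
    The Poisson likelihood used to compare binned counts of detected pulsars can be written compactly; the toy counts below are illustrative:

```python
import numpy as np
from math import lgamma

def poisson_log_likelihood(observed, predicted):
    """Log-likelihood of observed histogram counts n given model-predicted
    means m: ln L = sum(n ln m - m - ln n!). Appropriate when bins hold
    small numbers of detections, where chi-square statistics break down."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    log_fact = np.array([lgamma(n + 1.0) for n in observed])
    return float(np.sum(observed * np.log(predicted) - predicted - log_fact))

# toy comparison: the parameter set whose predicted counts track the
# observed bins more closely has the higher log-likelihood
obs = [3, 0, 1, 7]
ll_good = poisson_log_likelihood(obs, [2.8, 0.4, 1.2, 6.5])
ll_bad = poisson_log_likelihood(obs, [7.0, 3.0, 0.2, 1.0])
```

    A grid search or MCMC sampler then simply evaluates this log-likelihood, summed over all binned radio and gamma-ray distributions, at each candidate parameter set.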

  3. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  4. Thermo-mechanical models of obduction applied to the Oman ophiolite

    NASA Astrophysics Data System (ADS)

    Thibault, Duretz; Philippe, Agard; Philippe, Yamato; Céline, Ducassou; Taras, Gerya; Evguenii, Burov

    2015-04-01

    During obduction, regional-scale fragments of oceanic lithosphere (ophiolites) are emplaced somewhat enigmatically on top of lighter continental lithosphere. We herein use two-dimensional thermo-mechanical models to investigate the feasibility and controlling parameters of obduction. The models are designed using available geological data from the Oman (Semail) ophiolite. Initial and boundary conditions are constrained by plate kinematic and geochronological data, and modeling results are validated against petrological and structural observations. The reference model consists of three distinct stages: (1) initiation of oceanic subduction away from the Arabian margin, (2) emplacement of the Oman ophiolite atop the Arabian margin, and (3) dome-like exhumation of the subducted Arabian margin beneath the overlying ophiolite. A parametric study suggests that 350-400 km of shortening best fits both the peak P-T conditions of the subducted margin (1.5-2.5 GPa / 450-600°C) and the dimensions of the ophiolite (~170 km width), in agreement with previous estimates. Our results further confirm that the locus of obduction initiation is close to the eastern edge of the Arabian margin (~100 km) and indicate that obduction is facilitated by a strong continental basement rheology.

  5. Effect of Initial Stress on the Dynamic Response of a Multi-Layered Plate-Strip Subjected to an Arbitrary Inclined Time-Harmonic Force

    NASA Astrophysics Data System (ADS)

    Daşdemir, A.

    2017-08-01

    The forced vibration of a multi-layered plate-strip with initial stress, resting on a rigid foundation and subjected to an arbitrary inclined time-harmonic force, is considered. Within the framework of the piecewise homogeneous body model, with the use of the three-dimensional linearized theory of elastic waves in initially stressed bodies (TLTEWISB), a mathematical model is presented for the plane-strain state. It is assumed that there is complete contact interaction at the interfaces between the layers and that the materials of the layers are linearly elastic, homogeneous and isotropic. The governing system of partial differential equations of motion for the considered problem is solved approximately by the finite element method (FEM). The influence of the initial stress parameter on the dynamic response of the plate-strip is then presented.

  6. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller that accommodates unknown initial errors is presented for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Unlike traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combination of neural back-stepping and the minimal learning parameter (MLP) technique is employed to derive a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlights are that the transient performance of the velocity and altitude tracking errors is satisfactory and that the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
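
    A typical prescribed-performance envelope can be sketched as follows; the values are illustrative, and the paper's modified performance function for unknown initial errors is not reproduced here:

```python
import numpy as np

# Classic PPC performance function: the tracking error e(t) is required
# to stay inside +/- rho(t), where
#   rho(t) = (rho0 - rho_inf) * exp(-l * t) + rho_inf
# decays from an initial bound rho0 to a steady-state bound rho_inf at
# rate l. Traditional PPC needs |e(0)| < rho0, i.e. a known initial error.
rho0, rho_inf, l = 2.0, 0.1, 1.5   # illustrative values

def rho(t):
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

t = np.linspace(0.0, 5.0, 501)
envelope = rho(t)   # monotonically shrinking error bound
```

    The dependence on |e(0)| < ρ(0) is exactly what the paper's new performance function is designed to remove.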

  7. Parameters of Glucose and Lipid Metabolism Affect the Occurrence of Colorectal Adenomas Detected by Surveillance Colonoscopies

    PubMed Central

    Kim, Nam Hee; Suh, Jung Yul; Park, Jung Ho; Park, Dong Il; Cho, Yong Kyun; Sohn, Chong Il; Choi, Kyuyong

    2017-01-01

    Purpose: Limited data are available regarding the associations between parameters of glucose and lipid metabolism and the occurrence of metachronous adenomas. We investigated whether these parameters affect the occurrence of adenomas detected on surveillance colonoscopy. Materials and Methods: This longitudinal study was performed on 5289 subjects who underwent follow-up colonoscopy between 2012 and 2013 among 62171 asymptomatic subjects who underwent an initial colonoscopy for a health check-up between 2010 and 2011. The risk of adenoma occurrence was assessed using Cox proportional hazards modeling. Results: The mean interval between the initial and follow-up colonoscopy was 2.2±0.6 years. The occurrence of adenomas detected by the follow-up colonoscopy increased linearly with the increasing quartiles of fasting glucose, hemoglobin A1c (HbA1c), insulin, homeostasis model assessment of insulin resistance (HOMA-IR), and triglycerides measured at the initial colonoscopy. These associations persisted after adjusting for confounding factors. The adjusted hazard ratios for adenoma occurrence comparing the fourth with the first quartiles of fasting glucose, HbA1c, insulin, HOMA-IR, and triglycerides were 1.50 [95% confidence interval (CI), 1.26–1.77; ptrend<0.001], 1.22 (95% CI, 1.04–1.43; ptrend=0.024), 1.22 (95% CI, 1.02–1.46; ptrend=0.046), 1.36 (95% CI, 1.14–1.63; ptrend=0.004), and 1.19 (95% CI, 0.99–1.42; ptrend=0.041), respectively. In addition, increasing quartiles of low-density lipoprotein-cholesterol and apolipoprotein B were associated with an increasing occurrence of adenomas. Conclusion: The levels of parameters of glucose and lipid metabolism were significantly associated with the occurrence of adenomas detected on surveillance colonoscopy. Improving the parameters of glucose and lipid metabolism through lifestyle changes or medications may be helpful in preventing metachronous adenomas. PMID:28120565

  8. Richards-like two species population dynamics model.

    PubMed

    Ribeiro, Fabiano; Cabella, Brenno Caetano Troca; Martinez, Alexandre Souto

    2014-12-01

    The two-species population dynamics model is the simplest paradigm of inter- and intra-species interaction. Here, we present a generalized Lotka-Volterra model with intraspecific competition that recovers some well-known models as particular cases. The generalization parameter is related to the species' habitat dimensionality and their interaction range. Contrary to standard models, the species coupling parameters are general and not restricted to non-negative values; they may therefore represent different ecological regimes, which are derived from a stability analysis of the asymptotic solutions and are represented in a phase diagram. In this diagram, we have identified a forbidden region in the mutualism regime, and a survival/extinction transition that depends on initial conditions in the competition regime. We also shed light on two types of predation and competition: weak, if the species coexist, or strong, if at least one species goes extinct.
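
    A minimal numerical sketch of the two-species dynamics, using standard Lotka-Volterra competition with intraspecific terms (the paper's Richards-like generalization parameter is set to the standard case, and the coefficients are illustrative):

```python
# Two-species competition with intraspecific self-limitation:
#   dx/dt = x (1 - x - a*y),   dy/dt = y (1 - y - b*x)
# For weak competition (a, b < 1) the species coexist at the fixed point
# ((1-a)/(1-ab), (1-b)/(1-ab)); for strong competition one goes extinct.
def simulate(x0, y0, a, b, dt=0.01, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        x += dt * x * (1 - x - a * y)
        y += dt * y * (1 - y - b * x)
    return x, y

# weak-competition regime: coexistence at (2/3, 2/3) for a = b = 0.5
x, y = simulate(0.1, 0.1, a=0.5, b=0.5)
```

    Allowing a or b to be negative, as the abstract describes, turns the same equations into predation or mutualism regimes, which is where the forbidden region and the initial-condition-dependent transition appear.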

  9. Nickel(II) biosorption by Rhodotorula glutinis.

    PubMed

    Suazo-Madrid, Alicia; Morales-Barrera, Liliana; Aranda-García, Erick; Cristiani-Urbina, Eliseo

    2011-01-01

    The present study reports the feasibility of using Rhodotorula glutinis biomass as an alternative low-cost biosorbent to remove Ni(II) ions from aqueous solutions. Acetone-pretreated R. glutinis cells showed higher Ni(II) biosorption capacity than untreated cells at pH values ranging from 3 to 7.5, with an optimum pH of 7.5. The effects of other relevant environmental parameters, such as initial Ni(II) concentration, shaking contact time and temperature, on Ni(II) biosorption onto acetone-pretreated R. glutinis were evaluated. Significant enhancement of Ni(II) biosorption capacity was observed by increasing initial metal concentration and temperature. Kinetic studies showed that the kinetic data were best described by a pseudo-second-order kinetic model. Among the two-, three-, and four-parameter isotherm models tested, the Fritz-Schluender model exhibited the best fit to experimental data. Thermodynamic parameters (activation energy, and changes in activation enthalpy, activation entropy, and free energy of activation) revealed that the biosorption of Ni(II) ions onto acetone-pretreated R. glutinis biomass is an endothermic and non-spontaneous process, involving chemical sorption with weak interactions between the biosorbent and Ni(II) ions. The high sorption capacity (44.45 mg g(-1) at 25°C, and 63.53 mg g(-1) at 70°C) exhibited by acetone-pretreated R. glutinis biomass places this biosorbent among the best adsorbents currently available for removal of Ni(II) ions from aqueous effluents.
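
    The pseudo-second-order kinetic fit mentioned in the abstract can be sketched with synthetic data (the values below are illustrative, not the measured Ni(II) data):

```python
import numpy as np

# Pseudo-second-order sorption kinetics, dq/dt = k (qe - q)^2, has the
# closed form q(t) = qe^2 k t / (1 + qe k t) and linearizes as
#   t/q = 1/(k qe^2) + t/qe,
# so qe and k follow from a straight-line fit of t/q against t.
qe_true, k_true = 44.45, 0.004      # mg/g and g/(mg min), illustrative
t = np.linspace(5, 300, 25)         # contact times, min
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)

slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope                # equilibrium uptake from the slope
k_fit = slope**2 / intercept        # since intercept = 1/(k qe^2)
```

    With real data the quality of this straight line (and comparison against pseudo-first-order and other kinetic forms) is what identifies the best-fitting rate law.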

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, Christopher J.; Freels, James D.; Hobbs, Randy W.

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporates initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point.
The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs that were necessary due to the limited fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.

  11. Kinetics of mercuric chloride retention by soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amacher, M.C.; Selim, H.M.; Iskandar, I.K.

    A nonlinear multireaction model was used to describe kinetic data for HgCl2 retention by five soils. A three-parameter version of the model consisting of a reversible nonlinear (nth order, n < 1) reaction and an irreversible first-order reaction was capable of describing HgCl2 retention data for Cecil (clayey, kaolinitic, thermic Typic Kanhapludult) and Windsor (mixed, mesic Typic Udipsamment) soils at all initial solution Hg concentrations, and data for Norwood (fine-silty, mixed (calcareous), thermic Typic Udifluvent), Olivier (fine-silty, mixed, thermic Aquic Fragiudalt), and Sharkey (very-fine, montmorillonitic, nonacid, thermic Vertic Haplaquept) soils at initial solution Hg concentrations below 5 mg/L. A five-parameter version of the model, with an added reversible nonlinear reaction, provided a more accurate description of the retention data for the Norwood, Olivier, and Sharkey soils at initial solution Hg concentrations above 5 mg/L. The second reaction needed to describe the data at higher Hg concentrations suggests the presence of a second type of sorption sites, or a precipitation or coprecipitation reaction not encountered at lower Hg concentrations. Release of Hg from the soils was induced by serial dilution of the soil solution, but not all the soil Hg was reversibly retained. This was also indicated by the model. Release of soil Hg depended on the concentration of retained Hg, with significant Hg release occurring only at high concentrations of retained Hg. A multireaction model is needed to describe Hg retention in soils because of the many solid phases that can remove Hg from solution.
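
    The three-parameter version of the multireaction model can be sketched as a small ODE system integrated explicitly; the rate constants below are illustrative, not the fitted soil values:

```python
# Three-parameter multireaction model: a reversible nonlinear reaction
# (order n < 1) plus an irreversible first-order sink,
#   dS1/dt = kf * C^n - kb * S1,   dS2/dt = ki * C,
# with mass balance C + S1 + S2 = C0 (C = solution Hg, S1/S2 = retained Hg).
kf, kb, ki, n = 0.05, 0.01, 0.005, 0.7   # illustrative rate constants
c0 = 5.0                                 # initial solution Hg, mg/L
c, s1, s2 = c0, 0.0, 0.0
dt = 0.1
history = []
for _ in range(5000):                    # explicit Euler integration
    r_rev = kf * c**n - kb * s1          # net reversible retention rate
    r_irr = ki * c                       # irreversible retention rate
    c += dt * (-r_rev - r_irr)
    s1 += dt * r_rev
    s2 += dt * r_irr
    history.append(c)
```

    Diluting the solution (reducing C while keeping S1 and S2) makes the reversible term release Hg back, while the S2 pool stays fixed, which mirrors the partial reversibility observed in the desorption experiments.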

  12. Cellular level models as tools for cytokine design.

    PubMed

    Radhakrishnan, Mala L; Tidor, Bruce

    2010-01-01

    Cytokines and growth factors are critical regulators that connect intracellular and extracellular environments through binding to specific cell-surface receptors. They regulate a wide variety of immunological, growth, and inflammatory response processes. The overall signal initiated by a population of cytokine molecules over long time periods is controlled by the subtle interplay of binding, signaling, and trafficking kinetics. Building on the work of others, we abstract a simple kinetic model that captures relevant features from cytokine systems as well as related growth factor systems. We explore a large range of potential biochemical behaviors, through systematic examination of the model's parameter space. Different rates for the same reaction topology lead to a dramatic range of biochemical network properties and outcomes. Evolution might productively explore varied and different portions of parameter space to create beneficial behaviors, and effective human therapeutic intervention might be achieved through altering network kinetic properties. Quantitative analysis of the results reveals the basis for tensions among a number of different network characteristics. For example, strong binding of cytokine to receptor can increase short-term receptor activation and signal initiation but decrease long-term signaling due to internalization and degradation. Further analysis reveals the role of specific biochemical processes in modulating such tensions. For instance, the kinetics of cytokine binding and receptor activation modulate whether ligand-receptor dissociation can generally occur before signal initiation or receptor internalization. Beyond analysis, the same models and model behaviors provide an important basis for the design of more potent cytokine therapeutics by providing insight into how binding kinetics affect ligand potency. (c) 2010 American Institute of Chemical Engineers

  13. Utilization of unconventional lignocellulosic waste biomass for the biosorption of toxic triphenylmethane dye malachite green from aqueous solution.

    PubMed

    Selvasembian, Rangabhashiyam; P, Balasubramanian

    2018-05-12

    The biosorption potential of the novel lignocellulosic biosorbents Musa sp. peel (MSP) and Aegle marmelos shell (AMS) was investigated for the removal of the toxic triphenylmethane dye malachite green (MG) from aqueous solution. Batch experiments were performed to study the biosorption characteristics of malachite green onto the lignocellulosic biosorbents as a function of initial solution pH, initial malachite green concentration, biosorbent dosage, and temperature. Biosorption equilibrium data were fitted to two- and three-parameter isotherm models; the three-parameter models described the equilibrium data better. The maximum monolayer biosorption capacities obtained using the Langmuir model for MG removal by MSP and AMS were 47.61 and 18.86 mg/g, respectively. The biosorption kinetic data were analyzed using pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models. The pseudo-second-order kinetic model best fitted the experimental data, indicating that MG biosorption by MSP and AMS is a chemisorption process. The removal of MG using AMS was found to be highly dependent on the process temperature. The removal efficiency of MG declined at higher concentrations of NaCl and CaCl2. The regeneration test of the biosorbents for MG removal was successful for up to three cycles.
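The two fitted forms named above can be written down directly. In this sketch qm uses the reported MSP capacity (47.61 mg/g), while the Langmuir constant kl, the equilibrium uptake qe and the rate constant k2 are hypothetical placeholders, not the study's fitted values.

```python
def langmuir(ce, qm=47.61, kl=0.3):
    """Langmuir isotherm: uptake q_e (mg/g) at equilibrium concentration C_e (mg/L).
    qm is the reported MSP monolayer capacity; kl is a hypothetical constant."""
    return qm * kl * ce / (1.0 + kl * ce)

def pseudo_second_order(t, qe=40.0, k2=0.01):
    """Integrated pseudo-second-order kinetics: q_t = k2*qe^2*t / (1 + k2*qe*t).
    qe and k2 are illustrative values only."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)
```

Both expressions saturate: langmuir(ce) approaches qm as C_e grows, and pseudo_second_order(t) approaches qe as t grows, which is why these forms suit monolayer capacity and chemisorption-limited uptake.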

  14. Modelling of fluoride removal via batch monopolar electrocoagulation process using aluminium electrodes

    NASA Astrophysics Data System (ADS)

    Amri, N.; Hashim, M. I.; Ismail, N.; Rohman, F. S.; Bashah, N. A. A.

    2017-09-01

    Electrocoagulation (EC) is a promising technology that is extensively used to remove fluoride ions efficiently from industrial wastewater. However, the mechanism and the factors affecting the fluoride removal process have received very little consideration. In order to determine the efficiency of fluoride removal in the EC process, the effects of operating parameters such as voltage and electrolysis time were investigated in this study. A batch experiment with monopolar aluminium electrodes was conducted to identify a model of fluoride removal using an empirical model equation. The EC process was investigated over a range of voltages (3-12 V) and electrolysis times (0-60 minutes) at a constant initial fluoride concentration of 25 mg/L. The results show that the fluoride removal efficiency increased steadily with increasing voltage and electrolysis time. The best fluoride removal efficiency obtained was 94.8% at a 25 mg/L initial fluoride concentration, a voltage of 12 V and 60 minutes of electrolysis time. The results indicated that the rate constant k and the reaction order n decreased as the voltage increased. The rate model of fluoride removal was developed from the empirical model equation using the correlation of k and n. Overall, the results showed that the EC process can be considered a potential alternative technology for fluoride removal from wastewater.
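An empirical k-n rate form of the kind mentioned above has a closed-form solution for nth-order decay. The sketch below uses the study's 25 mg/L initial concentration, but k and n are illustrative assumptions, not the fitted values.

```python
def fluoride_conc(t, c0=25.0, k=0.12, n=1.4):
    """C(t) solving dC/dt = -k * C**n for n != 1.
    c0 matches the study's initial concentration; k and n are illustrative."""
    return (c0 ** (1.0 - n) + (n - 1.0) * k * t) ** (1.0 / (1.0 - n))

# fractional removal efficiency after 60 minutes of electrolysis
removal_60min = 1.0 - fluoride_conc(60.0) / 25.0
```

With these assumed constants the removal efficiency climbs toward one as electrolysis time increases, qualitatively matching the trend reported above.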

  15. Bivalves: From individual to population modelling

    NASA Astrophysics Data System (ADS)

    Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Ruardij, P.

    2014-11-01

    An individual-based population model for bivalves was designed, built and tested in a 0D approach, to simulate the population dynamics of a mussel bed located in an intertidal area. The processes at the individual level were simulated following the dynamic energy budget theory, whereas initial egg mortality, background mortality, food competition, and predation (including cannibalism) were additional population processes. Model properties were studied through the analysis of theoretical scenarios and by simulation of different mortality parameter combinations in a realistic setup, imposing environmental measurements. Realistic criteria were applied to narrow down the possible combinations of parameter values. Field observations obtained in the long-term, multi-station monitoring program were compared with the model scenarios. The realistically selected modeling scenarios were able to reasonably reproduce the timing of some peaks in individual abundance in the mussel bed and its size distribution, but the number of individuals was not well predicted. The results suggest that mortality in the early life stages (egg and larvae) plays an important role in population dynamics, whether through initial egg mortality, larval dispersion, settlement failure or shrimp predation. Future steps include the coupling of the population model with a hydrodynamic and biogeochemical model to improve the simulation of egg/larvae dispersion, settlement probability and food transport, and also to simulate the feedback of the organisms' activity on the water column properties, which will result in an improved characterization of food quantity and quality.

  16. Mobile application MDDCS for modeling the expansion dynamics of a dislocation loop in FCC metals

    NASA Astrophysics Data System (ADS)

    Kirilyuk, Vasiliy; Petelin, Alexander; Eliseev, Andrey

    2017-11-01

    A mobile version of the software package Dynamic Dislocation of Crystallographic Slip (MDDCS) designed for modeling the expansion dynamics of dislocation loops and formation of a crystallographic slip zone in FCC-metals is examined. The paper describes the possibilities for using MDDCS, the application interface, and the database scheme. The software has a simple and intuitive interface and does not require special training. The user can set the initial parameters of the experiment, carry out computational experiments, export parameters and results of the experiment into separate text files, and display the experiment results on the device screen.

  17. Clusters of poverty and disease emerge from feedbacks on an epidemiological network.

    PubMed

    Pluciński, Mateusz M; Ngonghala, Calistus N; Getz, Wayne M; Bonds, Matthew H

    2013-03-06

    The distribution of health conditions is characterized by extreme inequality. These disparities have been alternately attributed to disease ecology and the economics of poverty. Here, we provide a novel framework that integrates epidemiological and economic growth theory on an individual-based hierarchically structured network. Our model indicates that, under certain parameter regimes, feedbacks between disease ecology and economics create clusters of low income and high disease that can stably persist in populations that become otherwise predominantly rich and free of disease. Surprisingly, unlike traditional poverty trap models, these localized disease-driven poverty traps can arise despite homogeneity of parameters and evenly distributed initial economic conditions.

  18. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
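The parameter-recovery idea above can be caricatured with a toy Metropolis-Hastings chain: recover a single "rotation speed" parameter from 500 noisy synthetic observations. The truth (240 km/s), noise level, starting point and proposal width are hypothetical stand-ins, not values from the paper.

```python
import math
import random

random.seed(0)

# Synthetic "observations" of one dynamical parameter from 500 sources
truth, sigma, n_src = 240.0, 10.0, 500
data = [random.gauss(truth, sigma) for _ in range(n_src)]
s, n = sum(data), len(data)

def log_like(theta):
    # Gaussian log-likelihood up to an additive constant, via sufficient statistics
    return -(n * theta**2 - 2.0 * theta * s) / (2.0 * sigma**2)

theta, samples = 200.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 2.0)          # random-walk proposal
    if math.log(random.random()) < log_like(prop) - log_like(theta):
        theta = prop                               # Metropolis accept
    samples.append(theta)

post_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

With 500 sources the posterior is tight (standard error sigma/sqrt(n) ~ 0.45), which is the same scaling that gives the ~1% accuracy on R0 and Θ0 quoted above.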

  19. Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition

    PubMed Central

    Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen

    2018-01-01

    Underwater acoustic target recognition based on ship-radiated noise is a small-sample-size recognition problem. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) a standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method achieves a classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642

  20. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix, containing the derivatives of the observed data with respect to the model parameters, is computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values begins, in order to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, calculated using the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. 
Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
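The Gauss-Marquardt-Levenberg loop described above can be sketched end to end on a toy problem. Here a two-parameter exponential stands in for a GPRMax forward run; the damping factor, finite-difference step and the synthetic "true" parameters are all assumptions for illustration.

```python
import math

def forward(p, xs):
    # toy forward model standing in for a full GPR simulation
    a, b = p
    return [a * math.exp(-b * x) for x in xs]

def lm_step(p, xs, obs, lam=1e-2, h=1e-6):
    """One Gauss-Marquardt-Levenberg update: finite-difference Jacobian,
    then solve (J^T J + lam*I) dp = J^T r for the 2-parameter case."""
    f0 = forward(p, xs)
    r = [o - f for o, f in zip(obs, f0)]           # residuals (objective terms)
    J = []                                         # rows = parameters
    for j in range(2):
        pj = list(p)
        pj[j] += h
        fj = forward(pj, xs)
        J.append([(b - a) / h for a, b in zip(f0, fj)])
    A = [[sum(J[i][k] * J[j][k] for k in range(len(xs))) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    g = [sum(J[i][k] * r[k] for k in range(len(xs))) for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    dp = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
          (A[0][0] * g[1] - A[1][0] * g[0]) / det]
    return [p[0] + dp[0], p[1] + dp[1]]

xs = [0.1 * i for i in range(20)]
obs = forward([2.0, 1.5], xs)      # synthetic "true" data, noise-free
p = [1.0, 1.0]                     # initial model
for _ in range(50):
    p = lm_step(p, xs, obs)
```

In PEST the same structure applies, only with the forward model replaced by an external simulator call and the damping factor lam adapted between iterations.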

  1. Quantitative model validation of manipulative robot systems

    NASA Astrophysics Data System (ADS)

    Kartowisastro, Iman Herwidiana

    This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the common visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, with all links assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. A conventional feedback control system is used in developing the model. The system's response to parameter changes is investigated, since some parameters are redundant. This work is important so that the most significant parameters to be distorted can be selected, and it leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data, due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and a better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. 
Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system, due to the high nonlinearity inherent in the robot manipulator.

  2. Fallback disks & magnetars: prospects & possibilities

    NASA Astrophysics Data System (ADS)

    Alpar, M. A.

    Some bound matter in the form of a fallback disk may be an initial parameter of isolated neutron stars at birth, which, along with the initial rotation rate and the dipole and higher-multipole magnetic moments, determines the evolution of neutron stars and the categories into which they fall. This talk reviews the strengths and difficulties of fallback disk models in explaining the properties of isolated neutron stars of different categories. Evidence for, and observational limits on, fallback disks will also be discussed.

  3. RF models for plasma-surface interactions

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Smithe, David; Lin, Ming-Chieh; Kruger, Scott; Stoltz, Peter

    2013-09-01

    Computational models for DC and oscillatory (RF-driven) sheath potentials, arising at metal or dielectric-coated surfaces in contact with plasma, are developed within the VSim code and applied in parameter regimes characteristic of fusion plasma experiments and plasma processing scenarios. Results from initial studies quantifying the effects of various dielectric wall coating materials and thicknesses on these sheath potentials, as well as on the ensuing flux of plasma particles to the wall, are presented. The developed models are also applied to plasma-facing ICRF antenna structures in the ITER device; we present initial assessments of the efficacy of dielectric-coated antenna surfaces in reducing sputtering-induced high-Z impurity contamination of the fusion reaction. Funded by the U.S. DoE via a Phase I SBIR grant, award DE-SC0009501.

  4. Distributed reacceleration of cosmic rays

    NASA Technical Reports Server (NTRS)

    Wandel, Amri; Eichler, David; Letaw, John R.; Silberberg, Rein; Tsao, C. H.

    1985-01-01

    A model is developed in which cosmic rays, in addition to their initial acceleration by a strong shock, are continuously reaccelerated while propagating through the Galaxy. The equations describing this acceleration scheme are solved analytically and numerically. Solutions for the spectra of primary and secondary cosmic rays are given in closed analytic form, allowing a rapid search in parameter space for viable propagation models with distributed reacceleration included. The observed boron-to-carbon ratio can be reproduced by the reacceleration theory over a range of escape parameters, some of them quite different from the standard leaky-box model. It is also shown that even a very modest amount of reacceleration by strong shocks causes the boron-to-carbon ratio to level off at sufficiently high energies.

  5. Numerical and Experimental Validation of a New Damage Initiation Criterion

    NASA Astrophysics Data System (ADS)

    Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.

    2017-09-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model in which a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode-parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven of the nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are briefly discussed.

  6. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analyses of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty. 
Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.

  7. On Finding and Using Identifiable Parameter Combinations in Nonlinear Dynamic Systems Biology Models and COMBOS: A Novel Web Implementation

    PubMed Central

    DiStefano, Joseph

    2014-01-01

    Parameter identifiability problems can plague biomodelers when they reach the quantification stage of development, even for relatively simple models. Structural identifiability (SI) is the primary question, usually understood as knowing which of P unknown biomodel parameters p1, …, pi, …, pP are, and which are not, quantifiable in principle from particular input-output (I-O) biodata. It is not widely appreciated that the same database also can provide quantitative information about the structurally unidentifiable (not quantifiable) subset, in the form of explicit algebraic relationships among the unidentifiable pi. Importantly, this is a first step toward finding what else is needed to quantify particular unidentifiable parameters of interest from new I-O experiments. We further develop, implement and exemplify novel algorithms that address and solve the SI problem for a practical class of ordinary differential equation (ODE) systems biology models, as a user-friendly and universally accessible web application (app), COMBOS. Users provide the structural ODE and output measurement models in one of two standard forms to a remote server via their web browser. COMBOS provides a list of uniquely and non-uniquely SI model parameters and, importantly, the combinations of parameters that are not individually SI. If non-uniquely SI, it also provides the maximum number of different solutions, with important practical implications. The behind-the-scenes symbolic differential algebra algorithms are based on computing Gröbner bases of model attributes established after some algebraic transformations, using the computer-algebra system Maxima. COMBOS was developed for facile instructional and research use as well as modeling. We use it in the classroom to illustrate SI analysis, and have simplified complex models of tumor suppressor p53 and hormone regulation, based on explicit computation of parameter combinations. It is illustrated and validated here for models of moderate complexity, with and without initial conditions. Built-in examples include unidentifiable 2- to 4-compartment and HIV dynamics models. PMID:25350289

  8. Ignition-and-Growth Modeling of NASA Standard Detonator and a Linear Shaped Charge

    NASA Technical Reports Server (NTRS)

    Oguz, Sirri

    2010-01-01

    The main objective of this study is to quantitatively investigate the ignition and shock sensitivity of NASA Standard Detonator (NSD) and the shock wave propagation of a linear shaped charge (LSC) after being shocked by NSD flyer plate. This combined explosive train was modeled as a coupled Arbitrary Lagrangian-Eulerian (ALE) model with LS-DYNA hydro code. An ignition-and-growth (I&G) reactive model based on unreacted and reacted Jones-Wilkins-Lee (JWL) equations of state was used to simulate the shock initiation. Various NSD-to-LSC stand-off distances were analyzed to calculate the shock initiation (or failure to initiate) and detonation wave propagation along the shaped charge. Simulation results were verified by experimental data which included VISAR tests for NSD flyer plate velocity measurement and an aluminum target severance test for LSC performance verification. Parameters used for the analysis were obtained from various published data or by using CHEETAH thermo-chemical code.

  9. An application of the Continuous Opinions and Discrete Actions (CODA) model to adolescent smoking initiation.

    PubMed

    Sun, Ruoyan; Mendez, David

    2017-01-01

    We investigated the impact of peers' opinions on the smoking initiation process among adolescents. We applied the Continuous Opinions and Discrete Actions (CODA) model to study how social interactions change adolescents' opinions and behaviors about smoking. Through agent-based modeling (ABM), we simulated a population of 2500 adolescents and compared smoking prevalence to data from 9 cohorts of adolescents in the National Survey on Drug Use and Health (NSDUH) from 2001 to 2014. Our model fits the NSDUH data well, with pseudo-R2 values of at least 96%. Optimal parameter values indicate that adolescents exhibit imitator characteristics with regard to smoking opinions. These imitator characteristics suggest that teenagers tend to update their opinions consistently according to what others do, and these opinions later translate into smoking behaviors. As a result, peer influence from social networks plays a major role in the smoking initiation process and should be an important driver in policy formulation.
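The opinion-action loop at the heart of CODA can be caricatured in a few lines: agents hold a continuous opinion but observe only each other's discrete actions. The population size, update step and initial opinion spread below are hypothetical illustrations, not the study's calibrated values.

```python
import random

random.seed(1)

# Minimal CODA-style dynamics: each agent keeps a continuous opinion (log-odds
# that smoking is acceptable) but observes only neighbors' discrete actions.
N, step = 200, 0.3                     # hypothetical population and update size
opinions = [random.uniform(-2.0, 0.5) for _ in range(N)]   # mostly anti-smoking

def action(o):
    return 1 if o > 0 else 0           # discrete action from continuous opinion

for _ in range(2000):
    i, j = random.randrange(N), random.randrange(N)
    if i != j:                          # agent i imitates the observed action of j
        opinions[i] += step if action(opinions[j]) else -step

prevalence = sum(action(o) for o in opinions) / N
```

With these assumed numbers most observed actions are "not smoke", so opinions drift downward on average and prevalence stays low; the imitator character inferred in the study corresponds to agents weighting such observed actions heavily.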

  10. Hypervelocity Impact Initiation of Explosive Transfer Lines

    NASA Technical Reports Server (NTRS)

    Bjorkman, Michael D.; Christiansen, Eric L.

    2012-01-01

    The Gemini, Apollo and Space Shuttle spacecraft utilized explosive transfer lines (ETL) in a number of applications. In each case the ETL was located behind substantial structure and the risk of impact initiation by micrometeoroids and orbital debris was negligible. A current NASA program is considering an ETL to synchronize the actuation of pyrobolts to release 12 capture latches in a contingency. The space constraints require placing the ETL 50 mm below the 1 mm thick 2024-T72 Whipple shield. The proximity of the ETL to the thin shield prompted analysts at NASA to perform a scoping analysis with a finite-difference hydrocode to calculate impact parameters that would initiate the ETL. The results suggest testing is required and a 12 shot test program with surplused Shuttle ETL is scheduled for February 2012 at the NASA White Sands Test Facility. Explosive initiation models are essential to the analysis and one exists in the CTH library for HNS I, but not the HNS II used in the Shuttle 2.5 gr/ft rigid shielded mild detonating cord (SMDC). HNS II is less sensitive than HNS I so it is anticipated that these results using the HNS I model are conservative. Until the hypervelocity impact test results are available, the only check on the analysis was comparison with the Shuttle qualification test result that a 22 long bullet would not initiate the SMDC. This result was reproduced by the hydrocode simulation. Simulations of the direct impact of a 7 km/s aluminum ball, impacting at 0 degree angle of incidence, onto the SMDC resulted in a 1.5 mm diameter ball initiating the SMDC and 1.0 mm ball failing to initiate it. Where one 1.0 mm ball could not initiate the SMDC, a cluster of six 1.0 mm diameter aluminum balls striking simultaneously could. Thus the impact parameters that will result in initiating SMDC located behind a Whipple shield will depend on how well the shield fragments the projectile and spreads the fragments. 
An end-to-end simulation of the impact of an aluminum ball onto a Whipple shield covering SMDC is problematic due to the hydrocode fracture models. Regardless, two simulations were performed resulting in a 5 mm ball initiating the SMDC and a 4 mm ball failing to initiate the SMDC.

  11. A Bayesian approach to the modelling of α Cen A

    NASA Astrophysics Data System (ADS)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem: estimating the parameters of models for the stellar structure and evolution, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated, in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model, both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.

  12. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    PubMed

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

    Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms, they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels, and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet process Gaussian mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software (contact: konstantinos.koutroumpas@ecp.fr). © The Author 2016. Published by Oxford University Press. All rights reserved.
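As a baseline for intuition, plain ABC rejection, the simplest relative of the ABC-SMC scheme discussed above, fits in a few lines. The toy model, flat prior range and tolerance eps are assumptions for illustration, unrelated to the Wnt pathway model.

```python
import random

random.seed(2)

observed_mean = 3.0                    # observed summary statistic (toy data)
eps = 0.2                              # acceptance tolerance

def simulate(theta):
    # toy stochastic model: mean of 50 Gaussian draws centered on theta
    return sum(random.gauss(theta, 1.0) for _ in range(50)) / 50

accepted = []
while len(accepted) < 200:
    theta = random.uniform(0.0, 10.0)  # draw from a flat prior
    if abs(simulate(theta) - observed_mean) < eps:
        accepted.append(theta)         # keep draws whose simulated data match

posterior_mean = sum(accepted) / len(accepted)
```

ABC-SMC improves on this baseline by propagating the accepted set through a sequence of shrinking tolerances using transition kernels (the DPM kernels above), rather than re-drawing blindly from the prior at every step.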

  13. Explosive Model Tarantula V1/JWL++ Calibration of LX-17: #2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Vitello, P

    2009-05-01

    Tarantula V1 is a kinetic package for reactive flow codes that seeks to describe initiation, failure, dead zones, and detonation simultaneously. The most important parameter is P1, the pressure between the initiation and failure regions. Both dead zone formation and failure can be largely controlled with this knob. However, V1 produces failure at low settings and dead zones at higher settings, so it cannot fulfill its purpose in the current format. To this end, V2 is under test. The derivation of the initiation threshold P0 is discussed. The derivation of the initiation pressure-tau curve as an output of Tarantula shows that the initiation package is sound. A desensitization package is also considered.

  14. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    NASA Astrophysics Data System (ADS)

    Wałęga, Andrzej; Młyński, Dariusz; Wachulec, Katarzyna

    2017-12-01

    The aim of the study was to assess the applicability of asymptotic functions for determining the value of the CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, a left tributary of the Vistula, and the Kamienica, a right tributary of the Dunajec. The input material included data on precipitation and flows for the multi-year period 1980-2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of the CNobs parameter as a function of precipitation depth: the standard Hawkins model and the 2-CN model, which allows for the heterogeneous nature of a catchment area. The analyses confirmed that asymptotic functions properly describe the P-CNobs relationship over the entire range of precipitation variability. In the case of high rainfalls, CNobs remained above or below the commonly accepted average antecedent moisture conditions AMCII. The calculations indicated that the runoff amount computed according to the original SCS-CN method might be underestimated, which could adversely affect the values of design flows required for hydraulic engineering projects. In catchments with heterogeneous land cover, the results for CNobs were more accurate when the 2-CN model was used instead of the standard Hawkins model, as the 2-CN model is more precise in accounting for differences in runoff formation depending on the retention capacity of the substrate. It was also demonstrated that the commonly accepted initial abstraction coefficient λ = 0.20 yielded too large an initial loss of precipitation in the analyzed catchments, so the computed direct runoff was underestimated. The best results were obtained for λ = 0.05.
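
    The SCS-CN runoff computation the record refers to is standard and compact enough to sketch. Using the usual metric form of the retention formula, the snippet below shows how lowering λ from 0.20 to 0.05 reduces the initial abstraction and therefore increases the computed direct runoff for the same storm and curve number (the storm depth and CN chosen here are arbitrary examples, not values from the study):

```python
def scs_runoff(P, CN, lam=0.20):
    """Direct runoff depth (mm) from the SCS-CN method.

    P   : precipitation depth, mm
    CN  : curve number (0 < CN <= 100)
    lam : initial abstraction coefficient (0.20 standard; 0.05 per the study)
    """
    S = 25400.0 / CN - 254.0       # potential maximum retention, mm
    Ia = lam * S                   # initial abstraction
    if P <= Ia:
        return 0.0                 # all precipitation lost before runoff starts
    return (P - Ia) ** 2 / (P - Ia + S)

# Same 50 mm storm and CN = 70; only lambda changes.
print(round(scs_runoff(50.0, 70.0), 1))             # lambda = 0.20 -> 5.8
print(round(scs_runoff(50.0, 70.0, lam=0.05), 1))   # lambda = 0.05 -> 12.9
```

    The factor-of-two difference in computed runoff for a single moderate storm illustrates why the choice of λ matters for design flows.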

  15. Simulation of an epidemic model with vector transmission

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Dickman, Ronald

    2015-03-01

    We study a lattice model for vector-mediated transmission of a disease in a population consisting of two species, A and B, which contract the disease from one another. Individuals of species A are sedentary, while those of species B (the vector) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied. We study the static and dynamic critical behavior of the model using initial spreading, initial decay, and quasistationary simulations. Simulations are checked against mean-field analysis. Although phase transitions to an absorbing state fall generically in the directed percolation universality class, this appears not to be the case for the present model.
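
    The mean-field analysis that the lattice simulations are checked against can be sketched by Euler-integrating rate equations for the two infected fractions. The equations, rates, and initial condition below are illustrative assumptions (unit recovery rate for both species, cross-infection only), not the paper's exact formulation:

```python
def mean_field(beta_a, beta_b, dt=0.01, steps=20000):
    # Euler integration of assumed mean-field equations for the infected
    # fractions a (sedentary species A) and b (diffusing vector B).
    # Each species is infected by the other and recovers at unit rate.
    a, b = 0.01, 0.01              # small initial infection
    for _ in range(steps):
        da = beta_a * b * (1.0 - a) - a
        db = beta_b * a * (1.0 - b) - b
        a += dt * da
        b += dt * db
    return a, b

# In this mean-field sketch, beta_a * beta_b = 1 separates the absorbing
# (infection-free) phase from the active phase.
sub = mean_field(0.8, 0.8)   # subcritical: infection dies out
sup = mean_field(2.0, 2.0)   # supercritical: nonzero stationary density
print(sub, sup)
```

    The lattice model's true transition point and critical behavior differ from this mean-field picture, which is exactly why the paper resorts to spreading, decay, and quasistationary simulations.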

  16. Convection- and SASI-driven flows in parametrized models of core-collapse supernova explosions

    DOE PAGES

    Endeve, E.; Cardall, C. Y.; Budiardja, R. D.; ...

    2016-01-21

    We present initial results from three-dimensional simulations of parametrized core-collapse supernova (CCSN) explosions obtained with our astrophysical simulation code General Astrophysical Simulation System (GenASIS). We are interested in nonlinear flows resulting from neutrino-driven convection and the standing accretion shock instability (SASI) in the CCSN environment prior to and during the explosion. By varying parameters in our model that control neutrino heating and shock dissociation, our simulations result in convection-dominated and SASI-dominated evolution. We describe this initial set of simulation results in some detail. To characterize the turbulent flows in the simulations, we compute and compare velocity power spectra from convection-dominated and SASI-dominated (both non-exploding and exploding) models. When compared to SASI-dominated models, convection-dominated models exhibit significantly more power on small spatial scales.

  17. GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY

    PubMed Central

    Jeong, Hyeok; Townsend, Robert

    2010-01-01

    This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833

  18. A novel epidemic spreading model with decreasing infection rate based on infection times

    NASA Astrophysics Data System (ADS)

    Huang, Yunhan; Ding, Li; Feng, Yun

    2016-02-01

    A new epidemic spreading model in which individuals can be infected repeatedly is proposed in this paper. The infection rate decreases according to the number of times an individual has been infected before, a phenomenon that may be caused by immunity or by heightened alertness of individuals. We introduce a new parameter, the decay factor, to quantify the decrease of the infection rate. Through this parameter, our model bridges the Susceptible-Infected-Susceptible (SIS) model and the Susceptible-Infected-Recovered (SIR) model. The proposed model is studied by Monte Carlo simulation. It is found that the initial infection rate has a greater impact on the peak value than the decay factor does. The effect of the decay factor on the final density and the outbreak threshold is dominant, but it weakens significantly when birth and death rates are considered. Besides, simulation results show that the influence of birth and death rates on the final density is non-monotonic in some circumstances.
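
    The decay-factor mechanism can be sketched with a well-mixed Monte Carlo toy model in which an individual who has been infected k times is infected again with probability scaled by decay**k. The population size, rates, and per-step recovery probability below are illustrative assumptions, not the paper's values:

```python
import random

random.seed(2)

def simulate(beta0, decay, n=2000, steps=300):
    # Well-mixed toy model: susceptible individual i with infection count k
    # becomes infected in one step with probability
    #   beta0 * decay**k * (infected fraction);
    # infected individuals recover with probability 0.3 per step.
    # decay = 1 gives SIS-like behaviour; decay -> 0 approaches SIR.
    counts = [0] * n                          # past infections per individual
    infected = set(random.sample(range(n), 20))
    peak = 0
    for _ in range(steps):
        frac = len(infected) / n
        nxt = set()
        for i in range(n):
            if i in infected:
                if random.random() > 0.3:     # stays infected this step
                    nxt.add(i)
            elif random.random() < beta0 * (decay ** counts[i]) * frac:
                counts[i] += 1
                nxt.add(i)
        infected = nxt
        peak = max(peak, len(infected))
    return peak, len(infected)                # (peak size, final density)

sis = simulate(0.8, 1.0)   # decay factor 1: endemic steady state persists
dec = simulate(0.8, 0.3)   # decay factor 0.3: the outbreak burns itself out
print(sis, dec)
```

    With decay < 1 the effective reproduction number falls below one once most individuals have been infected at least once, so the epidemic dies out even though beta0 alone would sustain an endemic state.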

  19. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept with the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize its parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of partial impedance spectra contributed by a resistor, an inductor, a resistor connected in parallel with a capacitor, and a resistor connected in parallel with an inductor. The adequacy of the model is determined by a simple artificial-intelligence function applied to the output of the Levenberg-Marquardt module. By iterating model modifications, the program finds an adequate equivalent-circuit model without any user input.
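
    One of the circuit elements in the record's list, a resistor in series with a parallel resistor-capacitor element, has the closed-form impedance Z(ω) = R1 + R2/(1 + jωR2C). A short sketch evaluates this building block across frequency; the component values are arbitrary, and a real implementation would fit them to measured data with complex least squares:

```python
import math

def circuit_impedance(freqs, R1, R2, C):
    # Impedance of resistor R1 in series with a parallel R2-C element:
    #   Z(w) = R1 + R2 / (1 + j*w*R2*C),   w = 2*pi*f
    out = []
    for f in freqs:
        w = 2.0 * math.pi * f
        out.append(R1 + R2 / (1.0 + 1j * w * R2 * C))
    return out

freqs = [10.0 ** k for k in range(7)]             # 1 Hz .. 1 MHz, decade steps
Z = circuit_impedance(freqs, R1=10.0, R2=100.0, C=1e-6)

# Low frequency: the capacitor is open, so |Z| -> R1 + R2 = 110 ohm.
# High frequency: the capacitor shorts out R2, so |Z| -> R1 = 10 ohm.
print(round(abs(Z[0]), 1), round(abs(Z[-1]), 1))  # 110.0 10.0
```

    Summing several such partial spectra and minimizing the residual against the measured spectrum is the "sum of partial-impedance spectra" hypothesis the algorithm is built on.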

  20. Flocking of the Motsch-Tadmor Model with a Cut-Off Interaction Function

    NASA Astrophysics Data System (ADS)

    Jin, Chunyin

    2018-04-01

    In this paper, we study the flocking behavior of the Motsch-Tadmor model with a cut-off interaction function. Our analysis shows that connectedness is crucial for flocking in this kind of model. We obtain a sufficient condition, imposed only on the model parameters and initial data, that guarantees the connectedness of the neighbor graph associated with the system. We then present a theoretical analysis of flocking and show that the system achieves consensus at an exponential rate.
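
    The Motsch-Tadmor dynamics with a cut-off interaction can be sketched in one dimension: each agent relaxes toward the average velocity of the neighbours within radius r, with the normalisation taken per agent. All numerical values below (radius, time step, initial data) are illustrative, chosen so the neighbour graph stays connected:

```python
def step(x, v, r=2.0, dt=0.05):
    # Motsch-Tadmor update in 1-D with a hard cut-off: agent i averages
    # the velocities of neighbours within distance r (itself included).
    # The per-agent normalisation makes the interaction non-symmetric,
    # which is the key difference from the Cucker-Smale model.
    n = len(x)
    new_v = []
    for i in range(n):
        nbrs = [j for j in range(n) if abs(x[j] - x[i]) < r]
        avg = sum(v[j] for j in nbrs) / len(nbrs)
        new_v.append(v[i] + dt * (avg - v[i]))
    new_x = [x[i] + dt * new_v[i] for i in range(n)]
    return new_x, new_v

# Initial data chosen so the neighbour graph remains connected, as the
# paper's sufficient condition requires.
x = [0.0, 1.0, 2.0, 3.0]
v = [0.8, 0.5, 0.2, 0.4]
for _ in range(2000):
    x, v = step(x, v)

spread = max(v) - min(v)
print(spread < 1e-6)   # velocity consensus reached: flocking
```

    If the initial velocities are spread widely enough that some gap grows past r before the averaging takes hold, the graph disconnects and the clusters flock separately, which is why the sufficient condition couples the parameters to the initial data.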
