Sample records for adjust model parameters

  1. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method.

    PubMed

    Gómez, Fátima Somovilla; Lorza, Rubén Lostado; Bobadilla, Marina Corral; García, Rubén Escribano

    2017-09-21

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffness and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffness and bulges that were obtained from the standard test FE simulations. The first adjustment criteria considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criteria considered stiffness as most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3-L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust.

  3. Improving the Process of Adjusting the Parameters of Finite Element Models of Healthy Human Intervertebral Discs by the Multi-Response Surface Method

    PubMed Central

    Somovilla Gómez, Fátima

    2017-01-01

    The kinematic behavior of models that are based on the finite element method (FEM) for modeling the human body depends greatly on an accurate estimate of the parameters that define such models. This task is complex, and any small difference between the actual biomaterial model and the simulation model based on FEM can be amplified enormously in the presence of nonlinearities. The current paper attempts to demonstrate how a combination of the FEM and the MRS methods with desirability functions can be used to obtain the material parameters that are most appropriate for use in defining the behavior of Finite Element (FE) models of the healthy human lumbar intervertebral disc (IVD). The FE model parameters were adjusted on the basis of experimental data from selected standard tests (compression, flexion, extension, shear, lateral bending, and torsion) and were developed as follows: First, three-dimensional parameterized FE models were generated on the basis of the mentioned standard tests. Then, 11 parameters were selected to define the proposed parameterized FE models. For each of the standard tests, regression models were generated using MRS to model the six stiffness and nine bulges of the healthy IVD models that were created by changing the parameters of the FE models. The optimal combination of the 11 parameters was based on three different adjustment criteria. The latter, in turn, were based on the combination of stiffness and bulges that were obtained from the standard test FE simulations. The first adjustment criteria considered stiffness and bulges to be equally important in the adjustment of FE model parameters. The second adjustment criteria considered stiffness as most important, whereas the third considered the bulges to be most important. The proposed adjustment methods were applied to a medium-sized human IVD that corresponded to the L3–L4 lumbar level with standard dimensions of width = 50 mm, depth = 35 mm, and height = 10 mm. Agreement between the kinematic behavior that was obtained with the optimized parameters and that obtained from the literature demonstrated that the proposed method is a powerful tool with which to adjust healthy IVD FE models when there are many parameters, stiffnesses, and bulges to which the models must adjust. PMID:28934161

  4. Optimal Linking Design for Response Model Parameters

    ERIC Educational Resources Information Center

    Barrett, Michelle D.; van der Linden, Wim J.

    2017-01-01

    Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…

  5. An approach to adjustment of relativistic mean field model parameters

    NASA Astrophysics Data System (ADS)

    Bayram, Tuncay; Akkoyun, Serkan

    2017-09-01

    The Relativistic Mean Field (RMF) model with a small number of adjusted parameters is powerful tool for correct predictions of various ground-state nuclear properties of nuclei. Its success for describing nuclear properties of nuclei is directly related with adjustment of its parameters by using experimental data. In the present study, the Artificial Neural Network (ANN) method which mimics brain functionality has been employed for improvement of the RMF model parameters. In particular, the understanding capability of the ANN method for relations between the RMF model parameters and their predictions for binding energies (BEs) of 58Ni and 208Pb have been found in agreement with the literature values.

  6. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  7. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator for the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model; and to analyze and summarize the parameter-adjusted rules under unmatched signal amplitude, frequency, and/or noise-intensity. Furthermore, we propose the weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application.

  8. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator for the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model; and to analyze and summarize the parameter-adjusted rules under unmatched signal amplitude, frequency, and/or noise-intensity. Furthermore, we propose the weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application. PMID:26343671

  9. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    USGS Publications Warehouse

    Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and in soils were used to calibrate the model. To investigate the resolution of available observations on the number of adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable databases to validate and improve MODIS NPP algorithms.

  10. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  11. Nonlinear predictive control for adaptive adjustments of deep brain stimulation parameters in basal ganglia-thalamic network.

    PubMed

    Su, Fei; Wang, Jiang; Niu, Shuangxia; Li, Huiyan; Deng, Bin; Liu, Chen; Wei, Xile

    2018-02-01

    The efficacy of deep brain stimulation (DBS) for Parkinson's disease (PD) depends in part on the post-operative programming of stimulation parameters. Closed-loop stimulation is one method to realize the frequent adjustment of stimulation parameters. This paper introduced the nonlinear predictive control method into the online adjustment of DBS amplitude and frequency. This approach was tested in a computational model of basal ganglia-thalamic network. The autoregressive Volterra model was used to identify the process model based on physiological data. Simulation results illustrated the efficiency of closed-loop stimulation methods (amplitude adjustment and frequency adjustment) in improving the relay reliability of thalamic neurons compared with the PD state. Besides, compared with the 130Hz constant DBS the closed-loop stimulation methods can significantly reduce the energy consumption. Through the analysis of inter-spike-intervals (ISIs) distribution of basal ganglia neurons, the evoked network activity by the closed-loop frequency adjustment stimulation was closer to the normal state. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptionsmore » for solving the governing equations are separability of the velocity components concerning the spatial variables and the neglection of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed varying the three relevant length, the canopy height ( h), the canopy length and the adjustment length ( Lc), in additional LES. Even if the model parameters are, in general, functions of h/ Lc, it was found out that the model is capable of predicting the flow quantities in various cases, when using constant parameters. Subsequently the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.« less

  13. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds averaged Navier–Stokes equation in neutral conditions for a canopy with constant plant area density (PAD). The main assumptionsmore » for solving the governing equations are separability of the velocity components concerning the spatial variables and the neglection of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed varying the three relevant length, the canopy height ( h), the canopy length and the adjustment length ( Lc), in additional LES. Even if the model parameters are, in general, functions of h/ Lc, it was found out that the model is capable of predicting the flow quantities in various cases, when using constant parameters. Subsequently the adjustment region model is combined with the one-dimensional model of Massman, which is applicable for the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. As a result, the model is tested against an analytical model based on a linearization approach.« less

  14. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  15. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry

    PubMed Central

    Meyer, Andrew J.; Patten, Carolynn

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708

  16. 40 CFR 86.001-22 - Approval of application for certification; test fleet selections; determinations of parameters...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...

  17. 40 CFR 86.001-22 - Approval of application for certification; test fleet selections; determinations of parameters...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...

  18. Inverse modeling with RZWQM2 to predict water quality

    USDA-ARS?s Scientific Manuscript database

    Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameters estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least square algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended-Kalman-filter (EKF) is applied to on-line estimate battery SOC and model parameters. Considering that the EKF is essentially a first-order Taylor approximation of battery model, which contains inevitable model errors, thus, a proportional integral-based error adjustment technique is employed to improve the performance of EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide robust and accurate battery model and on-line parameter estimation.

  20. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo

    2017-02-01

    The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most methods of on-orbit calibration are based on the self-calibration using additional parameters. When using additional parameters, different number of additional parameters may lead to different results. The triangulation bundle adjustment is another way to calibrate the geometric parameters of camera, which can describe the changes in each geometric parameter. When triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model can avoid systematic deformation caused by the rate of attitude changes. Concerning the stereo camera, the influence of the intersection angle should be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on the Line-Matrix CCD (LMCCD) image can solve the systematic distortion of the strip model, and obtain high accuracy location without using GCPs. In this paper, the triangulation bundle adjustment is used to calibrate the geometric parameters of TH-1 satellite cameras based on LMCCD image. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 testing fields. After on-orbit calibration, the 3D geometric accuracy is improved to 11.8 m from 170 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved using the on-orbit calibration of the geometric parameters.

  1. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting SRP scaling parameter or fixing it on pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for x-pole and 12% for y-pole. The experiments show that adjusting atmospheric drag scaling parameters each 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was however not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series as well as its mitigation by fixing the SRP parameters on pre-defined values.

  2. Adjustments of the TaD electron density reconstruction model with GNSS-TEC parameters for operational application purposes

    NASA Astrophysics Data System (ADS)

    Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna

    2012-12-01

    Validation results on the latest version of TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs) with an average error of 3 TECU, similar to the error obtained from GNSS-TEC calculated paremeters. The work presented here has the aim to further improve the accuracy of the TaD topside reconstruction, adjusting the TEC parameter calculated from TaD model with the TEC parameter calculated by GNSS transmitting RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period demonstrating further improvements in respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Oftenly, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interferences in the receiving signal. Consequently the model estimated topside scale height is wrongly calculated leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height, despite the inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.

  3. Hydrograph structure informed calibration in the frequency domain with time localization

    NASA Astrophysics Data System (ADS)

    Kumarasamy, K.; Belmont, P.

    2015-12-01

    Complex models with large number of parameters are commonly used to estimate sediment yields and predict changes in sediment loads as a result of changes in management or conservation practice at large watershed (>2000 km2) scales. As sediment yield is a strongly non-linear function that responds to channel (peak or mean) velocity or flow depth, it is critical to accurately represent flows. The process of calibration in such models (e.g., SWAT) generally involves the adjustment of several parameters to obtain better estimates of goodness of fit metrics such as Nash Sutcliff Efficiency (NSE). However, such indicators only provide a global view of model performance, potentially obscuring accuracy of the timing or magnitude of specific flows of interest. We describe an approach for streamflow calibration that will greatly reduce the black-box nature of calibration, when response from a parameter adjustment is not clearly known. Fourier Transform or the Short Term Fourier Transform could be used to characterize model performance in the frequency domain as well, however, the ambiguity of a Fourier transform with regards to time localization renders its implementation in a model calibration setting rather useless. Brief and sudden changes (e.g. stream flow peaks) in signals carry the most interesting information from parameter adjustments, which are completely lost in the transform without time localization. Wavelet transform captures the frequency component in the signal without compromising time and is applied to contrast changes in signal response to parameter adjustments. Here we employ the mother wavelet called the Mexican hat wavelet and apply a Continuous Wavelet Transform to understand the signal in the frequency domain. Further, with the use of the cross-wavelet spectrum we examine the relationship between the two signals (prior or post parameter adjustment) in the time-scale plane (e.g., lower scales correspond to higher frequencies). The non-stationarity of the streamflow signal does not hinder this assessment and regions of change called boundaries of influence (seasons or time when such change occurs in the hydrograph) for each parameter are delineated. In addition, we can discover the structural component of the signal (e.g., shifts or amplitude change) that has changed.

  4. Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen

    2018-01-01

    A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and the Viking mission. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14^circ ± 0.03^circ together with a control point network of 685 points.

  5. Towards a covariance matrix of CAB model parameters for H(H2O)

    NASA Astrophysics Data System (ADS)

    Scotta, Juan Pablo; Noguere, Gilles; Damian, José Ignacio Marquez

    2017-09-01

    Preliminary results on the uncertainties of hydrogen into light water thermal scattering law of the CAB model are presented. It was done through a coupling between the nuclear data code CONRAD and the molecular dynamic simulations code GROMACS. The Generalized Least Square method was used to adjust the model parameters on evaluated data and generate covariance matrices between the CAB model parameters.

  6. Numerical Simulation Of Cratering Effects In Adobe

    DTIC Science & Technology

    2013-07-01

    DEVELOPMENT OF MATERIAL PARAMETERS .........................................................7 PROBLEM SETUP...37 PARAMETER ADJUSTMENTS ......................................................................................38 GLOSSARY...dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well characterized adobe. By identifying key parameters that

  7. SICR rumor spreading model in complex networks: Counterattack and self-resistance

    NASA Astrophysics Data System (ADS)

    Zan, Yongli; Wu, Jianliang; Li, Ping; Yu, Qinglin

    2014-07-01

    Rumor is an important form of social interaction. However, spreading of harmful rumors could have a significant negative impact on the well-being of the society. In this paper, considering the counterattack mechanism of the rumor spreading, we introduce two new models: Susceptible-Infective-Counterattack-Refractory (SICR) model and adjusted-SICR model. We then derive mean-field equations to describe their dynamics in homogeneous networks and conduct the steady-state analysis. We also introduce the self-resistance parameter τ, and study the influence of this parameter on rumor spreading. Numerical simulations are performed to compare the SICR model with the SIR model and the adjusted-SICR model, respectively, and we investigate the spreading peak of the rumor and the final size of the rumor with various parameters. Simulation results are congruent exactly with the theoretical analysis. The experiment reveals some interesting patterns of rumor spreading involved with counterattack force.

  8. Validation of geometric models for fisheye lenses

    NASA Astrophysics Data System (ADS)

    Schneider, D.; Schwalbe, E.; Maas, H.-G.

    The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, sterographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both, the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all of the four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10 000 of the object dimensions. This value can be considered as a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view and also have a inferior imaging quality in comparison to most central perspective lenses.

  9. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks usually are described as a system of nonlinear differential equations. In case of optimization of models for purpose of parameter estimation or design of new properties mainly numerical methods are used. That causes problems of optimization predictability as most of numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determination of suitable optimization method and necessary duration of optimization becomes critical in case of evaluation of high number of combinations of adjustable parameters or in case of large dynamic models. This task is complex due to variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. A software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models in the same scale. By the help of the biochemistry adapted graphical user interface of ConvAn it is possible to compare different optimization methods in terms of ability to find the global optima or values close to that as well as the necessary computational time to reach them. It is possible to estimate the optimization performance for different number of adjustable parameters. The functionality of ConvAn enables statistical assessment of necessary optimization time depending on the necessary optimization accuracy. Optimization methods, which are not suitable for a particular optimization task, can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  10. Scale-up on basis of structured mixing models: A new concept.

    PubMed

    Mayr, B; Moser, A; Nagy, E; Horvat, P

    1994-02-05

    A new scale-up concept based upon mixing models for bioreactors equipped with Rushton turbines using the tanks-in-series concept is presented. The physical mixing model includes four adjustable parameters, i.e., radial and axial circulation time, number of ideally mixed elements in one cascade, and the volume of the ideally mixed turbine region. The values of the model parameters were adjusted with the application of a modified Monte-Carlo optimization method, which fitted the simulated response function to the experimental curve. The number of cascade elements turned out to be constant (N = 4). The model parameter radial circulation time is in good agreement with the one obtained by the pumping capacity. In case of remaining parameters a first or second order formal equation was developed, including four operational parameters (stirring and aeration intensity, scale, viscosity). This concept can be extended to several other types of bioreactors as well, and it seems to be a suitable tool to compare the bioprocess performance of different types of bioreactors. (c) 1994 John Wiley & Sons, Inc.

  11. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  12. Adjustments to de Leva-anthropometric regression data for the changes in body proportions in elderly humans.

    PubMed

    Ho Hoang, Khai-Long; Mombaur, Katja

    2015-10-15

    Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. None such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Sensitivity analysis of a multilayer, finite-difference model of the Southeastern Coastal Plain regional aquifer system; Mississippi, Alabama, Georgia, and South Carolina

    USGS Publications Warehouse

    Pernik, Meribeth

    1987-01-01

    The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters that changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces, and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers, and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, the riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model 's sensitivity to changes in storativity. The model is less sensitive to an increase in storage coefficient than it is to a decrease in storage coefficient. As the storage coefficient decreased, the aquifer drawdown increases, the base flow decreased. The opposite response occurred when the storage coefficient was increased. (Author 's abstract)

  14. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  15. Experimental study and thermodynamic modeling for determining the effect of non-polar solvent (hexane)/polar solvent (methanol) ratio and moisture content on the lipid extraction efficiency from Chlorella vulgaris.

    PubMed

    Malekzadeh, Mohammad; Abedini Najafabadi, Hamed; Hakim, Maziar; Feilizadeh, Mehrzad; Vossoughi, Manouchehr; Rashtchian, Davood

    2016-02-01

    In this research, organic solvent composed of hexane and methanol was used for lipid extraction from dry and wet biomass of Chlorella vulgaris. The results indicated that lipid and fatty acid extraction yield was decreased by increasing the moisture content of biomass. However, the maximum extraction efficiency was attained by applying equivolume mixture of hexane and methanol for both dry and wet biomass. Thermodynamic modeling was employed to estimate the effect of hexane/methanol ratio and moisture content on fatty acid extraction yield. Hansen solubility parameter was used in adjusting the interaction parameters of the model, which led to decrease the number of tuning parameters from 6 to 2. The results indicated that the model can accurately estimate the fatty acid recovery with average absolute deviation percentage (AAD%) of 13.90% and 15.00% for the two cases of using 6 and 2 adjustable parameters, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Simulation of dynamics of beam structures with bolted joints using adjusted Iwan beam elements

    NASA Astrophysics Data System (ADS)

    Song, Y.; Hartwigsen, C. J.; McFarland, D. M.; Vakakis, A. F.; Bergman, L. A.

    2004-05-01

    Mechanical joints often affect structural response, causing localized non-linear stiffness and damping changes. As many structures are assemblies, incorporating the effects of joints is necessary to produce predictive finite element models. In this paper, we present an adjusted Iwan beam element (AIBE) for dynamic response analysis of beam structures containing joints. The adjusted Iwan model consists of a combination of springs and frictional sliders that exhibits non-linear behavior due to the stick-slip characteristic of the latter. The beam element developed is two-dimensional and consists of two adjusted Iwan models and maintains the usual complement of degrees of freedom: transverse displacement and rotation at each of the two nodes. The resulting element includes six parameters, which must be determined. To circumvent the difficulty arising from the non-linear nature of the inverse problem, a multi-layer feed-forward neural network (MLFF) is employed to extract joint parameters from measured structural acceleration responses. A parameter identification procedure is implemented on a beam structure with a bolted joint. In this procedure, acceleration responses at one location on the beam structure due to one known impulsive forcing function are simulated for sets of combinations of varying joint parameters. A MLFF is developed and trained using the patterns of envelope data corresponding to these acceleration histories. The joint parameters are identified through the trained MLFF applied to the measured acceleration response. Then, using the identified joint parameters, acceleration responses of the jointed beam due to a different impulsive forcing function are predicted. The validity of the identified joint parameters is assessed by comparing simulated acceleration responses with experimental measurements. The capability of the AIBE to capture the effects of bolted joints on the dynamic responses of beam structures, and the efficacy of the MLFF parameter identification procedure, are demonstrated.

  17. Adaptively Adjusted Event-Triggering Mechanism on Fault Detection for Networked Control Systems.

    PubMed

    Wang, Yu-Long; Lim, Cheng-Chew; Shi, Peng

    2016-12-08

    This paper studies the problem of adaptively adjusted event-triggering mechanism-based fault detection for a class of discrete-time networked control system (NCS) with applications to aircraft dynamics. By taking into account the fault occurrence detection progress and the fault occurrence probability, and introducing an adaptively adjusted event-triggering parameter, a novel event-triggering mechanism is proposed to achieve the efficient utilization of the communication network bandwidth. Both the sensor-to-control station and the control station-to-actuator network-induced delays are taken into account. The event-triggered sensor and the event-triggered control station are utilized simultaneously to establish new network-based closed-loop models for the NCS subject to faults. Based on the established models, the event-triggered simultaneous design of fault detection filter (FDF) and controller is presented. A new algorithm for handling the adaptively adjusted event-triggering parameter is proposed. Performance analysis verifies the effectiveness of the adaptively adjusted event-triggering mechanism, and the simultaneous design of FDF and controller.

  18. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed in the hilly region of the Taihu Lake basin; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model results, while RMN, RS and RVC were moderately to weakly sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and moderately sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were weakly sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were weakly sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results also demonstrate that the sensitivity analysis is practicable for parameter adjustment, show the model's adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
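
    The perturbation method itself reduces to a one-at-a-time relative sensitivity index. The sketch below assumes the common central-difference form S = (ΔO/O)/(ΔP/P); the toy runoff function merely stands in for an AnnAGNPS run.

    ```python
    def relative_sensitivity(model, base_params, name, delta=0.1):
        """One-at-a-time perturbation sensitivity:
        S = (dO / O) / (dP / P), evaluated with a +/-10% perturbation."""
        base_out = model(base_params)
        hi = dict(base_params); hi[name] *= (1 + delta)
        lo = dict(base_params); lo[name] *= (1 - delta)
        d_out = model(hi) - model(lo)
        return (d_out / base_out) / (2 * delta)

    # Toy stand-in for an AnnAGNPS run: runoff as a function of CN and LS
    runoff = lambda p: 0.8 * p["CN"] ** 1.5 + 2.0 * p["LS"]
    params = {"CN": 70.0, "LS": 1.2}
    for name in params:
        print(name, round(relative_sensitivity(runoff, params, name), 2))
    ```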

  19. Re-estimating temperature-dependent consumption parameters in bioenergetics models for juvenile Chinook salmon

    USGS Publications Warehouse

    Plumb, John M.; Moffitt, Christine M.

    2015-01-01

    Researchers have cautioned against the borrowing of consumption and growth parameters from other species and life stages in bioenergetics growth models. In particular, the function that dictates temperature dependence in maximum consumption (Cmax) within the Wisconsin bioenergetics model for Chinook Salmon Oncorhynchus tshawytscha produces estimates that are lower than those measured in published laboratory feeding trials. We used published and unpublished data from laboratory feeding trials with subyearling Chinook Salmon from three stocks (Snake, Nechako, and Big Qualicum rivers) to estimate and adjust the model parameters for temperature dependence in Cmax. The data included growth measures in fish ranging from 1.5 to 7.2 g that were held at temperatures from 14°C to 26°C. Parameters for temperature dependence in Cmax were estimated based on relative differences in food consumption, and bootstrapping techniques were then used to estimate the error about the parameters. We found that at temperatures between 17°C and 25°C, the current parameter values did not match the observed data, indicating that Cmax should be shifted by about 4°C relative to the current implementation under the bioenergetics model. We conclude that the adjusted parameters for Cmax should produce more accurate predictions from the bioenergetics model for subyearling Chinook Salmon.
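
    The abstract's bootstrapping step can be illustrated with a residual bootstrap around a fitted temperature curve. The dome-shaped function below is an assumed placeholder (the Wisconsin model uses a more elaborate form), and the data are synthetic; only the bootstrap logic is the point.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical dome-shaped temperature-dependence function f(T); the
    # Wisconsin model uses a more elaborate form, but the bootstrap is the same.
    def f(T, Topt, w):
        return np.exp(-(T - Topt) ** 2 / (2 * w ** 2))

    rng = np.random.default_rng(2)
    T = np.linspace(14, 26, 30)
    obs = f(T, 21.0, 4.0) + rng.normal(0, 0.03, T.size)   # synthetic trial data

    popt, _ = curve_fit(f, T, obs, p0=(18.0, 5.0))
    resid = obs - f(T, *popt)

    boot = []
    for _ in range(1000):   # resample residuals to estimate parameter error
        y_star = f(T, *popt) + rng.choice(resid, size=resid.size, replace=True)
        p_star, _ = curve_fit(f, T, y_star, p0=popt)
        boot.append(p_star)
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    print("Topt 95% CI:", lo[0].round(2), "-", hi[0].round(2))
    ```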

  20. Use of In-Situ and Remotely Sensed Snow Observations for the National Water Model in Both an Analysis and Calibration Framework.

    NASA Astrophysics Data System (ADS)

    Karsten, L. R.; Gochis, D.; Dugger, A. L.; McCreight, J. L.; Barlage, M. J.; Fall, G. M.; Olheiser, C.

    2017-12-01

    Since version 1.0 of the National Water Model (NWM) went operational in Summer 2016, several upgrades have been made to improve hydrologic prediction for the continental United States. Version 1.1 of the NWM (Spring 2017) includes upgrades to parameter datasets impacting land surface hydrologic processes. These parameter datasets were upgraded using an automated calibration workflow that employs the Dynamically Dimensioned Search (DDS) algorithm to adjust parameter values using observed streamflow. These upgrades also took advantage of various observations collected for snow analysis, in particular in-situ SNOTEL observations in the Western US, volunteer in-situ observations across the entire US, gamma-derived snow water equivalent (SWE) observations courtesy of the NWS NOAA Corps program, gridded snow depth and SWE products from the Jet Propulsion Laboratory (JPL) Airborne Snow Observatory (ASO), gridded remotely sensed satellite-based snow products (MODIS, AMSR2, VIIRS, ATMS), and gridded SWE from the NWS Snow Data Assimilation System (SNODAS). This study explores the use of these observations to quantify NWM error and improvements from version 1.0 to version 1.1, along with subsequent work since then. In addition, this study explores the use of snow observations within the automated calibration workflow. Gridded parameter fields impacting the accumulation and ablation of snow states in the NWM were adjusted and calibrated using gridded remotely sensed snow states, SNODAS products, and in-situ snow observations. This calibration took place over various ecological regions in snow-dominated parts of the US for a retrospective period chosen to capture a variety of climatological conditions. Specifically, the latest calibrated parameters impacting streamflow were held constant and only parameters impacting snow physics were tuned using snow observations and analysis. The adjusted parameter datasets were then used to force the model over an independent period for analysis against both snow and streamflow observations to determine whether improvements took place. The goal of this work is to further improve snow physics in the NWM and to identify areas for future work, such as data assimilation or further forcing improvements.
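
    For readers unfamiliar with DDS (Tolson and Shoemaker, 2007), a minimal sketch follows; it uses simple clipping at the bounds rather than the published reflection rule, and the toy objective stands in for the streamflow-error function minimized in the NWM workflow.

    ```python
    import numpy as np

    def dds(objective, lower, upper, m=1000, r=0.2, seed=0):
        """Minimal Dynamically Dimensioned Search sketch: perturb a
        shrinking random subset of parameters and keep improvements."""
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        x_best = lower + rng.uniform(size=lower.size) * (upper - lower)
        f_best = objective(x_best)
        for i in range(1, m):
            p = 1.0 - np.log(i) / np.log(m)           # inclusion probability decays
            mask = rng.uniform(size=lower.size) < p
            if not mask.any():
                mask[rng.integers(lower.size)] = True
            x = x_best.copy()
            x[mask] += r * (upper[mask] - lower[mask]) * rng.normal(size=mask.sum())
            x = np.clip(x, lower, upper)              # simple bound handling
            f = objective(x)
            if f < f_best:
                x_best, f_best = x, f
        return x_best, f_best

    # Toy stand-in for "model error versus observed streamflow"
    obj = lambda x: np.sum((x - np.array([0.3, 0.7, 0.5])) ** 2)
    print(dds(obj, [0, 0, 0], [1, 1, 1]))
    ```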

  1. Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir

    NASA Astrophysics Data System (ADS)

    Wei, Sun

    2018-01-01

    It is difficult to satisfy the profile control demands of complex well sections and multi-layer reservoirs with conventional profile control technology; therefore, research was conducted on adjusting the injection-production profile through optimization of layered perforating parameters. That is, in the case of commingled production from multiple layers, the water absorption of each layer is adjusted by adjusting its perforating parameters, so as to balance the injection-production profile of the whole well section and ultimately enhance the oil displacement efficiency of water flooding. By applying oil-water two-phase percolation theory and the relationship between perforating damage and productivity, a mathematical model for adjusting the injection-production profile through layered perforating parameter optimization was established, and perforating parameter optimization software was programmed. Different types of optimization designs were carried out for different geological conditions and construction purposes using this software. Furthermore, an application test was performed in a low-permeability reservoir; the water injection profile became significantly more balanced after perforation with the optimized parameters, demonstrating a good field application effect.

  2. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    NASA Technical Reports Server (NTRS)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  3. Ground Motion Prediction Models for Caucasus Region

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated by classical statistical regression analysis. Site ground conditions are also considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
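
    A classical GMPE regression of this kind can be condensed to an ordinary least-squares fit of an assumed functional form. The sketch below uses synthetic records and a hypothetical form ln(PGA) = a + b*M + c*ln(sqrt(R^2 + h^2)) + s*SITE; the actual model of the study is not given in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 300
    M = rng.uniform(4.0, 7.0, n)                    # magnitude
    R = rng.uniform(5.0, 200.0, n)                  # epicentral distance, km
    site = rng.integers(0, 2, n).astype(float)      # 0 = rock, 1 = soil
    h = 6.0                                         # fixed pseudo-depth, assumed
    ln_pga = (1.0 + 0.9 * M - 1.1 * np.log(np.hypot(R, h)) + 0.4 * site
              + rng.normal(0, 0.5, n))              # synthetic records

    A = np.column_stack([np.ones(n), M, np.log(np.hypot(R, h)), site])
    coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
    print("a, b, c, s =", coef.round(3))
    ```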

  4. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternating bundle adjustment, is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial and tangential lens distortion. We created a reference coordinate system based on the left camera coordinate frame to optimize the intrinsic parameters of the left camera through alternating bundle adjustment. Optimal intrinsic parameters of the right camera were then obtained in the same way, with a reference coordinate system based on the right camera coordinate frame. All of the acquired intrinsic parameters were then used to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with a standard deviation of 0.05 pixels. The real-data result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  5. Mapping an operator's perception of a parameter space

    NASA Technical Reports Server (NTRS)

    Pew, R. W.; Jagacinski, R. J.

    1972-01-01

    Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.

  6. Roll paper pilot. [mathematical model for predicting pilot rating of aircraft in roll task

    NASA Technical Reports Server (NTRS)

    Naylor, F. R.; Dillow, J. D.; Hannen, R. A.

    1973-01-01

    A mathematical model for predicting the pilot rating of an aircraft in a roll task is described. The model includes: (1) the lateral-directional aircraft equations of motion; (2) a stochastic gust model; (3) a pilot model with two free parameters; and (4) a pilot rating expression that is a function of rms roll angle and the pilot lead time constant. The pilot gain and lead time constant are selected to minimize the pilot rating expression. The pilot parameters are then adjusted to provide a 20% stability margin, and the adjusted pilot parameters are used to compute a roll paper pilot rating of the aircraft/gust configuration. The roll paper pilot rating was computed for 25 aircraft/gust configurations. A range of actual ratings from 2 to 9 was encountered, and the roll paper pilot ratings agree quite well with the actual ratings. In addition, there is good correlation between predicted and measured rms roll angle.

  7. Computational modeling of cardiovascular response to orthostatic stress

    NASA Technical Reports Server (NTRS)

    Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.

    2002-01-01

    The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (≤5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
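
    As a much-reduced illustration of a lumped-parameter circulation element, the sketch below integrates a two-element Windkessel compartment; the closed-loop model with baroreflex control described in the abstract is far richer, and all values here are assumed.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-element Windkessel: C * dP/dt = Q_in(t) - P/R.  A toy
    # lumped-parameter compartment, far simpler than the closed-loop
    # model with baroreflex control described in the abstract.
    R, C = 1.0, 1.5            # peripheral resistance, compliance (assumed units)
    HR = 72 / 60               # beats per second

    def q_in(t):               # half-sine systolic inflow, zero in diastole
        phase = (t * HR) % 1.0
        return 400.0 * np.sin(np.pi * phase / 0.35) if phase < 0.35 else 0.0

    def dP(t, P):
        return [(q_in(t) - P[0] / R) / C]

    sol = solve_ivp(dP, (0, 10), [80.0], max_step=1e-3)
    print("steady-state pressure range: %.1f-%.1f" %
          (sol.y[0][-2000:].min(), sol.y[0][-2000:].max()))
    ```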

  8. Human sense utilization method on real-time computer graphics

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao

    1997-06-01

    We are developing an adjustment method of real-time computer graphics, to obtain effective ones which give audience various senses intended by producer, utilizing human sensibility technologically. Generally, production of real-time computer graphics needs much adjustment of various parameters, such as 3D object models/their motions/attributes/view angle/parallax etc., in order that the graphics gives audience superior effects as reality of materials, sense of experience and so on. And it is also known it costs much to adjust such various parameters by trial and error. A graphics producer often evaluates his graphics to improve it. For example, it may lack 'sense of speed' or be necessary to be given more 'sense of settle down,' to improve it. On the other hand, we can know how the parameters in computer graphics affect such senses by means of statistically analyzing several samples of computer graphics which provide different senses. We paid attention to these two facts, so that we designed an adjustment method of the parameters by inputting phases of sense into a computer. By the way of using this method, it becomes possible to adjust real-time computer graphics more effectively than by conventional way of trial and error.

  9. Using geometry to improve model fitting and experiment design for glacial isostasy

    NASA Astrophysics Data System (ADS)

    Kachuck, S. B.; Cathles, L. M.

    2017-12-01

    As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. By curving, ending abruptly (where, for instance, parameters go to zero or infinity), and by stretching and compressing the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
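
    Standard Levenberg-Marquardt fitting, the starting point that geodesic acceleration extends, looks as follows; scipy exposes the classic algorithm (method='lm'), while the geodesic-accelerated variant used by the authors is not part of scipy and is not shown.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Levenberg-Marquardt fit of a two-parameter decay model; "lm" is the
    # classic algorithm, without the geodesic acceleration extension.
    def residuals(theta, t, y):
        a, k = theta
        return a * np.exp(-k * t) - y

    rng = np.random.default_rng(4)
    t = np.linspace(0, 5, 40)
    y = 2.0 * np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)

    fit = least_squares(residuals, x0=(1.0, 0.1), args=(t, y), method="lm")
    print("a, k =", fit.x.round(3))
    ```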

  10. A Generalized Simple Formulation of Convective Adjustment Timescale for Cumulus Convection Parameterizations

    EPA Science Inventory

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a pres...

  11. A model of the human supervisor

    NASA Technical Reports Server (NTRS)

    Kok, J. J.; Vanwijk, R. A.

    1977-01-01

    A general model of the human supervisor's behavior is given. Submechanisms of the model include: the observer/reconstructor; decision-making; and controller. A set of hypothesis is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each of the elements of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior, and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.

  12. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2  ×  2 cm 2 to 40  ×  40 cm 2 , a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  13. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. 
Both the observed recharge (42.3 cm yr⁻¹) and nitrate concentration (24.3 mg L⁻¹) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
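
    The weighted least squares objective that PEST minimizes has a compact form, Phi = sum_i (w_i * (obs_i - sim_i))^2; a minimal sketch with a toy stand-in for an RZWQM2 run:

    ```python
    import numpy as np

    def weighted_sse(params, simulate, obs, weights):
        """PEST-style objective: Phi = sum_i (w_i * (obs_i - sim_i))^2."""
        r = weights * (obs - simulate(params))
        return r @ r

    # Toy stand-in for an RZWQM2 run returning flux "observations"
    simulate = lambda p: p[0] * np.arange(1.0, 6.0) ** p[1]
    obs = np.array([2.1, 4.3, 6.2, 8.5, 10.4])
    w = 1.0 / np.array([0.2, 0.3, 0.3, 0.4, 0.5])   # inverse observation error
    print(weighted_sse(np.array([2.0, 1.0]), simulate, obs, w))
    ```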

  14. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.

  15. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

  17. NGA-West2 Empirical Fourier Model for Active Crustal Regions to Generate Regionally Adjustable Response Spectra

    NASA Astrophysics Data System (ADS)

    Bora, S. S.; Cotton, F.; Scherbaum, F.; Kuehn, N. M.

    2016-12-01

    Adjustment of median ground motion prediction equations (GMPEs) from data-rich (host) regions to data-poor (target) regions is one of the major challenges remaining in the current practice of engineering seismology and seismic hazard analysis. A Fourier spectral representation of ground motion provides a solution to the adjustment problem that is physically transparent and consistent with the concepts of linear system theory. It also provides a direct interface for appreciating the physically expected behavior of seismological parameters on ground motion. In the present study, we derive an empirical Fourier model for computing regionally adjustable response spectral ordinates based on random vibration theory (RVT) from shallow crustal earthquakes in active tectonic regions, following the approach of Bora et al. (2014, 2015). For this purpose, we use an expanded NGA-West2 database with M 3.2-7.9 earthquakes at distances ranging from 0 to 300 km. A mixed-effects regression technique is employed to further explore the various components of variability. The NGA-West2 database, expanded over a wide magnitude range, provides a better understanding (and constraint) of the source scaling of ground motion. The large global volume of the database also allows investigating regional patterns in the distance-dependent attenuation (i.e., geometrical spreading and inelastic attenuation) of ground motion as well as in the source parameters (e.g., magnitude and stress drop). Furthermore, event-wise variability and its correlation with the stress parameter are investigated. Finally, the application of the derived Fourier model in generating adjustable response spectra is shown.

  18. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS), which simulates daily hydrologic and energy processes in watersheds, is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default, each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF), a database containing initial parameter values for input to PRMS, was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  19. Application of a Constant Gain Extended Kalman Filter for In-Flight Estimation of Aircraft Engine Performance Parameters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.

    2005-01-01

    An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
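
    A fixed-gain Kalman estimator is simple to state; in the sketch below the gain K is assumed to have been computed offline at a design point, and the matrices are illustrative placeholders rather than engine data.

    ```python
    import numpy as np

    # Constant-gain Kalman estimator sketch: the gain K is computed offline
    # (e.g., from the steady-state Riccati solution at a design point) and
    # held fixed in flight.  Matrices here are illustrative, not engine data.
    A = np.array([[0.98, 0.02], [0.0, 0.95]])   # state transition
    C = np.array([[1.0, 0.0]])                  # sensed output
    K = np.array([[0.4], [0.2]])                # precomputed constant gain

    def cgekf_step(x_hat, y_meas):
        x_pred = A @ x_hat                      # model prediction
        y_pred = C @ x_pred
        return x_pred + K @ (y_meas - y_pred)   # fixed-gain measurement update

    rng = np.random.default_rng(5)
    x_true, x_hat = np.array([1.0, 0.5]), np.zeros(2)
    for _ in range(50):
        x_true = A @ x_true
        y = C @ x_true + rng.normal(0, 0.01, 1)
        x_hat = cgekf_step(x_hat, y)
    print("estimate:", x_hat.round(3), "truth:", x_true.round(3))
    ```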

  20. Application of the Aquifer Impact Model to support decisions at a CO2 sequestration site: Modeling and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Diana Holford; Locke II, Randall A.; Keating, Elizabeth

    The National Risk Assessment Partnership (NRAP) has developed a suite of tools to assess and manage risk at CO2 sequestration sites (1). The NRAP tool suite includes the Aquifer Impact Model (AIM), based on reduced order models developed using site-specific data from two aquifers (alluvium and carbonate). The models accept aquifer parameters as a range of variable inputs so they may have broader applicability. Guidelines have been developed for determining the aquifer types for which the ROMs should be applicable. This paper considers the applicability of the aquifer models in AIM to predicting the impact of CO2 or brine leakage were it to occur at the Illinois Basin Decatur Project (IBDP). Based on the results of the sensitivity analysis, the hydraulic parameters and leakage source term magnitude are more sensitive than clay fraction or cation exchange capacity. Sand permeability was the only hydraulic parameter measured at the IBDP site. More information on the other hydraulic parameters, such as sand fraction and sand/clay correlation lengths, could reduce uncertainty in risk estimates. Some non-adjustable parameters, such as the initial pH and TDS and the pH no-impact threshold, are significantly different for the ROM than for the observations at the IBDP site. The reduced order model could be made more useful to a wider range of sites if the initial conditions and no-impact threshold values were adjustable parameters.

  1. A formulation of convection for stellar structure and evolution calculations without the mixing-length theory approximations. II - Application to Alpha Centauri A and B

    NASA Technical Reports Server (NTRS)

    Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino

    1993-01-01

    We have constructed a series of models of Alpha Centauri A and Alpha Centauri B for the purposes of testing the effects of convection modeling both by means of the mixing-length theory (MLT), and by means of parameterization of energy fluxes based upon numerical simulations of turbulent compressible convection. We demonstrate that while MLT, through its adjustable parameter alpha, can be used to match any given values of luminosities and radii, our treatment of convection, which lacks any adjustable parameters, makes specific predictions of stellar radii. Since the predicted radii of the Alpha Centauri system fall within the errors of the observed radii, our treatment of convection is applicable to other stars in the H-R diagram in addition to the sun. A second set of models is constructed using MLT, adjusting alpha to yield not the 'measured' radii but, instead, the radii predictions of our revised treatment of convection. We conclude by assessing the appropriateness of using a single value of alpha to model a wide variety of stars.

  2. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808
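
    The combination of global and local optimization techniques mentioned above can be sketched with scipy's differential evolution followed by a local refinement; the objective here is a toy stand-in for the model-to-data mismatch in vocal fold dynamics.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    # Combined global + local search.  The objective is a stand-in: in the
    # study it measures the mismatch between the three-dimensional model's
    # dynamics and the observed vocal fold dynamics.
    def mismatch(theta):
        target = np.array([0.8, 1.6, 0.3])      # "observed" dynamics features
        return np.sum((theta - target) ** 2) + 0.1 * np.sin(5 * theta).sum() ** 2

    bounds = [(0, 2), (0, 2), (0, 2)]
    glob = differential_evolution(mismatch, bounds, seed=0)     # global stage
    loc = minimize(mismatch, glob.x, method="Nelder-Mead")      # local refinement
    print("fitted parameters:", loc.x.round(3))
    ```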

  3. DSD/WBL-consistent JWL equations of state for EDC35

    NASA Astrophysics Data System (ADS)

    Hodgson, Alexander N.; Handley, Caroline Angela

    2012-03-01

    The Detonation Shock Dynamics (DSD) model allows the calculation of curvature-dependent detonation propagation. It is of particular use when applied to insensitive high explosives, such as EDC35, since they exhibit greater non-ideal behaviour. The DSD model is used in conjunction with experimental cylinder test data to obtain the JWL Equation of State (EOS) for EDC35. Adjusting the parameters in the JWL equation changes the expansion profile of the cylinder wall in hydrocode simulations; the parameters are iterated until the best match is obtained between simulation and experiment. Previous DSD models used at AWE have no mechanism to adjust the chemical energy release to match the detonation conditions. Two JWL calibrations are performed using the DSD model, with and without Hetherington's energy release model (these proceedings). Also in use is a newly calibrated detonation speed-curvature relation.
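
    For reference, the standard JWL pressure form being calibrated is p(V) = A(1 - ω/(R1·V))·exp(-R1·V) + B(1 - ω/(R2·V))·exp(-R2·V) + ωE/V, with V the relative volume. A direct transcription follows; the parameter values are illustrative placeholders, not a calibrated EDC35 set.

    ```python
    import numpy as np

    def jwl_pressure(V, E, A, B, R1, R2, omega):
        """Standard JWL form: V is relative volume v/v0, E is detonation
        energy per unit initial volume.  Parameter values below are
        illustrative placeholders, not a calibrated EDC35 set."""
        return (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
                + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V)
                + omega * E / V)

    V = np.linspace(0.6, 7.0, 5)
    print(jwl_pressure(V, E=7.0, A=600.0, B=10.0, R1=4.5, R2=1.2, omega=0.3))
    ```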

  4. Simulation of Changes in Diffusion Related to Different Pathologies at Cellular Level After Traumatic Brain Injury

    PubMed Central

    Lin, Mu; He, Hongjian; Schifitto, Giovanni; Zhong, Jianhui

    2016-01-01

    Purpose The goal of the current study was to investigate tissue pathology at the cellular level in traumatic brain injury (TBI) as revealed by Monte Carlo simulation of diffusion tensor imaging (DTI)-derived parameters and elucidate the possible sources of conflicting findings of DTI abnormalities as reported in the TBI literature. Methods A model with three compartments separated by permeable membranes was employed to represent the diffusion environment of water molecules in brain white matter. The dynamic diffusion process was simulated with a Monte Carlo method using adjustable parameters of intra-axonal diffusivity, axon separation, glial cell volume fraction, and myelin sheath permeability. The effects of tissue pathology on DTI parameters were investigated by adjusting the parameters of the model corresponding to different stages of brain injury. Results The results suggest that the model is appropriate and the DTI-derived parameters simulate the predominant cellular pathology after TBI. Our results further indicate that when edema is not prevalent, axial and radial diffusivity have better sensitivity to axonal injury and demyelination than other DTI parameters. Conclusion DTI is a promising biomarker to detect and stage tissue injury after TBI. The observed inconsistencies among previous studies are likely due to scanning at different stages of tissue injury after TBI. PMID:26256558
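
    The Monte Carlo idea can be reduced to a one-dimensional walk among permeable membranes. The sketch below is far simpler than the three-compartment model of the study: membranes are evenly spaced, a single crossing probability stands in for myelin permeability, and all values are assumed.

    ```python
    import numpy as np

    # Minimal 1-D Monte Carlo of restricted diffusion: walkers jump between
    # membranes spaced a apart and cross with probability p_cross (a crude
    # stand-in for myelin permeability).  Geometry and values are assumed.
    rng = np.random.default_rng(6)
    n, steps, dt = 20000, 400, 1e-4
    D0, a, p_cross = 2.0e-3, 2.0e-3, 0.05      # mm^2/s, mm, dimensionless
    sigma = np.sqrt(2 * D0 * dt)               # free-diffusion step size (mm)

    x = rng.uniform(0, a, n)                   # start inside one "axon"
    x0 = x.copy()
    for _ in range(steps):
        step = rng.normal(0, sigma, n)
        crosses = np.floor((x + step) / a) != np.floor(x / a)
        allowed = ~crosses | (rng.uniform(size=n) < p_cross)
        x = np.where(allowed, x + step, x)     # walker stays put if blocked
                                               # (simple approximation of reflection)
    adc = np.mean((x - x0) ** 2) / (2 * steps * dt)
    print("apparent diffusivity:", adc, "versus free D0:", D0)
    ```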

  5. A physiology-based model describing heterogeneity in glucose metabolism: the core of the Eindhoven Diabetes Education Simulator (E-DES).

    PubMed

    Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W

    2015-03-01

    Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES will provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters were estimated from 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from the literature. The resulting parameter set was verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model values. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters makes it possible to describe heterogeneity in the data and shows the capability of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
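
    The verification criterion is easy to state in code: at least 95% of data points must fall within ±20% of the corresponding model values. A minimal check with illustrative numbers:

    ```python
    import numpy as np

    def verified(model_vals, data_vals, tol=0.20, frac=0.95):
        """Acceptance test from the abstract: the model is considered
        verified when at least 95% of glucose data points fall within
        +/-20% of the corresponding model values."""
        inside = np.abs(data_vals - model_vals) <= tol * np.abs(model_vals)
        return inside.mean() >= frac, inside.mean()

    model = np.array([5.0, 6.8, 8.9, 7.4, 6.1, 5.3])   # mmol/L, illustrative
    data = np.array([5.2, 6.5, 9.5, 7.9, 5.8, 5.1])
    print(verified(model, data))
    ```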

  6. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The derived data weights are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting compared to nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.

  7. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com

    The experience of using a dynamic atlas of experimental data and the mathematical models describing them, in problems of adjusting parametric models of observables as functions of kinematic variables, is presented. The capabilities for visualizing large numbers of experimental data sets and the models describing them are shown by examples of data and models of observables determined by the amplitudes of elastic scattering of hadrons. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and the codes of adjusted parametric models, with the parameters giving the best description of the data, are schematically shown. The DaMoScope codes are freely available.

  8. Quantum Chemically Estimated Abraham Solute Parameters Using Multiple Solvent-Water Partition Coefficients and Molecular Polarizability.

    PubMed

    Liang, Yuzhen; Xiong, Ruichang; Sandler, Stanley I; Di Toro, Dominic M

    2017-09-05

    Polyparameter Linear Free Energy Relationships (pp-LFERs), also called Linear Solvation Energy Relationships (LSERs), are used to predict many environmentally significant properties of chemicals. A method is presented for computing the necessary chemical parameters, the Abraham parameters (AP), used by many pp-LFERs. It employs quantum chemical calculations and uses only the chemical's molecular structure. The method computes the Abraham E parameter using the density-functional-theory-computed molecular polarizability and the Clausius-Mossotti equation relating the index of refraction to the molecular polarizability, estimates the Abraham V as the COSMO-calculated molecular volume, and computes the remaining APs S, A, and B jointly with a multiple linear regression using sixty-five solvent-water partition coefficients computed using the quantum mechanical COSMO-SAC solvation model. These solute parameters, referred to as Quantum Chemically estimated Abraham Parameters (QCAP), are further adjusted by fitting to experimentally based APs, using the QCAP parameters as the independent variables, so that they are compatible with existing Abraham pp-LFERs. QCAP and adjusted QCAP values for 1827 neutral chemicals are included. For 24 solvent-water systems including octanol-water, predicted log solvent-water partition coefficients using adjusted QCAP have the smallest root-mean-square errors (RMSEs, 0.314-0.602) compared to predictions made using APs estimated with the molecular-fragment-based method ABSOLV (0.45-0.716). For munition and munition-like compounds, adjusted QCAP has a much lower RMSE (0.860) than does ABSOLV (4.45), which essentially fails for these compounds.

  9. Mathematical models and photogrammetric exploitation of image sensing

    NASA Astrophysics Data System (ADS)

    Puatanachokchai, Chokchai

    Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from the image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models, which have three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several publications about replacement sensor models, and except for the so-called RSM (Replacement Sensor Model, a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is, it is suspected, because the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential difficulty in error estimation. The third characteristic, adjustability, is perhaps the most demanding. It provides a flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of a hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustment can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences compared to those from rigorous physical sensor models, both for geopositioning from single and from overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.

  10. Universal Parameter Measurement and Sensorless Vector Control of Induction and Permanent Magnet Synchronous Motors

    NASA Astrophysics Data System (ADS)

    Yamamoto, Shu; Ara, Takahiro

    Recently, induction motors (IMs) and permanent-magnet synchronous motors (PMSMs) have been used in various industrial drive systems. The features of the hardware device used for controlling the adjustable-speed drive in these motors are almost identical. Despite this, different techniques are generally used for parameter measurement and speed-sensorless control of these motors. If the same technique can be used for parameter measurement and sensorless control, a highly versatile adjustable-speed-drive system can be realized. In this paper, the authors describe a new universal sensorless control technique for both IMs and PMSMs (including salient pole and nonsalient pole machines). A mathematical model applicable for IMs and PMSMs is discussed. Using this model, the authors derive the proposed universal sensorless vector control algorithm on the basis of estimation of the stator flux linkage vector. All the electrical motor parameters are determined by a unified test procedure. The proposed method is implemented on three test machines. The actual driving test results demonstrate the validity of the proposed method.

  11. Lesions to the left lateral prefrontal cortex impair decision threshold adjustment for lexical selection.

    PubMed

    Anders, Royce; Riès, Stéphanie; Van Maanen, Leendert; Alario, F-Xavier

    Patients with lesions in the left prefrontal cortex (PFC) have been shown to be impaired in lexical selection, especially when interference between semantically related alternatives is increased. To investigate more deeply which computational mechanisms may be impaired following left PFC damage due to stroke, a psychometric modelling approach is employed in which we assess the patients' cognitive parameters through evidence-accumulation (sequential information sampling) modelling of their response data. We also compare the results to healthy speakers. Analysis of the cognitive parameters indicates an impairment of the PFC patients in appropriately adjusting their decision threshold to handle the increased item difficulty introduced by semantic interference. The modelling also contributes to other topics in psycholinguistic theory: specific effects are observed on the cognitive parameters according to item familiarization, and the opposing effects of priming (lower threshold) and semantic interference (lower drift) are found to depend on repetition. These results are developed for the blocked-cyclic picture naming paradigm, in which pictures are presented within semantically homogeneous (HOM) or heterogeneous (HET) blocks and are repeated several times per block. Overall, the results are in agreement with a role of the left PFC in adjusting the decision threshold for lexical selection in language production.
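
    Evidence-accumulation models of this kind can be simulated directly. The sketch below is a generic drift-diffusion model, not the authors' fitted model; lowering the drift mimics semantic interference and raising the threshold mimics a more cautious selection policy, with assumed parameter values.

    ```python
    import numpy as np

    def simulate_ddm(drift, threshold, n_trials=500, dt=1e-3, noise=1.0, seed=0):
        """Evidence accumulation to a symmetric boundary: lower drift mimics
        semantic interference, a raised threshold mimics cautious selection."""
        rng = np.random.default_rng(seed)
        rts, correct = [], []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            rts.append(t)
            correct.append(x > 0)
        return np.mean(rts), np.mean(correct)

    for label, v, a in [("heterogeneous", 1.2, 1.0), ("homogeneous", 0.8, 1.0)]:
        rt, acc = simulate_ddm(v, a)
        print(f"{label}: mean RT {rt:.3f}s, accuracy {acc:.2f}")
    ```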

  12. Demand-Adjusted Shelf Availability Parameters: A Second Look.

    ERIC Educational Resources Information Center

    Schwarz, Philip

    1983-01-01

    Data gathered in an application of Paul Kantor's demand-adjusted shelf availability model to a medium-sized academic library indicate significant differences in shelf availability when data are analyzed by last circulation date, acquisition date, and imprint date, and when they are gathered during periods of low and high use. Ten references are cited.…

  13. Dynamic Parameter Identification of Subject-Specific Body Segment Parameters Using Robotics Formalism: Case Study Head Complex.

    PubMed

    Díaz-Rodríguez, Miguel; Valera, Angel; Page, Alvaro; Besa, Antonio; Mata, Vicente

    2016-05-01

    Accurate knowledge of body segment inertia parameters (BSIP) improves the assessment of dynamic analyses based on biomechanical models, which is of paramount importance in fields such as sports activities or impact crash tests. Early approaches to BSIP identification relied on experiments conducted on cadavers or on imaging techniques applied to living subjects. Recent approaches to BSIP identification rely on inverse dynamic modeling. However, most approaches focus on the entire body, and verification of BSIP for the dynamic analysis of a distal segment or chain of segments, which has proven to be of significant importance in impact test studies, is rarely established. Previous studies have suggested that BSIP should be obtained using subject-specific identification techniques. To this end, our paper develops a novel approach for estimating subject-specific BSIP based on static and dynamic identification models (SIM, DIM). We test the validity of the SIM and DIM by comparing the results with parameters obtained from the regression model proposed by De Leva (1996, "Adjustments to Zatsiorsky-Seluyanov's Segment Inertia Parameters," J. Biomech., 29(9), pp. 1223-1230). Both the SIM and DIM are developed using robotics formalism. First, the static model allows the mass and center of gravity (COG) to be estimated. Second, the results from the static model are included in the dynamics equation, allowing us to estimate the moment of inertia (MOI). As a case study, we applied the approach to evaluate the dynamic modeling of the head complex. Findings provide some insight into the validity not only of the proposed method but also of the application proposed by De Leva (1996) for dynamic modeling of body segments.

  14. Role of the combination of FA and T2* parameters as a new diagnostic method in therapeutic evaluation of Parkinson's disease.

    PubMed

    Fang, Yuan; Zheng, Tao; Liu, Lanxiang; Gao, Dawei; Shi, Qinglei; Dong, Yanchao; Du, Dan

    2017-11-17

    Simple diffusion delivery (SDD) has attained good effects with only tiny amounts of drugs. Fractional anisotropy (FA) and relaxation time T2*, which indicate the integrity of fiber tracts and the iron concentration within brain tissue, were used to evaluate the therapeutic effect of SDD. To evaluate the therapeutic effect of SDD in the Parkinson's disease (PD) rat model with the FA and T2* parameters. Prospective case-control animal study. Thirty-two male Sprague Dawley rats (eight normal, eight PD, eight SDD, and eight subcutaneous injection rats). Single-shot spin echo echo-planar imaging and fast low-angle shot T2WI sequences at 3.0T. FA and T2* parameters on the treated side of the substantia nigra were measured to evaluate the therapeutic effect of SDD in a PD rat model. The effects of time on FA and T2* values were analyzed by repeated-measures tests. A one-way analysis of variance was conducted, followed by individual comparisons of the mean FA and T2* values at different timepoints. The FA values on the treated side of the substantia nigra in the SDD treatment group and the subcutaneous injection treatment group were significantly higher at week 1 and lower at week 6 than those of the PD control group (SDD vs. PD, week 1, adjusted P = 0.012; subcutaneous vs. PD, week 1, adjusted P < 0.001; SDD vs. PD, week 6, adjusted P = 0.004; subcutaneous vs. PD, week 6, adjusted P = 0.024). The T2* parameter in the SDD treatment group and the subcutaneous injection treatment group was significantly higher than that in the PD control group at week 6 (SDD vs. PD, adjusted P = 0.032; subcutaneous vs. PD, adjusted P < 0.001). The combination of the FA and T2* parameters can potentially serve as a new effective method for evaluating the therapeutic effect of SDD. Level of Evidence: 1. Technical Efficacy: Stage 4. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  15. APPLICATION OF THE HSPF MODEL TO THE SOUTH FORK OF THE BROAD RIVER WATERSHED IN NORTHEASTERN GEORGIA

    EPA Science Inventory

    The Hydrological Simulation Program-Fortran (HSPF) is a comprehensive watershed model which simulates hydrology and water quality at user-specified temporal and spatial scales. Well-established model calibration and validation procedures are followed when adjusting model paramete...

  16. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  17. Ocean Turbulence. Paper 3; Two-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Dubovikov, M. S.; Howard, A.; Cheng, Y.

    1999-01-01

    In papers 1 and 2 we presented the results of the most updated 1-point closure model for the turbulent vertical diffusivities of momentum, heat and salt, K_{m,h,s}. In this paper, we derive the analytic expressions for K_{m,h,s} using a new 2-point closure model that has recently been developed and successfully tested against some 80 turbulence statistics for different flows. The new model has no free parameters. The expressions for K_{m,h,s} are analytical functions of two stability parameters: the Turner number R_rho (salinity gradient/temperature gradient) and the Richardson number R_i (temperature gradient/shear). The turbulent kinetic energy K and its rate of dissipation epsilon may be taken local or non-local (K-epsilon model). Contrary to all previous models, which describe turbulent mixing below the mixed layer (ML) by adopting three adjustable "background diffusivities" for momentum, heat and salt, we propose a model that avoids such adjustable diffusivities. We assume that below the ML, K_{m,h,s} have the same functional dependence on R_i and R_rho derived from the turbulence model. However, in order to compute R_i below the ML, we use data on vertical shear due to wave-breaking measured by Gargett et al. (1981). The procedure frees the model from adjustable background diffusivities, and indeed we use the same model throughout the entire vertical extent of the ocean. Using the new K_{m,h,s}, we run an O-GCM and present a variety of results that we compare with Levitus and the KPP model. Since the traditional 1-point closure (used in papers 1 and 2) and the new 2-point closure used here represent different modeling philosophies and procedures, testing them in an O-GCM is indispensable. The basic motivation is to show that the new 2-point closure model gives results that are overall superior to those of the 1-point closure, in spite of the fact that the latter relies on several adjustable parameters while the new 2-point closure has none. After the extensive comparisons presented in papers 1 and 2, we conclude that the new model presented here is overall superior, not only because it is parameter free but also because it is part of a more general turbulence model that has previously been successfully tested on a wide variety of other types of turbulent flows.

  18. Regression dilution in the proportional hazards model.

    PubMed

    Hughes, M D

    1993-12-01

    The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered, whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained, showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship decreases, the adjustment required increases; when there is no censorship it is markedly higher than 1 + lambda and depends also on the true risk relationship.
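
    A minimal numerical sketch of the correction described above, with made-up numbers: given estimates of the within-person and between-person variance components, the naive log hazard ratio is scaled up by 1 + lambda, which the abstract notes is appropriate under heavy censoring.

```python
import numpy as np

# Illustrative variance components, not values from the paper.
within_var = 0.30    # within-person variability about the underlying mean
between_var = 0.60   # variability of underlying levels in the population
lam = within_var / between_var          # lambda in the abstract

beta_naive = np.log(1.25)               # naive log hazard ratio from observed covariates
beta_adjusted = beta_naive * (1 + lam)  # correction valid with high censorship

print(f"lambda      = {lam:.2f}")
print(f"naive HR    = {np.exp(beta_naive):.3f}")
print(f"adjusted HR = {np.exp(beta_adjusted):.3f}")
```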

  19. Temperature based Restricted Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping

    2016-01-01

    Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution from which RBMs originate. However, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature-based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
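
    The role of temperature can be illustrated with the hidden-unit activation of an RBM: dividing the pre-activation by a temperature T is equivalent to rescaling the sharpness of the logistic, as the abstract notes. The sketch below uses toy weights (not the paper's networks) to show how a higher T flattens the activation probabilities and thus reduces the selectivity of the hidden units.

```python
import numpy as np

def hidden_activation_prob(v, W, b_h, temperature=1.0):
    """P(h_j = 1 | v) for an RBM, with a temperature controlling the
    sharpness of the logistic (T = 1 recovers the standard RBM)."""
    pre = (v @ W + b_h) / temperature
    return 1.0 / (1.0 + np.exp(-pre))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(6, 4))   # toy visible-to-hidden weights
b_h = np.zeros(4)
v = rng.integers(0, 2, size=6).astype(float)

for T in (0.5, 1.0, 2.0, 5.0):
    p = hidden_activation_prob(v, W, b_h, temperature=T)
    print(f"T = {T:>3}: P(h=1|v) = {np.round(p, 3)}")
```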

  20. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion, i.e., the variance of the rate parameter exceeding its mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
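
    A common quick check for the overdispersion discussed here is the Pearson dispersion statistic of a Poisson GLM (values well above 1 suggest overdispersion), after which quasi-likelihood or robust standard errors can be used. Below is a generic sketch with simulated overdispersed counts, not the authors' relative-survival data or their score test.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)

# Simulate overdispersed counts (negative binomial via gamma-Poisson mixture).
counts = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(x)
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Pearson dispersion: ~1 under the Poisson assumption, >1 if overdispersed.
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson dispersion estimate: {dispersion:.2f}")

# Robust (sandwich) standard errors as one simple remedy.
robust_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(robust_fit.summary().tables[1])
```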

  1. A regression model for calculating the second dimension retention index in comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry.

    PubMed

    Wang, Bing; Shen, Hao; Fang, Aiqin; Huang, De-Shuang; Jiang, Changjun; Zhang, Jun; Chen, Peng

    2016-06-17

    The comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) system has become a key analytical technology in high-throughput analysis. The retention index has proven helpful for compound identification in one-dimensional gas chromatography, and the same holds for two-dimensional gas chromatography. In this work, a novel regression model was proposed for calculating the second dimension retention index of target components, with n-alkanes used as reference compounds. This model depicts the relationship among adjusted second dimension retention time, temperature of the second dimension column and carbon number of n-alkanes by an exponential nonlinear function with only five parameters. Three different criteria were introduced to find the optimal values of the parameters. The performance of this model was evaluated using experimental data for n-alkanes (C7-C31) at 24 temperatures, covering the whole 0-6 s adjusted retention time region. The experimental results show that the mean relative error between the predicted adjusted retention times and the experimental data for n-alkanes was only 2%. Furthermore, the proposed model demonstrates a good extrapolation capability for predicting the adjusted retention time of target compounds that lie outside the range of the reference compounds in the second dimension adjusted retention time space: the deviation was less than 9 retention index units (iu) when extrapolating by as many as 5 alkanes. The performance of the proposed model has also been demonstrated by analyzing a mixture of compounds in temperature-programmed experiments. Copyright © 2016 Elsevier B.V. All rights reserved.
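
    The paper's exact five-parameter exponential function is not reproduced in this abstract, so the sketch below fits a generic three-parameter exponential relation between carbon number and adjusted second-dimension retention time at a single column temperature, only to illustrate the fit-then-invert workflow behind retention-index calculation. The functional form and the data are placeholders, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def adjusted_rt(carbon_number, a, b, c):
    """Placeholder exponential relation t2' = a * exp(b * n) + c."""
    return a * np.exp(b * carbon_number) + c

# Synthetic n-alkane calibration data at one column temperature (made up).
n_alkane = np.arange(7, 32)
rng = np.random.default_rng(3)
t2_obs = adjusted_rt(n_alkane, 0.05, 0.15, 0.2) + rng.normal(0, 0.02, n_alkane.size)

popt, _ = curve_fit(adjusted_rt, n_alkane, t2_obs, p0=(0.1, 0.1, 0.0))

# Invert the fitted curve to get a (continuous) carbon number, hence a
# retention index RI = 100 * n, for a target compound's adjusted RT.
a, b, c = popt
t2_target = 1.1
n_equiv = np.log((t2_target - c) / a) / b
print(f"fitted parameters: {np.round(popt, 4)}")
print(f"retention index of target: {100 * n_equiv:.0f}")
```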

  2. The pitch of short-duration fundamental frequency glissandos.

    PubMed

    d'Alessandro, C; Rosset, S; Rossi, J P

    1998-10-01

    Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.
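
    A weighted time-average model of the kind proposed can be sketched as follows: the perceived pitch is a weighted mean of the instantaneous F0 trajectory, with weights that grow toward the matched extremity. The ramp weighting and the parameter values here are illustrative guesses, not the optimal parameters derived in the paper.

```python
import numpy as np

def glissando_f0(f_center=220.0, extent_semitones=6.0, duration=0.2, n=200):
    """Linear-in-semitones F0 glissando centered at f_center (Hz)."""
    t = np.linspace(0.0, duration, n)
    semitones = np.linspace(-extent_semitones / 2, extent_semitones / 2, n)
    return t, f_center * 2.0 ** (semitones / 12.0)

def weighted_time_average_pitch(f0, emphasis=3.0):
    """Weighted time average with weights increasing toward the end
    (the matched extremity). emphasis=0 gives the plain time average."""
    w = np.linspace(0.0, 1.0, f0.size) ** emphasis + 1e-9
    return np.sum(w * f0) / np.sum(w)

t, f0 = glissando_f0()
print(f"plain time average  : {weighted_time_average_pitch(f0, 0.0):.1f} Hz")
print(f"end-weighted average: {weighted_time_average_pitch(f0, 3.0):.1f} Hz")
print(f"final F0            : {f0[-1]:.1f} Hz")
```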

  3. Radiometric Block Adjustment and Digital Radiometric Model Generation

    NASA Astrophysics Data System (ADS)

    Pros, A.; Colomina, I.; Navarro, J. A.; Antequera, R.; Andrinal, P.

    2013-05-01

    In this paper we present a radiometric block adjustment method that is related to geometric block adjustment and to the concept of a terrain Digital Radiometric Model (DRM) as a complement to the terrain digital elevation and surface models. A DRM, in our concept, is a function that for each ground point returns a reflectance value and a Bidirectional Reflectance Distribution Function (BRDF). In a similar way to the terrain geometric reconstruction procedure, given an image block of some terrain area, we split the DRM generation into two phases: radiometric block adjustment and DRM generation. In this paper we concentrate on the radiometric block adjustment step, but we also describe a preliminary DRM generator. In the block adjustment step, after a radiometric pre-calibration step, local atmosphere radiative transfer parameters, together with ground reflectances and BRDFs at the radiometric tie points, are estimated. This radiometric block adjustment is based on atmospheric radiative transfer (ART) models, pre-selected BRDF models and radiometric ground control points. The proposed concept is implemented and applied in an experimental campaign, and the obtained results are presented. The DRM and orthophoto mosaics are generated showing no radiometric differences at the seam lines.

  4. A computer model of the pediatric circulatory system for testing pediatric assist devices.

    PubMed

    Giridharan, Guruprasad A; Koenig, Steven C; Mitchell, Michael; Gartner, Mark; Pantalos, George M

    2007-01-01

    Lumped parameter computer models of the pediatric circulatory systems of 1- and 4-year-olds were developed to predict hemodynamic responses to mechanical circulatory support devices. Model parameters, including resistance, compliance and volume, were adjusted to match the hemodynamic pressure and flow waveforms, pressure-volume loops, percent systole, and heart rate of pediatric patients (n = 6) with normal ventricles. Left ventricular failure was modeled by adjusting the time-varying compliance curve of the left heart to produce aortic pressures and cardiac outputs consistent with those observed clinically. Models of pediatric continuous flow (CF) and pulsatile flow (PF) ventricular assist devices (VAD) and an intraaortic balloon pump (IABP) were developed and integrated into the heart-failure pediatric circulatory system models. Computer simulations were conducted to predict acute hemodynamic responses to PF and CF VADs operating at 50%, 75% and 100% support and to 2.5 and 5 ml IABPs operating in 1:1 and 1:2 support modes. The computer model of the pediatric circulation matched human pediatric hemodynamic waveform morphology to within 90% and cardiac function parameters with 95% accuracy. The computer model predicted that the PF VAD and IABP restore aortic pressure pulsatility and the variation in end-systolic and end-diastolic volume, whereas pulsatility diminishes with increasing CF VAD support.
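
    Lumped-parameter circulation models of this kind reduce vessel beds to resistance-compliance networks. The fragment below integrates a two-element Windkessel (one compliance, one resistance, with a pulsatile inflow) as a minimal stand-in for the far more detailed pediatric model described; all parameter values are illustrative round numbers, not the paper's.

```python
import numpy as np

# Two-element Windkessel: C dP/dt = Q_in(t) - P/R  (illustrative values).
R = 1.0       # peripheral resistance, mmHg*s/mL
C = 1.5       # arterial compliance, mL/mmHg
HR = 1.0      # heart period, s
SYSTOLE = 0.35 * HR

def q_in(t, stroke_volume=70.0):
    """Half-sine ejection during systole, zero flow in diastole."""
    phase = t % HR
    if phase < SYSTOLE:
        return stroke_volume * np.pi / (2 * SYSTOLE) * np.sin(np.pi * phase / SYSTOLE)
    return 0.0

dt, t_end = 1e-4, 10 * HR
n = int(t_end / dt)
P = np.empty(n)
P[0] = 80.0   # initial aortic pressure, mmHg
for i in range(1, n):
    t = i * dt
    P[i] = P[i - 1] + dt * (q_in(t) - P[i - 1] / R) / C   # forward Euler

last_beat = P[-int(HR / dt):]
print(f"systolic / diastolic pressure: {last_beat.max():.0f} / {last_beat.min():.0f} mmHg")
```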

  5. Tuning a climate model using nudging to reanalysis.

    NASA Astrophysics Data System (ADS)

    Cheedela, S. K.; Mapes, B. E.

    2014-12-01

    Tuning an atmospheric general circulation model involves the daunting task of adjusting non-observable parameters to obtain a realistic mean climate. These parameters arise from the necessity to describe unresolved flow through parametrizations. Tuning a climate model is often done with a certain set of priorities, such as global mean temperature and net top-of-the-atmosphere radiation. These priorities are hard enough to reach, let alone reducing the systematic biases in the models. The goal of the current study is to explore alternative ways to tune a climate model so as to reduce some systematic biases, to be used in synergy with existing efforts. Nudging a climate model to a known state is a poor man's inverse of the tuning process described above. Our approach involves nudging the atmospheric model to state-of-the-art reanalysis fields, thereby providing a balanced state with respect to the global mean temperature and winds. The tendencies derived from nudging are the negative of the errors from the physical parametrizations, as the errors from the dynamical core are expected to be small. Patterns of nudging are compared to the patterns of the different physical parametrizations to decipher the causes of certain biases in relation to the tuning parameters. This approach might also help in understanding certain compensating errors that arise from the tuning process. ECHAM6 is a comprehensive general circulation model, also used in the recent Coupled Model Intercomparison Project (CMIP5). The approach used to tune it and the effects of the parameters that shape its mean climate are clearly documented, so it serves as a benchmark for our approach. Our planned experiments include nudging the ECHAM6 atmospheric model to the European Center Reanalysis (ERA-Interim) and the reanalysis from the National Center for Environmental Prediction (NCEP), and deciphering the choices of parameters that lead to systematic biases in its simulations. Of particular interest is reducing long-standing biases related to the simulation of the Asian summer monsoon.
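
    The nudging idea can be shown with a toy dynamical system: a relaxation term pulls the model state toward a reference trajectory, and the accumulated nudging tendencies estimate the model's systematic error. The damped-oscillator setup, the bias term and the time scale below are illustrative stand-ins, not ECHAM6 or a reanalysis.

```python
import numpy as np

def model_tendency(x, biased=True):
    """Toy 'model': damped oscillator, with an artificial bias term
    standing in for parametrization error."""
    a = np.array([[0.0, 1.0], [-1.0, -0.1]])
    bias = np.array([0.0, 0.3]) if biased else np.zeros(2)
    return a @ x + bias

dt, tau = 0.01, 0.05          # time step and nudging relaxation time scale
n = 20_000
x_ref = np.array([1.0, 0.0])  # 'reanalysis': the unbiased trajectory
x = x_ref.copy()
nudge_sum = np.zeros(2)

for _ in range(n):
    x_ref = x_ref + dt * model_tendency(x_ref, biased=False)
    nudge = (x_ref - x) / tau                  # relaxation toward reference
    x = x + dt * (model_tendency(x, biased=True) + nudge)
    nudge_sum += dt * nudge

mean_nudging_tendency = nudge_sum / (n * dt)
print("time-mean nudging tendency:", np.round(mean_nudging_tendency, 3))
print("(approximately the negative of the imposed bias [0, 0.3])")
```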

  6. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    PubMed Central

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-01-01

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075

  7. Physicochemical and thermodynamic investigation of hydrogen absorption and desorption in LaNi3.8Al1.0Mn0.2 using the statistical physics modeling

    NASA Astrophysics Data System (ADS)

    Bouaziz, Nadia; Ben Manaa, Marwa; Ben Lamine, Abdelmottaleb

    2018-06-01

    In the present work, experimental absorption and desorption isotherms of hydrogen in LaNi3.8Al1.0Mn0.2 metal at two temperatures (T = 433 K, 453 K) have been fitted using a monolayer model with two energies, treated by a statistical physics formalism by means of the grand canonical ensemble. Six parameters of the model are adjusted, namely the numbers of hydrogen atoms per site nα and nβ, the receptor site densities Nmα and Nmβ, and the energetic parameters Pα and Pβ. The behaviors of these parameters are discussed in relation to the temperature of the absorption/desorption process. Then, the simultaneous evolution with pressure of the two α and β phases during absorption and desorption is investigated dynamically using the adjusted parameters. From the energetic parameters, we calculated the sorption energies, which typically range between 276.107 and 310.711 kJ/mol for the absorption process and between 277.01 and 310.9 kJ/mol for the desorption process, comparable to usual chemical bond energies. The thermodynamic parameters calculated from the experimental data, such as entropy, Gibbs free energy and internal energy, showed that the absorption/desorption of hydrogen in the LaNi3.8Al1.0Mn0.2 alloy is feasible, spontaneous and exothermic in nature.

  8. Static shape control for flexible structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identification and estimation of the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional that adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least-squares superposition of shape functions obtained from the structural model.

  9. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model-based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design-based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametrically efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455

  10. Benchmarking antibiotic use in Finnish acute care hospitals using patient case-mix adjustment.

    PubMed

    Kanerva, Mari; Ollgren, Jukka; Lyytikäinen, Outi

    2011-11-01

    It is difficult to draw conclusions about the prudence of antibiotic use in different hospitals by directly comparing usage figures. We present a patient case-mix adjustment model of antibiotic use to rank hospitals while taking patient characteristics into account. Data on antibiotic use were collected during the national healthcare-associated infection (HAI) prevalence survey in 2005 in Finland in all 5 tertiary care, all 15 secondary care and 10 (25% of 40) other acute care hospitals. The use of antibiotics was measured using use-days/100 patient-days during a 7-day period and the prevalence of patients receiving at least two antimicrobials on the study day. Case-mix-adjusted antibiotic use was calculated using multivariate models and an indirect standardization method. Parameters in the model included age, sex, severity of underlying diseases, intensive care, haematology, preceding surgery, respirator, central venous and urinary catheters, community-associated infection, HAI and contact isolation due to methicillin-resistant Staphylococcus aureus. The ranking order changed by one position for 12 (40%) hospitals and by more than two positions for 13 (43%) hospitals when the case-mix-adjusted figures were compared with those observed. In 24 hospitals (80%), the observed antibiotic use density was lower than the use density expected from the case-mix adjustment. The patient case-mix adjustment of antibiotic use ranked the hospitals differently from the ranking according to observed use, and may be a useful tool for benchmarking hospital antibiotic use. However, the best set of easily and widely available parameters that would describe both the patient material and hospital activities remains to be determined.
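
    Indirect standardization as used here can be sketched in a few lines: a regression model fitted on the pooled data predicts each patient's expected antibiotic exposure from case-mix covariates, and a hospital's observed-to-expected ratio then ranks it on a case-mix-adjusted scale. The logistic model, covariates and data below are generic placeholders, not the Finnish survey model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, n),
    "age": rng.normal(60, 15, n),
    "icu": rng.integers(0, 2, n),
    "surgery": rng.integers(0, 2, n),
})
# Simulated outcome: patient on >= 2 antimicrobials on the survey day.
logit = (-3 + 0.02 * df.age + 1.0 * df.icu + 0.5 * df.surgery
         + 0.3 * (df.hospital % 3 == 0))        # hidden hospital effect
df["on_two_abx"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Case-mix model fitted on all hospitals pooled (no hospital term).
X = sm.add_constant(df[["age", "icu", "surgery"]])
fit = sm.GLM(df.on_two_abx.astype(float), X,
             family=sm.families.Binomial()).fit()
df["expected"] = fit.predict(X)

# Indirectly standardized ratio per hospital: observed / expected.
observed = df.groupby("hospital").on_two_abx.sum()
expected = df.groupby("hospital").expected.sum()
ratio = observed / expected
print(ratio.sort_values(ascending=False).round(2).head())
```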

  11. Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils

    NASA Astrophysics Data System (ADS)

    González, A.; Gomez-Iradi, S.; Munduate, X.

    2014-06-01

    From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory, using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is on the model parameters, whose values depend on the airfoil and the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document is mainly concerned with the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data for the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results show good agreement when the parameter values are adjusted using this methodology. DYSTOOL has been demonstrated to be a promising tool for 2D airfoil unsteady aerodynamic modeling.

  12. A Universe without Weak Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harnik, Roni; Kribs, Graham D.; Perez, Gilad

    2006-04-07

    A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''Weakless Universe'' is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe.

  13. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  14. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  15. Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment

    NASA Astrophysics Data System (ADS)

    Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.

    2016-08-01

    This paper is a contribution to lithium-ion battery modelling that takes ageing effects into account. It first analyses the impact of ageing on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that each electrode equilibrium potential, and also the whole cell equilibrium potential, can be modelled by a polynomial that requires only one adjustment parameter during ageing. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge between these two equilibrium states is unique for a given ageing level, is then proposed. Its efficiency is evaluated on a battery pack consisting of four cells.

  16. Establishment and correction of an Echelle cross-prism spectrogram reduction model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng

    2017-11-01

    The accuracy of an echelle cross-prism spectrometer depends on the degree to which the spectrum reduction model matches the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer and result in a mismatched reduction model, which produces an inaccurate wavelength calibration. Therefore, the calibration of the spectrogram reduction model is important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The image positions predicted by the model were simulated as the system parameters varied, to determine the influence of changes in prism refractive index, focal length and other parameters on the calculation results. The model was divided into different wavebands. The iterative method, the least squares principle, and element lamps with known characteristic wavelengths were used to calibrate the spectral model in the different wavebands and obtain the actual values of the system parameters. After correction, the deviation between the actual x- and y-coordinates and the coordinates calculated by the model is less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and can assist in accurate wavelength extraction. Repeated correction of the model can also guide instrument installation and adjustment, reducing the difficulty of instrument alignment.

  17. Users manual for an expert system (HSPEXP) for calibration of the hydrological simulation program; Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.

  18. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power.

  19. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    NASA Astrophysics Data System (ADS)

    Chen, Liangyuan

    2018-03-01

    The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors, including the nonlinear gas force load, were analyzed. Then, a simple linearized model was set up to analyze their influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness and driving force as examples of design parameter variation. The simulation results show that the desired stroke can be obtained by adjusting the excitation amplitude and frequency, that the equilibrium position can be adjusted through the DC input, and that, for the most efficient operation, the operating frequency must always equal the resonance frequency.

  1. Using Least Squares to Solve Systems of Equations

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2016-01-01

    The method of least squares (LS) yields exact solutions for the adjustable parameters when the number of data values n equals the number of parameters "p". This holds also when the fit model consists of "m" different equations and "m = p", which means that LS algorithms can be used to obtain solutions to systems of…

  2. Online Chapman Layer Calculator for Simulating the Ionosphere with Undergraduate and Graduate Students

    NASA Astrophysics Data System (ADS)

    Gross, N. A.; Withers, P.; Sojka, J. J.

    2014-12-01

    The Chapman Layer Model is a "textbook" model of the ionosphere (for example, "Theory of Planetary Atmospheres" by Chamberlain and Hunten, Academic Press (1978)). The model uses fundamental assumptions about the neutral atmosphere, the flux of ionizing radiation, and the recombination rate to calculate the ionization rate and the ion/electron density for a single-species atmosphere. We have developed a "Chapman Layer Calculator" application that is deployed on the web using Java. It allows the user to see how various parameters control the ion density, peak height, and profile of the ionospheric layer. Users can adjust parameters relevant to the thermosphere scale height (temperature, gravitational acceleration, molecular weight, neutral atmosphere density) and to the extreme ultraviolet (EUV) solar flux (reference EUV, distance from the Sun, and solar zenith angle) and then see how the layer changes. This allows the user to simulate the ionosphere on other planets by adjusting the appropriate parameters. The simulation has been used as an exploratory activity at the NASA/LWS Heliophysics Summer School 2014 and has an accompanying activity guide.
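
    The textbook Chapman layer underlying such a calculator has a closed form: with reduced height z' = (z - z0)/H, the electron density of an alpha-Chapman layer is Ne = N0 * exp(0.5 * (1 - z' - sec(chi) * e^(-z'))). The sketch below evaluates this profile for a few solar zenith angles; the scale height and peak values are illustrative, roughly Earth-like numbers, not values from the calculator.

```python
import numpy as np

def chapman_alpha(z_km, z0_km=250.0, H_km=50.0, n0=1e12, chi_deg=0.0):
    """Electron density (m^-3) of an alpha-Chapman layer.

    z0_km: peak altitude for overhead Sun; H_km: neutral scale height;
    n0: peak density for chi = 0. Illustrative Earth-like defaults.
    """
    zp = (z_km - z0_km) / H_km                      # reduced height z'
    sec_chi = 1.0 / np.cos(np.radians(chi_deg))
    return n0 * np.exp(0.5 * (1.0 - zp - sec_chi * np.exp(-zp)))

z = np.linspace(150, 500, 701)
for chi in (0, 45, 70):
    ne = chapman_alpha(z, chi_deg=chi)
    print(f"chi = {chi:2d} deg: peak Ne = {ne.max():.2e} m^-3 "
          f"at {z[ne.argmax()]:.0f} km")
```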

  3. Finite Nuclei in the Quark-Meson Coupling Model.

    PubMed

    Stone, J R; Guichon, P A M; Reinhard, P G; Thomas, A W

    2016-03-04

    We report the first use of the effective quark-meson coupling (QMC) energy density functional (EDF), derived from a quark model of hadron structure, to study a broad range of ground state properties of even-even nuclei across the periodic table in the nonrelativistic Hartree-Fock+BCS framework. The novelty of the QMC model is that the nuclear medium effects are treated through modification of the internal structure of the nucleon. The density dependence is microscopically derived and the spin-orbit term arises naturally. The QMC EDF depends on a single set of four adjustable parameters having a clear physics basis. When applied to diverse ground state data, the QMC EDF already produces, in its present simple form, overall agreement with experiment of a quality comparable to a representative Skyrme EDF. There exist, however, multiple Skyrme parameter sets, frequently tailored to describe selected nuclear phenomena. The QMC EDF set of fewer parameters, derived in this work, is not open to such variation, the chosen set being applied, without adjustment, to the properties of both finite nuclei and nuclear matter.

  4. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
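
    The perturb-and-observe sensitivity described here applies to any trained black-box model: shift one forcing input by a small amount and record the change in output per unit perturbation. Below is a generic sketch with a toy nonlinear function standing in for the MWA-ANN; the forcing names and values are invented for illustration.

```python
import numpy as np

def forcing_response_sensitivity(model, forcing, index, eps=1e-3):
    """Finite-difference sensitivity of a black-box model's output to one
    forcing input: d(output)/d(forcing[index]) at the given forcing vector."""
    bumped = forcing.copy()
    bumped[index] += eps
    return (model(bumped) - model(forcing)) / eps

# Toy stand-in for a trained ANN: water level responds to a rainfall MWA
# (index 0) and a groundwater-use MWA (index 1), nonlinearly.
def toy_model(x):
    rain_mwa, use_mwa = x
    return 10.0 + 2.0 * np.tanh(rain_mwa - 1.0) - 0.5 * use_mwa**2

wet = np.array([1.5, 0.5])   # wet period: high rainfall MWA, low usage
dry = np.array([0.4, 1.8])   # drought: low rainfall MWA, high usage

for label, x in [("wet period", wet), ("drought", dry)]:
    s_rain = forcing_response_sensitivity(toy_model, x, 0)
    s_use = forcing_response_sensitivity(toy_model, x, 1)
    print(f"{label:10s}: dLevel/dRain = {s_rain:+.2f}, dLevel/dUse = {s_use:+.2f}")
```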

  5. A systematic approach to parameter selection for CAD-virtual reality data translation using response surface methodology and MOGA-II.

    PubMed

    Abidi, Mustufa Haider; Al-Ahmari, Abdulrahman; Ahmad, Ali

    2018-01-01

    Advanced graphics capabilities have enabled the use of virtual reality as an efficient design technique. The integration of virtual reality in the design phase still faces impediments because of issues linked to the integration of CAD and virtual reality software. A set of empirical tests using the selected conversion parameters was found to yield properly represented virtual reality models. The reduced model yields an R-sq (pred) value of 72.71% and an R-sq (adjusted) value of 86.64%, indicating that 86.64% of the response variability can be explained by the model. The R-sq (pred) is 67.45%, which is not very high, indicating that the model should be further reduced by eliminating insignificant terms. The reduced model yields an R-sq (pred) value of 73.32% and an R-sq (adjusted) value of 79.49%, indicating that 79.49% of the response variability can be explained by the model. Using the optimization software MODE Frontier (Optimization, MOGA-II, 2014), four types of response surfaces for the three considered response variables were tested on the DOE data. The parameter values obtained using the proposed experimental design methodology result in better graphics quality and the other necessary design attributes.

  6. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
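
    In code, the calibration loop this chapter describes amounts to wrapping the watershed model in an objective function and letting an optimizer adjust the parameters until the simulated output matches observations. Below is a minimal single-objective sketch with a toy two-parameter rainfall-runoff model (a linear reservoir); the model, data and objective are illustrative only, not the chapter's methods.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
rain = rng.gamma(shape=0.5, scale=8.0, size=365)      # daily rainfall, mm

def linear_reservoir(params, rain):
    """Toy watershed model: storage S, runoff = k*S, infiltration loss c."""
    k, c = params
    s, q = 0.0, np.empty(rain.size)
    for t, r in enumerate(rain):
        s += max(r - c, 0.0)      # effective rainfall enters storage
        q[t] = k * s              # runoff proportional to storage
        s -= q[t]
    return q

# Synthetic 'observed' flows from known parameters plus noise.
true_params = (0.3, 2.0)
q_obs = linear_reservoir(true_params, rain) + rng.normal(0, 0.2, rain.size)

def rmse(params):
    q_sim = linear_reservoir(params, rain)
    return np.sqrt(np.mean((q_sim - q_obs) ** 2))

result = minimize(rmse, x0=[0.1, 1.0], method="Nelder-Mead")
print("calibrated (k, c):", np.round(result.x, 3), " true:", true_params)
```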

  7. Nutrient modeling for a semi-intensive IMC pond: an MS-Excel approach.

    PubMed

    Ray, Lala I P; Mal, B C; Moulick, S

    2017-11-01

    Semi-intensive Indian Major Carp (IMC) culture was practised in polythene-lined dugout ponds at the Aquacultural Farm of the Indian Institute of Technology, Kharagpur, West Bengal for 3 consecutive years at three different stocking densities (S.D.), viz., 20,000, 35,000 and 50,000 fingerlings per hectare of water spread area. Fingerlings of Catla, Rohu and Mrigal were raised at a stocking ratio of 4:3:3. Total ammonia nitrogen (TAN) values, along with other fishpond water quality parameters, were monitored at 1-day intervals to ensure a good water ecosystem for better fish growth. Water exchange was carried out before the TAN reached the critical limit. Field data on TAN obtained from the cultured fishponds stocked at the three different stocking densities were used to study the dynamics of TAN. A model previously developed to study nutrient dynamics in shrimp ponds was used to validate the observed data in the IMC pond ecosystem. Two years of observed TAN data were used to calibrate the spreadsheet model, and the same model was validated using the third year's observed data. Manual calibration based on a trial-and-error process of parameter adjustment was used, and several simulations were performed by changing the model parameters. After the adjustment of each parameter, the simulated and measured values of the water quality parameters were compared to judge the improvement in the model prediction. A forward finite-difference discretization method was used in an MS-Excel spreadsheet to calibrate and validate the model for obtaining the TAN levels during the culture period. Observed data from the cultured fishponds at the three stocking densities were used to standardize 13 model parameters. The efficiency of the developed spreadsheet model was found to be more than 90% for TAN estimation in the IMC cultured fishponds.
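
    The forward finite-difference scheme mentioned is just an explicit Euler update of a mass balance, which is easy to express outside a spreadsheet too. The TAN budget below (excretion input, first-order loss, water exchange) is a hypothetical simplified form with invented rate constants; the real model has 13 calibrated parameters.

```python
import numpy as np

# Hypothetical simplified TAN mass balance (mg/L per day terms):
#   dTAN/dt = excretion_input - k_loss * TAN - exchange_rate * (TAN - TAN_in)
excretion_input = 0.25   # TAN added by fish excretion and feed decay
k_loss = 0.08            # first-order nitrification/uptake loss rate
exchange_rate = 0.05     # fraction of pond water exchanged per day
tan_inflow = 0.1         # TAN concentration of exchange water
critical_tan = 2.0       # management threshold used to trigger exchange

dt, days = 0.5, 120
n = int(days / dt)
tan = np.empty(n)
tan[0] = 0.2
for i in range(1, n):
    loss = k_loss * tan[i - 1] + exchange_rate * (tan[i - 1] - tan_inflow)
    tan[i] = tan[i - 1] + dt * (excretion_input - loss)   # forward Euler step

print(f"steady-state TAN ~ {tan[-1]:.2f} mg/L "
      f"({'above' if tan[-1] > critical_tan else 'below'} the critical limit)")
```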

  8. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.

  9. Adjoint-Based Climate Model Tuning: Application to the Planet Simulator

    NASA Astrophysics Data System (ADS)

    Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef

    2018-01-01

    The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.

  10. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations at two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine whether complex downscaling methods and high-resolution climate models are necessary for updating, or whether simpler methods and lower-resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolution for updating IDF curves for urban drainage design.
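
    Method (1), the simple change factor, is the easiest to sketch: fit a GEV to annual maxima, compute the ratio of the future to the historical model return levels, and scale the observed return level by that ratio. Below is a generic illustration with synthetic annual-maximum series; the data and the resulting factor are invented, not the Pittsburgh results.

```python
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, return_period_yr):
    """GEV return level: the (1 - 1/T) quantile of the fitted distribution."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1.0 - 1.0 / return_period_yr, shape, loc, scale)

rng = np.random.default_rng(5)
# Synthetic 6-hour annual maxima (mm): observed, RCM historical, RCM future.
obs = genextreme.rvs(-0.1, loc=40, scale=10, size=50, random_state=rng)
rcm_hist = genextreme.rvs(-0.1, loc=35, scale=9, size=50, random_state=rng)
rcm_fut = genextreme.rvs(-0.1, loc=42, scale=11, size=50, random_state=rng)

T = 10  # 10-year return period
change_factor = return_level(rcm_fut, T) / return_level(rcm_hist, T)
updated = change_factor * return_level(obs, T)

print(f"change factor       : {change_factor:.2f}")
print(f"observed 10-yr level: {return_level(obs, T):.1f} mm")
print(f"updated 10-yr level : {updated:.1f} mm")
```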

  11. Bayesian inference for unidirectional misclassification of a binary response trait.

    PubMed

    Xia, Michelle; Gustafson, Paul

    2018-03-15

    When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.

  12. A two-dimensional hydrodynamic model of the St. Clair-Detroit River waterway in the Great Lakes basin

    USGS Publications Warehouse

    Holtschlag, David J.; Koschik, John A.

    2002-01-01

    The St. Clair–Detroit River Waterway connects Lake Huron with Lake Erie in the Great Lakes basin to form part of the international boundary between the United States and Canada. A two-dimensional hydrodynamic model is developed to compute flow velocities and water levels as part of a source-water assessment of public water intakes. The model, which uses the generalized finite-element code RMA2, discretizes the waterway into a mesh formed by 13,783 quadratic elements defined by 42,936 nodes. Seven steady-state scenarios are used to calibrate the model by adjusting parameters associated with channel roughness in 25 material zones in sub-areas of the waterway. An inverse modeling code is used to systematically adjust model parameters and to determine their associated uncertainty by use of nonlinear regression. Calibration results show close agreement between simulated and expected flows in major channels and water levels at gaging stations. Sensitivity analyses describe the amount of information available to estimate individual model parameters, and quantify the utility of flow measurements at selected cross sections and water-level measurements at gaging stations. Further data collection, model calibration analysis, and grid refinements are planned to assess and enhance two-dimensional flow simulation capabilities describing the horizontal flow distributions in St. Clair and Detroit Rivers and circulation patterns in Lake St. Clair.

  13. ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS

    PubMed Central

    Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.

    2017-01-01

    The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112

  14. Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.

    PubMed

    Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B

    2017-09-20

    The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
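
    To make the U-statistic connection concrete, here is a hedged sketch of the AUC as a two-sample U-statistic together with the component-based (DeLong-type) SE that the paper relates to U-statistic theory; it illustrates the machinery for a single AUC, not the paper's new SE estimate for ∆AUC.

      import numpy as np

      def auc_and_se(scores_pos, scores_neg):
          x, y = np.asarray(scores_pos), np.asarray(scores_neg)
          # Kernel psi(x_i, y_j) = 1 if x_i > y_j, 0.5 on a tie, else 0
          psi = (x[:, None] > y[None, :]) + 0.5 * (x[:, None] == y[None, :])
          auc = psi.mean()                 # the Mann-Whitney U-statistic
          v10 = psi.mean(axis=1)           # structural components over events
          v01 = psi.mean(axis=0)           # structural components over non-events
          var = v10.var(ddof=1) / x.size + v01.var(ddof=1) / y.size
          return auc, np.sqrt(var)

      rng = np.random.default_rng(2)
      pos = rng.normal(1.0, 1.0, 150)      # risk scores of events
      neg = rng.normal(0.0, 1.0, 350)      # risk scores of non-events
      print(auc_and_se(pos, neg))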

  15. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
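
    The paper does not publish code, so the sketch below only illustrates the generic idea behind such a strategy: a server that relaxes or tightens the allowed staleness between workers and the PS based on monitored iteration times. The class, thresholds, and adjustment rule are our invention, not the DSP algorithm itself.

      class ParameterServer:
          def __init__(self, n_workers, base_staleness=3):
              self.clock = [0] * n_workers      # iteration count per worker
              self.staleness = base_staleness

          def adjust_staleness(self, iter_times):
              # Large fast/slow spread -> allow more staleness so fast workers
              # are not blocked; small spread -> tighten toward synchronous.
              spread = max(iter_times) / max(min(iter_times), 1e-9)
              self.staleness = max(1, round(spread))

          def may_proceed(self, worker):
              # A worker may start its next iteration only if it is at most
              # `staleness` steps ahead of the slowest worker.
              return self.clock[worker] - min(self.clock) <= self.staleness

      ps = ParameterServer(n_workers=4)
      ps.adjust_staleness([0.8, 1.0, 1.1, 2.4])   # monitored seconds/iteration
      print(ps.staleness, ps.may_proceed(0))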

  16. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency among test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
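
    A small hedged illustration of the thread common to these methods: logistic-regression log-odds contributions act as adjusted log likelihood ratios, which are then combined with a pretest probability on the odds scale. The numbers are invented.

      import numpy as np

      def posttest_probability(pretest_p, adjusted_log_lrs):
          """Combine pretest odds with adjusted log-LRs of the observed results."""
          pretest_odds = pretest_p / (1 - pretest_p)
          posttest_odds = pretest_odds * np.exp(np.sum(adjusted_log_lrs))
          return posttest_odds / (1 + posttest_odds)

      # e.g. two positive findings whose shrunken (interdependence-adjusted)
      # log-LRs are 0.9 and 0.4, applied to a 30% pretest probability
      print(posttest_probability(0.30, [0.9, 0.4]))   # ~0.61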

  17. Error analysis of mechanical system and wavelength calibration of monochromator

    NASA Astrophysics Data System (ADS)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
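
    A hedged numerical sketch of why the sine-bar length matters: with a sine drive, sin(theta) = x / L, so the grating equation m * lambda = d * sin(theta) makes wavelength linear in screw travel x, and a length error dL shifts every wavelength by roughly -lambda * dL / L. The grating and lengths below are illustrative, not the instrument's.

      d = 1e6 / 1200          # groove spacing in nm for a 1200 lines/mm grating
      m = 1                   # diffraction order
      L = 200.0               # nominal sine-bar length, mm

      def wavelength(x_mm, L_mm=L):
          return d * (x_mm / L_mm) / m    # nm

      x = 120.0                                   # screw displacement, mm
      lam_nominal = wavelength(x)                 # 500 nm here
      lam_with_error = wavelength(x, L + 0.5)     # 0.5 mm error in bar length
      print(lam_nominal, lam_with_error - lam_nominal)   # shift ~ -1.25 nm

    This is why adjusting the sine bar's length (rather than only offsetting the readout) can compensate a systematic, wavelength-proportional error.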

  18. Modeling erosion under future climates with the WEPP model

    Treesearch

    Timothy Bayley; William Elliot; Mark A. Nearing; D. Phillp Guertin; Thomas Johnson; David Goodrich; Dennis Flanagan

    2010-01-01

    The Water Erosion Prediction Project Climate Assessment Tool (WEPPCAT) was developed to be an easy-to-use, web-based erosion model that allows users to adjust climate inputs for user-specified climate scenarios. WEPPCAT allows the user to modify monthly mean climate parameters, including maximum and minimum temperatures, number of wet days, precipitation, and...

  19. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to develop optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model to empirical data, the parameters of the therapy function are estimated. Previously reported studies have not presented an algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is introduced into the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are given to demonstrate the effectiveness of the proposed method.
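
    An illustrative Euler-Maruyama simulation of a stochastic Gompertz tumour model with a therapy term in the drift; the paper's exact therapy function and parameter values are not reproduced here, so every name and number below is an assumption.

      import numpy as np

      rng = np.random.default_rng(3)
      a, b, sigma = 1.0, 0.2, 0.1    # growth, decay, noise intensities (assumed)
      k1, k2 = 0.3, 1.0              # therapy parameters the controller would tune

      def therapy(x, k1, k2):
          # logarithmic therapy function (form assumed for illustration)
          return k1 * np.log(1.0 + k2 * x)

      dt, n_steps = 0.01, 5000
      x = np.empty(n_steps)
      x[0] = 1.0                     # initial tumour-cell population (scaled)
      for t in range(n_steps - 1):
          drift = x[t] * (a - b * np.log(x[t])) - therapy(x[t], k1, k2)
          x[t + 1] = max(x[t] + drift * dt
                         + sigma * x[t] * np.sqrt(dt) * rng.normal(), 1e-9)
      print(x[-1])

    Repeated runs of this loop give an empirical PDF of the population, the quantity the Fokker-Planck observer tracks and the controller shapes toward the desired PDF.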

  20. Forcings and feedbacks in the GeoMIP ensemble for a reduction in solar irradiance and increase in CO2

    NASA Astrophysics Data System (ADS)

    Huneeus, Nicolas; Boucher, Olivier; Alterskjær, Kari; Cole, Jason N. S.; Curry, Charles L.; Ji, Duoying; Jones, Andy; Kravitz, Ben; Kristjánsson, Jón Egill; Moore, John C.; Muri, Helene; Niemeier, Ulrike; Rasch, Phil; Robock, Alan; Singh, Balwinder; Schmidt, Hauke; Schulz, Michael; Tilmes, Simone; Watanabe, Shingo; Yoon, Jin-Ho

    2014-05-01

    The effective radiative forcings (including rapid adjustments) and feedbacks associated with an instantaneous quadrupling of the preindustrial CO2 concentration and a counterbalancing reduction of the solar constant are investigated in the context of the Geoengineering Model Intercomparison Project (GeoMIP). The forcing and feedback parameters of the net energy flux, as well as its different components at the top-of-atmosphere (TOA) and surface, were examined in 10 Earth System Models to better understand the impact of solar radiation management on the energy budget. In spite of their very different nature, the feedback parameter and its components at the TOA and surface are almost identical for the two forcing mechanisms, not only in the global mean but also in their geographical distributions. This conclusion holds for each of the individual models despite intermodel differences in how feedbacks affect the energy budget. This indicates that the climate sensitivity parameter is independent of the forcing (when measured as an effective radiative forcing). We also show the existence of a large contribution of the cloudy-sky component to the shortwave effective radiative forcing at the TOA suggesting rapid cloud adjustments to a change in solar irradiance. In addition, the models present significant diversity in the spatial distribution of the shortwave feedback parameter in cloudy regions, indicating persistent uncertainties in cloud feedback mechanisms.

  1. Research on fuzzy PID control to electronic speed regulator

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-gang; Chen, Xue-hui; Zheng, Sheng-guo

    2007-12-01

    As an important part of a diesel engine, the speed regulator plays a key role in stabilizing speed and improving engine performance. Traditional PID control must account for many diesel-engine model parameters, and these parameters exhibit non-linear characteristics, so traditional PID is not the best way to regulate engine speed, especially for diesel-engine generator sets. In this paper, a fuzzy PID control strategy is proposed, and problems concerning its use in an electronic speed regulator are discussed. A mathematical model of the electric control system for a diesel-engine generator set is established, and the way the PID parameters in the model affect system behaviour is analyzed. It is then argued that the derivative coefficient must be applied in the control design to reduce the system's dynamic deviation and settling time. Based on control theory, a scheme combining fuzzy control with PID calculation for tuning the fuzzy PID parameters is implemented, and a simulation of the electronic speed regulator system is conducted using Matlab/Simulink and the Fuzzy Toolbox. Compared with the traditional PID algorithm, the simulation results show clear improvements in the transient and steady-state speed-governing performance of the diesel-engine generator set when the fuzzy logic control strategy is used.
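
    A toy sketch (not the paper's controller) of the core mechanism: fuzzy rules map the speed error e and its derivative de to corrections of the PID gains. The membership functions and rule weights below are invented for illustration.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function on [a, c] peaking at b."""
          return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                       (c - x) / (c - b + 1e-12)), 0.0)

      def fuzzy_gain_correction(e, de):
          # degrees of membership: negative / zero / positive error
          neg, zero, pos = tri(e, -2, -1, 0), tri(e, -1, 0, 1), tri(e, 0, 1, 2)
          # illustrative rules: large |e| -> raise Kp, small |e| -> raise Kd,
          # and add integral action only near the setpoint when de is small
          dKp = 0.5 * (neg + pos) - 0.2 * zero
          dKd = 0.3 * zero - 0.1 * (neg + pos)
          dKi = 0.05 * zero * (1.0 - abs(de))
          return dKp, dKi, dKd

      Kp, Ki, Kd = 1.0, 0.1, 0.05                    # base gains
      dKp, dKi, dKd = fuzzy_gain_correction(e=0.8, de=-0.1)
      print(Kp + dKp, Ki + dKi, Kd + dKd)            # gains used at this instant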

  2. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  3. A new universal dynamic model to describe eating rate and cumulative intake curves

    PubMed Central

    Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B

    2017-01-01

    Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally captures gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to assess the quality of the model's fit relative to the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4 compared with 581.6 ± 563.5 for the quadratic model, or a 43% improvement in fit. Individual comparison demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could compared with as much as they felt comfortable with, the maximal intake parameter was approximately double. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others only produced a satiation phase. Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377

  4. The Rothermel surface fire spread model and associated developments: A comprehensive explanation

    Treesearch

    Patricia L. Andrews

    2018-01-01

    The Rothermel surface fire spread model, with some adjustments by Frank A. Albini in 1976, has been used in fire and fuels management systems since 1972. It is generally used with other models including fireline intensity and flame length. Fuel models are often used to define fuel input parameters. Dynamic fuel models use equations for live fuel curing. Models have...

  5. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  6. Mean gravity anomalies and sea surface heights derived from GEOS-3 altimeter data

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1978-01-01

    Approximately 2000 GEOS-3 altimeter arcs were analyzed to improve knowledge of the geoid and gravity field. The sea surface heights (geoid undulations) were fitted in an adjustment process that incorporated cross-over constraints. The error model used for the fit was a one- or two-parameter model designed to remove altimeter bias and orbit error. The undulations on the adjusted arcs were used to produce geoid maps in 20 regions. The adjusted data were used to derive 301 5-degree equal-area anomalies and 9995 1 x 1 degree anomalies in areas where the altimeter data were most dense, using least squares collocation techniques. Also emphasized was the ability of the altimeter data to reveal rapid anomaly changes of up to 240 mgals in adjacent 1 x 1 degree blocks.
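
    A minimal hedged sketch of a crossover adjustment with the one-parameter (bias-only) error model: at each crossover of arcs i and j, the height discrepancy is bias_i - bias_j plus noise, so the per-arc biases follow from least squares once one arc is held fixed. The data are synthetic; a two-parameter model would simply add a time-trend column per arc.

      import numpy as np

      n_arcs = 5
      rng = np.random.default_rng(4)
      true_bias = rng.normal(0, 1.0, n_arcs)
      crossovers = [(i, j) for i in range(n_arcs) for j in range(i + 1, n_arcs)]

      A = np.zeros((len(crossovers), n_arcs))
      d = np.empty(len(crossovers))
      for row, (i, j) in enumerate(crossovers):
          A[row, i], A[row, j] = 1.0, -1.0          # d = bias_i - bias_j + noise
          d[row] = true_bias[i] - true_bias[j] + rng.normal(0, 0.05)

      # Fix arc 0 to remove the rank defect; in practice the reference level
      # comes from an external datum, here we use the truth to check recovery.
      est = np.linalg.lstsq(A[:, 1:], d, rcond=None)[0] + true_bias[0]
      print(np.round(est - true_bias[1:], 3))       # residuals near zero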

  7. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    PubMed

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.

  8. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579

  9. Modelling decremental ramps using 2- and 3-parameter "critical power" models.

    PubMed

    Morton, R Hugh; Billat, Veronique

    2013-01-01

    The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.

  10. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.

  11. The elimination of colour blocks in remote sensing images in VR

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Li, Guohui; Su, Zhenyu

    2018-02-01

    Considering the characteristics, in HSI colour space, of remote sensing images acquired at different times in VR, a unified colour algorithm is proposed. First, the method converts the original image from RGB colour space to HSI colour space. Then, based on the invariance of hue before and after colour adjustment in HSI space and on the translational behaviour of brightness after adjustment, a linear model satisfying these image characteristics is established, and the range of the model's parameters is determined. Finally, the established colour adjustment model is verified experimentally. The results show that the proposed algorithm can effectively enhance image clarity, solves the colour-block problem well, and is fast.
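
    A hedged sketch of the colour-space step the abstract describes: convert RGB to HSI (the standard geometric formula) so that hue can be held fixed while intensity is shifted by a linear model. The linear coefficients at the end are placeholders, not the paper's fitted values.

      import numpy as np

      def rgb_to_hsi(rgb):
          r, g, b = [c.astype(float) / 255.0 for c in np.moveaxis(rgb, -1, 0)]
          i = (r + g + b) / 3.0
          s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)
          num = 0.5 * ((r - g) + (r - b))
          den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
          h = np.arccos(np.clip(num / den, -1.0, 1.0))
          h = np.where(b > g, 2.0 * np.pi - h, h)    # reflect for B > G
          return h, s, i

      img = np.random.default_rng(5).integers(0, 256, (4, 4, 3), dtype=np.uint8)
      h, s, i = rgb_to_hsi(img)
      # hue h kept fixed; intensity shifted by an illustrative linear model
      i_adjusted = np.clip(1.05 * i + 0.02, 0.0, 1.0)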

  12. DSD-Consistent JWL Equations of State for EDC35

    NASA Astrophysics Data System (ADS)

    Hodgson, Alexander

    2011-06-01

    The Detonation Shock Dynamics (DSD) model allows the calculation of curvature-dependent detonation propagation. It is of particular use when applied to insensitive high explosives, such as EDC35, since they exhibit more non-ideal behaviour. The DSD model has been used in conjunction with an experimental cylinder test to obtain the JWL Equation of State (EoS) for EDC35. Adjusting the parameters in the JWL equation changes the expansion profile of the simulated wall expansion; the parameters are iterated until the best match is obtained between simulation and experiment. Previous DSD models used at AWE have no energy release mechanism to adjust the release of chemical energy to match the detonation conditions. Two JWL calibrations are performed using the DSD model, with and without Hetherington's energy release model (these proceedings). A newly calibrated detonation speed-curvature relation, much closer than previous calibrations to Bdzil's equivalent for PBX9502, is also used. This paper discusses the possible improvements that this approach makes to the EDC35 JWL EoS.
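
    The JWL form itself is standard, so a short sketch of the pressure-volume relation being calibrated may help; the parameter values below are generic placeholders, not the EDC35 calibration.

      import numpy as np

      def jwl_pressure(V, E, A, B, R1, R2, omega):
          """JWL EoS: V is relative volume, E is energy per unit initial volume."""
          return (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
                  + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V)
                  + omega * E / V)

      V = np.linspace(0.5, 7.0, 200)
      p = jwl_pressure(V, E=0.05, A=600.0, B=10.0, R1=4.5, R2=1.2, omega=0.35)
      # Calibration iterates A, B, R1, R2, omega until the simulated cylinder-test
      # wall expansion matches the experimental profile.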

  13. Fitting the Rasch Model to Account for Variation in Item Discrimination

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2009-01-01

    Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…

  14. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient, and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms and to reproduce dust concentrations more accurately.
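
    For concreteness, a hedged sketch of the four metrics named above in their usual definitions (the study may use normalised variants):

      import numpy as np

      def evaluate(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          mbe = np.mean(sim - obs)                       # mean bias error
          rmse = np.sqrt(np.mean((sim - obs) ** 2))      # root mean square error
          r = np.corrcoef(sim, obs)[0, 1]                # correlation coefficient
          fge = 2.0 * np.mean(np.abs(sim - obs) / (sim + obs))  # fractional gross error
          return {"MBE": mbe, "RMSE": rmse, "r": r, "FGE": fge}

      obs = np.array([0.12, 0.30, 0.55, 0.48, 0.20])     # e.g. observed AOD
      sim = np.array([0.20, 0.42, 0.70, 0.52, 0.31])     # e.g. modelled AOD
      print(evaluate(sim, obs))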

  15. Foot Type Biomechanics Part 2: are structure and anthropometrics related to function?

    PubMed

    Mootanah, Rajshree; Song, Jinsup; Lenhoff, Mark W; Hafer, Jocelyn F; Backus, Sherry I; Gagnon, David; Deland, Jonathan T; Hillstrom, Howard J

    2013-03-01

    Many foot pathologies are associated with specific foot types. If foot structure and function are related, measurement of either could assist with differential diagnosis of pedal pathologies. Biomechanical measures of foot structure and function are related in asymptomatic healthy individuals. Sixty-one healthy subjects' left feet were stratified into cavus (n=12), rectus (n=27) and planus (n=22) foot types. Foot structure was assessed by malleolar valgus index, arch height index, and arch height flexibility. Anthropometrics (height and weight), age, and walking speed were measured. Foot function was assessed by center of pressure excursion index, peak plantar pressure, maximum force, and gait pattern parameters. Foot structure and anthropometric variables were entered into stepwise linear regression models to identify predictors of function. Measures of foot structure and anthropometrics explained 10-37% of the model variance (adjusted R²) for gait pattern parameters. When walking speed was included, the adjusted R² increased to 45-77% but foot structure was no longer a factor. Foot structure and anthropometrics predicted 7-47% of the model variance for plantar pressure and 16-64% for maximum force parameters. All multivariate models were significant (p < 0.05), supporting acceptance of the hypothesis. Foot structure and function are related in asymptomatic healthy individuals. The structural parameters employed are basic measurements that do not require ionizing radiation and could be used in a clinical setting. Further research is needed to identify additional predictive parameters (plantar soft tissue characteristics, skeletal alignment, and neuromuscular control) and to include individuals with pathology. Copyright © 2012. Published by Elsevier B.V.

  16. Foot Type Biomechanics Part 2: Are structure and anthropometrics related to function?

    PubMed Central

    Mootanah, Rajshree; Song, Jinsup; Lenhoff, Mark W.; Hafer, Jocelyn F.; Backus, Sherry I.; Gagnon, David; Deland, Jonathan T.; Hillstrom, Howard J.

    2013-01-01

    Background Many foot pathologies are associated with specific foot types. If foot structure and function are related, measurement of either could assist with differential diagnosis of pedal pathologies. Hypothesis Biomechanical measures of foot structure and function are related in asymptomatic healthy individuals. Methods Sixty-one healthy subjects' left feet were stratified into cavus (n = 12), rectus (n = 27) and planus (n = 22) foot types. Foot structure was assessed by malleolar valgus index, arch height index, and arch height flexibility. Anthropometrics (height and weight), age, and walking speed were measured. Foot function was assessed by center of pressure excursion index, peak plantar pressure, maximum force, and gait pattern parameters. Foot structure and anthropometric variables were entered into stepwise linear regression models to identify predictors of function. Results Measures of foot structure and anthropometrics explained 10–37% of the model variance (adjusted R2) for gait pattern parameters. When walking speed was included, the adjusted R2 increased to 45–77% but foot structure was no longer a factor. Foot structure and anthropometrics predicted 7–47% of the model variance for plantar pressure and 16–64% for maximum force parameters. All multivariate models were significant (p < 0.05), supporting acceptance of the hypothesis. Discussion and conclusion Foot structure and function are related in asymptomatic healthy individuals. The structural parameters employed are basic measurements that do not require ionizing radiation and could be used in a clinical setting. Further research is needed to identify additional predictive parameters (plantar soft tissue characteristics, skeletal alignment, and neuromuscular control) and to include individuals with pathology. PMID:23107624

  17. The Association between Parameters of Malnutrition and Diagnostic Measures of Sarcopenia in Geriatric Outpatients

    PubMed Central

    Reijnierse, Esmee M.; Trappenburg, Marijke C.; Leter, Morena J.; Blauw, Gerard Jan; de van der Schueren, Marian A. E.; Meskers, Carel G. M.; Maier, Andrea B.

    2015-01-01

    Objectives Diagnostic criteria for sarcopenia include measures of muscle mass, muscle strength and physical performance. Consensus on the definition of sarcopenia has not been reached yet. To improve insight into the most clinically valid definition of sarcopenia, this study aimed to compare the association between parameters of malnutrition, as a risk factor in sarcopenia, and diagnostic measures of sarcopenia in geriatric outpatients. Material and Methods This study is based on data from a cross-sectional study conducted in a geriatric outpatient clinic including 185 geriatric outpatients (mean age 82 years). Parameters of malnutrition included risk of malnutrition (assessed by the Short Nutritional Assessment Questionnaire), loss of appetite, unintentional weight loss and underweight (body mass index <22 kg/m2). Diagnostic measures of sarcopenia included relative muscle mass (lean mass and appendicular lean mass [ALM] as percentages), absolute muscle mass (total lean mass and ALM/height2), handgrip strength and walking speed. All diagnostic measures of sarcopenia were standardized. Associations between parameters of malnutrition (independent variables) and diagnostic measures of sarcopenia (dependent variables) were analysed using multivariate linear regression models adjusted for age, body mass, fat mass and height in separate models. Results None of the parameters of malnutrition was consistently associated with diagnostic measures of sarcopenia. The strongest associations were found for both relative and absolute muscle mass; less strong associations were found for muscle strength and physical performance. Underweight (p < 0.001) and unintentional weight loss (p = 0.031) were most strongly associated with higher lean mass percentage after adjusting for age. Loss of appetite (p = 0.003) and underweight (p = 0.021) were most strongly associated with lower total lean mass after adjusting for age and fat mass. Conclusion Parameters of malnutrition relate differently to diagnostic measures of sarcopenia in geriatric outpatients. The association between parameters of malnutrition and diagnostic measures of sarcopenia was strongest for both relative and absolute muscle mass, while less strong associations were found with muscle strength and physical performance. PMID:26284368

  18. Applications of Monte Carlo method to nonlinear regression of rheological data

    NASA Astrophysics Data System (ADS)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale and the number of parameters is quite large, conventional nonlinear regression methods such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima that give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem could be solved by simulated annealing (SA), the underlying Monte Carlo (MC) method needs adjustable parameters that are usually determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective parameter values for complicated rheological models such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
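
    A minimal simulated-annealing sketch in the spirit of the paper: fit the Carreau-Yasuda model in log space using random log-scale perturbations accepted by the Metropolis rule under a cooling schedule. The schedule, step sizes, and data are our assumptions, not the authors'.

      import numpy as np

      rng = np.random.default_rng(6)

      def carreau_yasuda(gdot, eta0, etainf, lam, a, n):
          return etainf + (eta0 - etainf) * (1 + (lam * gdot) ** a) ** ((n - 1) / a)

      gdot = np.logspace(-2, 3, 30)                    # shear rates, 1/s
      truth = (1e3, 1.0, 2.0, 2.0, 0.4)
      data = carreau_yasuda(gdot, *truth) * rng.lognormal(0, 0.02, gdot.size)

      def cost(p):
          # residuals in log scale, since viscosity spans decades
          return np.sum((np.log(carreau_yasuda(gdot, *p)) - np.log(data)) ** 2)

      p = np.array([1e2, 0.5, 1.0, 1.5, 0.5])          # poor initial guess
      c = cost(p)
      for step in range(20000):
          T = 1.0 * 0.9995 ** step                     # geometric cooling schedule
          cand = p * np.exp(rng.normal(0, 0.05, p.size))   # log-scale perturbation
          cc = cost(cand)
          if cc < c or rng.random() < np.exp((c - cc) / T):   # Metropolis acceptance
              p, c = cand, cc
      print(np.round(p, 3), round(c, 5))

    Perturbing multiplicatively in log space keeps all parameters positive and gives comparable step sizes across quantities that differ by orders of magnitude, which is the paper's motivation for working on a logarithmic scale.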

  19. Application of the migration models implemented in the decision system MOIRA-PLUS to assess the long term behaviour of (137)Cs in water and fish of the Baltic Sea.

    PubMed

    Monte, Luigi

    2014-08-01

    This work presents and discusses the results of an application of the contaminant migration models implemented in the decision support system MOIRA-PLUS to simulate the time behaviour of the concentrations of (137)Cs of Chernobyl origin in water and fish of the Baltic Sea. The results of the models were compared with the extensive sets of highly reliable empirical data of radionuclide contamination available from international databases and covering a period of approximately twenty years. The model application involved three main phases: a) customisation performed by using hydrological, morphometric and water circulation data obtained from the literature; b) a blind test of the model results, in the sense that the models made use of default values of the migration parameters to predict the dynamics of the contaminant in the environmental components; and c) adjustment of the model parameter values to improve the agreement of the predictions with the empirical data. The results of the blind test showed that the models successfully predicted the empirical contamination values within the expected range of uncertainty of the predictions (confidence level at 68% of approximately a factor 2). The parameter adjustment can be helpful for the assessment of the fluxes of water circulating among the main sub-basins of the Baltic Sea, substantiating the usefulness of radionuclides to trace the movement of masses of water in seas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  1. Color identification and fuzzy reasoning based monitoring and controlling of fermentation process of branched chain amino acid

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Wang, Yizhong; Xu, Qingyang; Huang, Huafang; Zhang, Rui; Chen, Ning

    2009-11-01

    The main production method of branched chain amino acid (BCAA) is microbial fermentation. In this paper, to monitor and control the fermentation process of BCAA, especially its logarithmic phase, parameters such as the color of the fermentation broth, culture temperature, pH, revolution, dissolved oxygen, airflow rate, pressure, optical density, and residual glucose are measured and/or controlled and/or adjusted. The color of the fermentation broth is measured using the HSI color model and a BP neural network. The network's inputs are the histograms of hue H and saturation S, and its output is the color description. Fermentation process parameters are adjusted using fuzzy reasoning, which is performed by inference rules. According to the practical situation of the BCAA fermentation process, all parameters are divided into four grades, and different fuzzy rules are established.

  2. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Among all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be a competitive candidate, as it best fits the loss data. The final part applies the backtesting method to verify the validity of the VaR and CTE risk measures.

  3. Outdoor ground impedance models.

    PubMed

    Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram

    2011-05-01

    Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.

  4. Exploring the importance of within-canopy spatial temperature variation on transpiration predictions

    PubMed Central

    Bauerle, William L.; Bowden, Joseph D.; Wang, G. Geoff; Shahba, Mohamed A.

    2009-01-01

    Models seldom consider the effect of leaf-level biochemical acclimation to temperature when scaling forest water use. Therefore, the dependence of transpiration on temperature acclimation was investigated at the within-crown scale in climatically contrasting genotypes of Acer rubrum L., cv. October Glory (OG) and Summer Red (SR). The effects of temperature acclimation on intracanopy gradients in transpiration over a range of realistic forest growth temperatures were also assessed by simulation. Physiological parameters were applied, with or without adjustment for temperature acclimation, to account for transpiration responses to growth temperature. Both types of parameterization were scaled up to stand transpiration (expressed per unit leaf area) with an individual tree model (MAESTRA) to assess how transpiration might be affected by spatial and temporal distributions of foliage properties. The MAESTRA model performed well, but its reproducibility was dependent on physiological parameters acclimated to daytime temperature. Concordance correlation coefficients between measured and predicted transpiration were higher (0.95 and 0.98 versus 0.87 and 0.96) when model parameters reflected acclimated growth temperature. In response to temperature increases, the southern genotype (SR) transpiration responded more than the northern (OG). Conditions of elevated long-term temperature acclimation further separate their transpiration differences. Results demonstrate the importance of accounting for leaf-level physiological adjustments that are sensitive to microclimate changes and the use of provenance-, ecotype-, and/or genotype-specific parameter sets, two components likely to improve the accuracy of site-level and ecosystem-level estimates of transpiration flux. PMID:19561047

  5. Melting of genomic DNA: Predictive modeling by nonlinear lattice dynamics

    NASA Astrophysics Data System (ADS)

    Theodorakopoulos, Nikos

    2010-08-01

    The melting behavior of long, heterogeneous DNA chains is examined within the framework of the nonlinear lattice dynamics based Peyrard-Bishop-Dauxois (PBD) model. Data for the pBR322 plasmid and the complete T7 phage have been used to obtain model fits and determine parameter dependence on salt content. Melting curves predicted for the complete fd phage and the Y1 and Y2 fragments of the ϕX174 phage without any adjustable parameters are in good agreement with experiment. The calculated probabilities for single base-pair opening are consistent with values obtained from imino proton exchange experiments.
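
    For reference, the PBD model evaluated here is built on the standard Hamiltonian below (heterogeneity enters through the base-pair Morse parameters D_n and a_n, which differ for AT and GC pairs; k, rho, and alpha describe the anharmonic stacking):

      H = \sum_n \left[ \frac{p_n^2}{2m}
            + D_n \left( e^{-a_n y_n} - 1 \right)^2
            + \frac{k}{2} \left( 1 + \rho\, e^{-\alpha (y_n + y_{n-1})} \right) (y_n - y_{n-1})^2 \right]

    Once a sequence fixes the pattern of D_n and a_n, melting curves and base-pair opening probabilities follow from the thermodynamics of this Hamiltonian with no further adjustable parameters, which is what makes the parameter-free predictions above possible.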

  6. Wheat flour dough Alveograph characteristics predicted by Mixolab regression models.

    PubMed

    Codină, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel; Popa, Ciprian N; Tamba-Berehoiu, Radiana

    2012-02-01

    In Romania, the Alveograph is the most used device to evaluate the rheological properties of wheat flour dough, but lately the Mixolab device has begun to play an important role in the breadmaking industry. These two instruments are based on different principles but there are some correlations that can be found between the parameters determined by the Mixolab and the rheological properties of wheat dough measured with the Alveograph. Statistical analysis on 80 wheat flour samples using the backward stepwise multiple regression method showed that Mixolab values using the ‘Chopin S’ protocol (40 samples) and ‘Chopin + ’ protocol (40 samples) can be used to elaborate predictive models for estimating the value of the rheological properties of wheat dough: baking strength (W), dough tenacity (P) and extensibility (L). The correlation analysis confirmed significant findings (P < 0.05 and P < 0.01) between the parameters of wheat dough studied by the Mixolab and its rheological properties measured with the Alveograph. A number of six predictive linear equations were obtained. Linear regression models gave multiple regression coefficients with R²(adjusted) > 0.70 for P, R²(adjusted) > 0.70 for W and R²(adjusted) > 0.38 for L, at a 95% confidence interval. Copyright © 2011 Society of Chemical Industry.

  7. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

    PubMed Central

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-01-01

    In order to utilize the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. PMID:28934163

  8. Clinical-Radiological Parameters Improve the Prediction of the Thrombolysis Time Window by Both MRI Signal Intensities and DWI-FLAIR Mismatch.

    PubMed

    Madai, Vince Istvan; Wood, Carla N; Galinovic, Ivana; Grittner, Ulrike; Piper, Sophie K; Revankar, Gajanan S; Martin, Steve Z; Zaro-Weber, Olivier; Moeller-Hartmann, Walter; von Samson-Himmelstjerna, Federico C; Heiss, Wolf-Dieter; Ebinger, Martin; Fiebach, Jochen B; Sobesky, Jan

    2016-01-01

    With regard to acute stroke, patients with unknown time from stroke onset are not eligible for thrombolysis. Quantitative diffusion weighted imaging (DWI) and fluid attenuated inversion recovery (FLAIR) MRI relative signal intensity (rSI) biomarkers have been introduced to predict eligibility for thrombolysis, but have shown heterogeneous results in the past. In the present work, we investigated whether the inclusion of easily obtainable clinical-radiological parameters would improve the prediction of the thrombolysis time window by rSIs, and compared their performance to the visual DWI-FLAIR mismatch. In a retrospective study, patients from 2 centers with proven stroke with onset <12 h were included. The DWI lesion was segmented and overlaid on ADC and FLAIR images. The rSI mean and SD were calculated as (mean ROI value / mean value of the unaffected hemisphere). Additionally, the visual DWI-FLAIR mismatch was evaluated. Prediction of the thrombolysis time window was evaluated by the area under the curve (AUC) derived from receiver operating characteristic (ROC) curve analysis. The association of age, National Institutes of Health Stroke Scale, MRI field strength, lesion size, vessel occlusion, and Wahlund score with rSI was investigated, and the models were adjusted and stratified accordingly. In 82 patients, the unadjusted rSI measures DWI-mean and -SD showed the highest AUCs (AUC 0.86-0.87). Adjustment for clinical-radiological covariates significantly improved the performance of FLAIR-mean (0.91) and DWI-SD (0.91). The best prediction results based on the AUC were found for the final stratified and adjusted models of DWI-SD (0.94) and FLAIR-mean (0.96) and a multivariable DWI-FLAIR model (0.95). The adjusted visual DWI-FLAIR mismatch did not perform significantly worse (0.89). ADC-rSIs showed fair performance in all models. Quantitative DWI and FLAIR MRI biomarkers, as well as the visual DWI-FLAIR mismatch, provide excellent prediction of eligibility for thrombolysis in acute stroke when easily obtainable clinical-radiological parameters are included in the prediction models. © 2016 S. Karger AG, Basel.

  9. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    NASA Astrophysics Data System (ADS)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling for teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through the inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using the default literature parameterization. This improvement occurred mainly because the optimized parameters reduced systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
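
    A compact, hedged sketch of GA-based calibration in this spirit, with a toy seasonal function standing in for a Biome-BGC run and mutation-only truncation selection for brevity (a full GA would add crossover); all names and numbers are ours.

      import numpy as np

      rng = np.random.default_rng(7)
      t = np.arange(46)                               # e.g. 10-day composites

      def simulate_lai(params):
          amp, phase = params                         # stand-ins for ecophysiological parameters
          return np.clip(amp * np.sin(2 * np.pi * (t - phase) / 46.0), 0, None)

      satellite_lai = simulate_lai((4.0, 8.0)) + rng.normal(0, 0.1, t.size)

      def fitness(params):
          # negative misfit between simulated and satellite LAI
          return -np.mean((simulate_lai(params) - satellite_lai) ** 2)

      pop = rng.uniform([1.0, 0.0], [8.0, 20.0], size=(40, 2))   # initial population
      for gen in range(100):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[-20:]]                # truncation selection
          pop = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.1, (40, 2))  # mutation

      best = pop[np.argmax([fitness(p) for p in pop])]
      print(np.round(best, 2))    # should approach the generating values (4.0, 8.0)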

  10. Nonlinear dynamic analysis of rigid rotor supported by gas foil bearings: Effects of gas film and foil structure on subsynchronous vibrations

    NASA Astrophysics Data System (ADS)

    Guo, Zhiyang; Feng, Kai; Liu, Tianyu; Lyu, Peng; Zhang, Tao

    2018-07-01

    Highly nonlinear subsynchronous vibrations are a main cause of failure in gas foil bearing (GFB)-rotor systems. Investigating the vibration generation mechanisms and the relationship between subsynchronous vibrations and GFBs is therefore necessary to ensure the healthy operation of rotor systems. In this study, an integrated nonlinear dynamic model that considers shaft motion, the unsteady gas film, and the deformations of the foil structure is established to investigate the effect of the gas film and the foil structure on the subsynchronous response of the system. A GFB-rotor test rig was developed for model comparison, and the predictions agree closely with the test data, especially in the frequency domain. The nonlinear dynamic response is analyzed using waterfall plots, operating deflection shapes, journal orbits, Poincaré maps, and fast Fourier transforms. The parameter studies reveal that subsynchronous vibrations are strongly related to the gas film and the foil structure and can be adjusted through parameters such as bump stiffness, nominal clearance, and static load. Gas foil bearing parameters should therefore be carefully adjusted by system manufacturers to achieve the best rotordynamic performance.
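
    For illustration, a minimal sketch of the FFT step used in such analyses: extracting the dominant subsynchronous component from a journal displacement signal. The signal here is a synthetic stand-in with hypothetical frequencies, not data from the study.

        import numpy as np

        fs = 10_000                      # sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        # Hypothetical journal displacement: a synchronous component at the running
        # speed (300 Hz) plus a subsynchronous component below it.
        x = 1.0 * np.sin(2 * np.pi * 300 * t) + 0.6 * np.sin(2 * np.pi * 140 * t)

        spectrum = np.abs(np.fft.rfft(x)) / len(x)
        freqs = np.fft.rfftfreq(len(x), 1 / fs)

        sub = (freqs > 0) & (freqs < 300)            # band below the running speed
        peak = freqs[sub][np.argmax(spectrum[sub])]
        print(f"dominant subsynchronous frequency: {peak:.0f} Hz")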

  11. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
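
    The Tausworthe model itself is not reproduced in this record; as a rough illustration of the general parametric form such models take, the sketch below computes effort as a nominal size-driven term scaled by multipliers derived from questionnaire responses. All coefficients and names are hypothetical.

        def estimate_effort(ksloc, multipliers, a=2.8, b=1.1):
            """Parametric effort estimate in person-months (illustrative form only).

            ksloc       -- estimated size in thousands of source lines of code
            multipliers -- factors (>1 harder, <1 easier) derived from prompted
                           questions about task difficulty, environment, technology
            a, b        -- calibration constants fitted to historical project data
            """
            effort = a * ksloc ** b
            for m in multipliers:
                effort *= m
            return effort

        print(estimate_effort(32, [1.15, 0.9, 1.08]))  # person-months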

  12. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. The metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, the Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about overall model performance. MPESA is based on the separation of observed and simulated time series into magnitude and sequence components; the separation into these components and the reconstruction back into time series provide diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
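
    For reference, a minimal sketch of the Nash-Sutcliffe efficiency named above, together with an assumed reading of MPESA's magnitude/sequence separation: comparing sorted series removes timing errors and isolates magnitude errors. The exact MPESA decomposition may differ; the data are placeholders.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of obs."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        obs = np.array([1.0, 3.2, 2.1, 5.6, 4.3, 2.2])
        sim = np.array([1.3, 2.8, 2.5, 5.0, 4.6, 2.0])

        print("NSE (combined magnitude + sequence):", round(nse(obs, sim), 3))
        # Sorting both series discards sequence (timing) information, so the
        # remaining disagreement reflects magnitude errors only.
        print("NSE (magnitude only):", round(nse(np.sort(obs), np.sort(sim)), 3))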

  13. Sleep architecture parameters as a putative biomarker of suicidal ideation in treatment-resistant depression.

    PubMed

    Bernert, Rebecca A; Luckenbaugh, David A; Duncan, Wallace C; Iwata, Naomi G; Ballard, Elizabeth D; Zarate, Carlos A

    2017-01-15

    Disturbed sleep may confer risk for suicidal behaviors. Polysomnographic (PSG) sleep parameters have not been systematically evaluated in association with suicidal ideation (SI) among individuals with treatment-resistant depression (TRD). This secondary data analysis included 54 individuals with TRD (N=30 with major depressive disorder (MDD) and N=24 with bipolar depression (BD)). PSG sleep parameters included Sleep Efficiency (SE), Total Sleep Time (TST), Wakefulness After Sleep Onset (WASO), REM percentage/latency, and non-REM (NREM) Sleep Stages 1-4. The Hamilton Depression Rating Scale (HAM-D) was used to group participants according to the presence or absence of SI. Sleep abnormalities were hypothesized among those with current SI. ANOVAs were conducted before (Model 1) and after adjusting for depression (Model 2) and diagnostic variables (Model 3). Significant differences in PSG parameters were observed in Model 1; those with SI had less NREM Stage 4 sleep (p<.05). After adjusting for central covariates, Models 2 and 3 revealed significantly less NREM Stage 4 sleep, lower SE (p<.05), and higher WASO (p<.05) among those with SI. BD participants with SI also had less NREM Stage 4 and more NREM Stage 1 sleep. Limitations include: 1) a predominantly white sample; 2) exclusion of imminent suicide risk; 3) concomitant mood stabilizer use among BD patients; and 4) single-item SI assessment. Independent of depression severity, SI was associated with less NREM Stage 4 sleep and higher nocturnal wakefulness across diagnostic groups. Sleep warrants further investigation in the pathogenesis of suicide risk, particularly in TRD, where risk may be heightened. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Synthetic calibration of a Rainfall-Runoff Model

    USGS Publications Warehouse

    Thompson, David B.; Westphal, Jerome A.; ,

    1990-01-01

    A method for synthetically calibrating storm-mode parameters for the U.S. Geological Survey's Precipitation-Runoff Modeling System is described. Synthetic calibration is accomplished by adjusting storm-mode parameters to minimize deviations between the pseudo-probability distributions represented by regional regression equations and the actual frequency distributions fitted to model-generated peak discharge and runoff volume. Results of modeling storm hydrographs using synthetic and analytic storm-mode parameters are presented, with comparisons between model results from both parameter sets and between model results and observed hydrographs. Although mean storm runoff is reproducible to within about 26 percent of the observed mean storm runoff for five or six parameter sets, runoff from individual storms is subject to large disparities: predicted storm runoff volumes ranged from 2 percent to 217 percent of the commensurate observed values. Furthermore, simulation of peak discharges was poor; predicted peak discharges from individual storm events ranged from 2 percent to 229 percent of the commensurate observed values. The model was incapable of satisfactorily executing storm-mode simulations for the study watersheds. This result is not considered a particular fault of the model, but rather is indicative of deficiencies in similar conceptual models.

  15. Using bioimpedance spectroscopy parameters as real-time feedback during tDCS.

    PubMed

    Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag

    2016-08-01

    An exploratory analysis is carried out to investigate the feasibility of using bioimpedance spectroscopy (BIS) parameters, measured on the scalp, as real-time feedback during transcranial direct current stimulation (tDCS). tDCS has been shown to be a potential treatment for neurological disorders. However, the technique is not yet considered a reliable clinical treatment, owing to the lack of a measurable indicator of treatment efficacy. Although the voltage applied to the head is very simple to measure during a tDCS session, changes in voltage are difficult to interpret in terms of variables that affect clinical outcome. BIS parameters are considered potential feedback parameters because: 1) they are shown to be associated with the DC voltage applied to the head; 2) they are interpretable in terms of the conductive and capacitive properties of head tissues; 3) the physical interpretation of BIS measurements makes them amenable to adjustment through clinically controllable variables; and 4) BIS parameters are measurable in a cost-effective and safe way and do not interfere with DC stimulation. This research indicates that a quadratic regression model can predict the DC voltage between anode and cathode based on parameters extracted from BIS measurements. These parameters are extracted by fitting the measured BIS spectra to an equivalent electrical circuit model. The effect of clinical tDCS variables on BIS parameters needs to be investigated in future work. This work suggests that BIS is a potential method for monitoring a tDCS session in order to adjust, tailor, or personalize tDCS treatment protocols.
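
    A minimal sketch of the kind of quadratic regression described above, predicting the anode-cathode DC voltage from circuit-model parameters extracted from BIS spectra. The record does not specify the equivalent circuit, so the feature names (R0, Rinf, tau, in the spirit of a Cole circuit) and all values are assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        # Hypothetical training set: rows are sessions, columns are circuit parameters
        # fitted to each measured BIS spectrum (R0, Rinf, tau assumed for illustration).
        X = np.array([[520, 310, 1.8e-6],
                      [480, 290, 2.1e-6],
                      [610, 350, 1.5e-6],
                      [550, 330, 1.7e-6],
                      [500, 300, 2.0e-6]])
        y = np.array([7.2, 6.8, 8.4, 7.7, 7.0])  # measured DC voltage, V

        # Quadratic regression = degree-2 polynomial expansion + linear fit.
        model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        model.fit(X, y)
        print("predicted voltage:", model.predict([[530, 320, 1.9e-6]]))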

  16. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    PubMed

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between the biological and model neuron is inevitable and results in a poor phenomenological match between the model and the data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control-theoretic concept of a Luenberger observer. We tested the approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of the approach, the synthetic data were constructed with conductance models different from those used in the fitting model. For both synthetic and biological data, the resulting models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
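
    A toy sketch of the control-adjustment idea: during fitting, the model voltage is nudged toward the recorded trace by a weak proportional (Luenberger-style) feedback term so timing stays aligned. The single-compartment dynamics, parameters, and gain below are illustrative assumptions, not the paper's multicompartmental model.

        import numpy as np

        def simulate(v_data, dt, params, gain=0.0):
            """Integrate a toy membrane equation with optional observer feedback."""
            g_leak, e_leak, i_ext = params
            v = np.empty_like(v_data)
            v[0] = v_data[0]
            for n in range(len(v_data) - 1):
                dv = -g_leak * (v[n] - e_leak) + i_ext        # intrinsic dynamics
                dv += gain * (v_data[n] - v[n])               # weak control adjustment
                v[n + 1] = v[n] + dt * dv
            return v

        dt = 0.1
        t = np.arange(0, 100, dt)
        v_data = -65 + 5 * np.sin(0.3 * t)                    # stand-in recorded trace

        free_run = simulate(v_data, dt, (0.1, -70.0, 1.0), gain=0.0)
        observed = simulate(v_data, dt, (0.1, -70.0, 1.0), gain=0.5)
        print("free-run error:", np.mean((free_run - v_data) ** 2))
        print("with control adjustment:", np.mean((observed - v_data) ** 2))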

  17. Real-time adjusting of rainfall estimates from commercial microwave links

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Dohnal, Michal; Bareš, Vojtěch

    2017-04-01

    Urban stormwater predictions require reliable rainfall information with a space-time resolution higher than that commonly provided by the standard rainfall monitoring networks of national weather services. Rainfall data from commercial microwave links (CMLs) could fill this gap. CMLs are line-of-sight radio connections, widely used by cellular operators, which operate at millimeter-wave bands where radio waves are attenuated by raindrops. Attenuation data from each single CML in the cellular network can be accessed remotely in (near) real time with virtually arbitrary sampling frequency and converted to rainfall intensity. Unfortunately, rainfall estimates from CMLs can be substantially biased. Fencl et al. (2017) therefore proposed an adjustment method that corrects for this bias, using rain gauge (RG) data from existing rainfall monitoring networks, which on their own would have insufficient spatial and temporal resolution for urban rainfall monitoring. In this investigation, we further develop the method to improve its performance in a real-time setting. First, a shortcoming of the original algorithm, which delivers unreliable results at the beginning of a rainfall event, is overcome by introducing prior distributions of the model parameters estimated from previous parameter realizations. Second, weights reflecting the variance between RGs are introduced into the cost function that is minimized when optimizing the model parameters. Finally, the RG data used for adjusting are preprocessed with a moving-average filter. The performance of the improved adjustment method is evaluated on four short CMLs (path length < 2 km) located in a small urban catchment (2.3 km2) in Prague-Letnany (CZ). The adjusted CMLs are compared to reference rainfall calculated from six RGs in the catchment. The suggested improvements lead on average to a 10% higher Nash-Sutcliffe efficiency coefficient (median value 0.85) for CML adjustment to hourly RG data. The reliability of CML rainfall estimates is especially improved at the beginning of rainfall events and during strong convective rainfalls, whereas performance during longer frontal rainfalls is almost unchanged. Our results clearly demonstrate that adjusting CMLs to existing RGs is a viable approach with great potential for real-time applications in stormwater management. This work was supported by the project of the Czech Science Foundation (GACR) No. 17-16389S. References: Fencl, M., Dohnal, M., Rieckermann, J. and Bareš, V.: Gauge-Adjusted Rainfall Estimates from Commercial Microwave Links, Hydrol. Earth Syst. Sci., 2017 (accepted).
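
    A minimal sketch of the two steps involved, under stated assumptions: converting CML path attenuation to rain rate with a generic power law (coefficients are frequency-dependent placeholders, not the study's values), then scaling the CML series so its total matches the gauges, a much simplified stand-in for the adjustment method above.

        import numpy as np

        def cml_rain_rate(attenuation_db, path_km, a=0.33, b=1.1):
            """Power-law retrieval R = a * k**b, with k the specific attenuation
            (dB/km); a and b depend on frequency/polarization and are placeholders."""
            k = np.maximum(attenuation_db, 0.0) / path_km
            return a * k ** b

        cml = cml_rain_rate(np.array([2.0, 4.5, 3.1, 0.5]), path_km=1.4)
        rg = np.array([3.0, 6.0, 4.2, 0.8])      # hourly rain gauge reference, mm/h

        bias = rg.sum() / cml.sum()              # simple multiplicative gauge adjustment
        adjusted = bias * cml
        print(adjusted.round(2))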

  18. Reactive flow modeling of initial density effect on divergence JB-9014 detonation driving

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Huang, Kuibang; Zheng, Miao

    2016-06-01

    A series of experiments was designed, and the results are presented in this paper, in which 2 mm thick copper shells were impacted by the explosive JB-9014 at different densities, and the free surface velocities of the OFHC copper shells were measured. The comparison of the experimental data shows that the free surface velocity of the OFHC shell increases with the IHE density. Numerical modeling employing a phenomenological reactive flow rate model in a two-dimensional Lagrangian hydrodynamic code was carried out to simulate the above experiments; empirical adjustments of the detonation velocity and pressure, as well as Pier Tang's adjustments of the EOS of the detonation products, were both introduced in our numerical simulation work. The computational results agree well with those of the experiments, and the numerical results with the original product parameters and the adjusted JB-9014 parameters describe the density effect distinctly.

  19. Selection of optical model of stereophotography experiment for determination the cloud base height as a problem of testing of statistical hypotheses

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2017-10-01

    Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely a transformation of a real frame into the result that would be obtained if all the axes were perfectly adjusted. For this adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the images in both the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows from the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, that is, the one most consistent with the measurement data. Each of the three models was then used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally adjusted setup, and the distance to the cloud is then calculated. The results were compared with data from a laser range finder.
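
    A sketch of the three candidate deformation models in their commonly assumed form: an affine map for the paraxial case, and radial distortion of 3rd order (k1) or 3rd and 5th order (k1, k2). The coefficient values are illustrative; the paper's fitted values are not given in this record.

        import numpy as np

        def affine(xy, A, t):
            """Paraxial model: affine transform of frame coordinates."""
            return xy @ A.T + t

        def radial(xy, k1, k2=0.0):
            """Radial lens distortion: 3rd order (k1) or 3rd+5th order (k1, k2)."""
            r2 = np.sum(xy ** 2, axis=1, keepdims=True)
            return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

        xy = np.array([[0.1, 0.1], [0.8, 0.6]])            # normalized coordinates
        A = np.array([[1.001, 0.002], [-0.002, 0.999]])    # near-identity misalignment
        print(affine(xy, A, t=np.array([0.003, -0.001])))
        print(radial(xy, k1=-0.05))                        # differences grow near frame edges
        print(radial(xy, k1=-0.05, k2=0.01))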

  20. Origami-inspired building block and parametric design for mechanical metamaterials

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Ma, Hua; Feng, Mingde; Yan, Leilei; Wang, Jiafu; Wang, Jun; Qu, Shaobo

    2016-08-01

    An origami-based building block for mechanical metamaterials is proposed and explained by introducing a mechanism model based on its geometry. According to our model, this origami mechanism exhibits a response to uniaxial tension that depends on the structure parameters; hence, its mechanical properties can be tuned by adjusting those parameters. Experiments on polylactic acid (PLA) samples were carried out, and the results are in good agreement with those of finite element analysis (FEA). This work may be useful for designing building blocks of mechanical metamaterials or other complex mechanical structures.

  1. Comparison between two photovoltaic module models based on transistors

    NASA Astrophysics Data System (ADS)

    Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel

    2018-05-01

    The main objective of this paper is to verify whether the behavior of an un-shaded photovoltaic (PV) module can be simulated by a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: the Darlington structure in the first model, and voltage regulation with a programmable Zener diode in the second. Specifications extracted from the behavior of a real I-V characteristic of a panel are considered and the principal electrical variables are deduced. The two models are expected to match the open circuit voltage, the maximum power point (MPP) and the short circuit current, with realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are done to identify the influence of some parameters. In the first model, a parameter allowing adjustment of the current slope on the left side of the MPP proves to be important for the calculation of the open circuit voltage as well. Moreover, this model does not allow a complete adjustment of the I-V characteristic, and the MPP moves significantly away from the real value when irradiance increases. On the contrary, the second model seems to have only advantages: the open circuit voltage is easy to calculate, the current slopes are realistic, and robustness appears good when irradiance variations are simulated by adjusting the short circuit current of the PV module. We have shown that these two simplified models are expected to make simulations of complex PV architectures, integrating many different devices such as PV modules or other renewable energy sources and storage capacities coupled in parallel, more reliable and easier.
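
    The transistor-based circuits themselves are not detailed in this record. For context, a sketch of the single-diode equation that such simplified circuits approximate, showing the quantities the models must match (Isc, Voc, MPP); all parameter values are illustrative.

        import numpy as np

        def iv_curve(v, i_ph=8.0, i_0=1e-9, n=1.0, cells=60, t=298.15):
            """Single-diode PV model (series/shunt resistances neglected):
            I = Iph - I0 * (exp(V / (n * cells * Vt)) - 1)."""
            k, q = 1.380649e-23, 1.602176634e-19
            vt = k * t / q                      # thermal voltage per cell
            return i_ph - i_0 * (np.exp(v / (n * cells * vt)) - 1.0)

        v = np.linspace(0.0, 36.0, 500)
        i = iv_curve(v)
        p = v * i
        mpp = np.argmax(p)
        print(f"MPP ~ {v[mpp]:.1f} V, {i[mpp]:.2f} A; Isc = {i[0]:.2f} A")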

  2. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least-squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic-section equation and the area theorem; they are nonlinear in both the observations and the adjustment parameters. The traditional least-squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first-order approximation, is inadequate for this orbit problem. D. C. Brown proposed an algorithm solving a more general least-squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. More recently, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which the construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was therefore modified to yield a definitive solution in the cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme always leads to a final solution. The weighting of observations, orthogonal parameters, and the efficiency of a set of adjustment parameters are also considered, and the definition of efficiency is revised.

  3. A three-lead, programmable, and microcontroller-based electrocardiogram generator with frequency domain characteristics of heart rate variability.

    PubMed

    Wei, Ying-Chieh; Wei, Ying-Yu; Chang, Kai-Hsiung; Young, Ming-Shing

    2012-04-01

    The objective of this study is to design and develop a programmable electrocardiogram (ECG) generator with frequency domain characteristics of heart rate variability (HRV) which can be used to test the efficiency of ECG algorithms and to calibrate and maintain ECG equipment. We simplified and modified the three coupled ordinary differential equations in McSharry's model to a single differential equation to obtain the ECG signal. This system not only allows the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave position parameters to be adjusted, but can also be used to adjust the very low frequency, low frequency, and high frequency components of HRV frequency domain characteristics. The system can be tuned to function with HRV or not. When the HRV function is on, the average heart rate can be set to a value ranging from 20 to 122 beats per minute (BPM) with an adjustable variation of 1 BPM. When the HRV function is off, the heart rate can be set to a value ranging from 20 to 139 BPM with an adjustable variation of 1 BPM. The amplitude of the ECG signal can be set from 0.0 to 330 mV at a resolution of 0.005 mV. These parameters can be adjusted either via input through a keyboard or through a graphical user interface (GUI) control panel that was developed using LABVIEW. The GUI control panel depicts a preview of the ECG signal such that the user can adjust the parameters to establish a desired ECG morphology. A complete set of parameters can be stored in the flash memory of the system via a USB 2.0 interface. Our system can generate three different types of synthetic ECG signals for testing the efficiency of an ECG algorithm or calibrating and maintaining ECG equipment. © 2012 American Institute of Physics
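
    As a sketch of the simplified dynamical-model approach described above (a single ODE derived from McSharry's model), the code below synthesizes an ECG-like waveform by summing Gaussian event terms around the cardiac phase. The a/b/theta values are the commonly quoted McSharry defaults and may differ from the authors' implementation; HRV modulation of the angular frequency is omitted.

        import numpy as np

        # McSharry-style parameters for the P, Q, R, S, T events (common defaults).
        theta_i = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])
        a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
        b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

        def synth_ecg(heart_rate_bpm=60, fs=500, seconds=3, z0=0.0):
            """Euler-integrate dz/dt = -sum(a_i*dth*exp(-dth^2/(2 b_i^2))) - (z - z0)."""
            omega = 2 * np.pi * heart_rate_bpm / 60.0
            t = np.arange(0, seconds, 1 / fs)
            z = np.zeros_like(t)
            for n in range(len(t) - 1):
                theta = (omega * t[n] + np.pi) % (2 * np.pi) - np.pi   # cardiac phase
                dth = (theta - theta_i + np.pi) % (2 * np.pi) - np.pi
                dz = -np.sum(a_i * dth * np.exp(-dth ** 2 / (2 * b_i ** 2))) - (z[n] - z0)
                z[n + 1] = z[n] + dz / fs
            return t, z

        t, ecg = synth_ecg(heart_rate_bpm=72)
        print(f"synthesized {len(ecg)} samples, peak amplitude {ecg.max():.2f}")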

  4. A three-lead, programmable, and microcontroller-based electrocardiogram generator with frequency domain characteristics of heart rate variability

    NASA Astrophysics Data System (ADS)

    Wei, Ying-Chieh; Wei, Ying-Yu; Chang, Kai-Hsiung; Young, Ming-Shing

    2012-04-01

    The objective of this study is to design and develop a programmable electrocardiogram (ECG) generator with frequency domain characteristics of heart rate variability (HRV) which can be used to test the efficiency of ECG algorithms and to calibrate and maintain ECG equipment. We simplified and modified the three coupled ordinary differential equations in McSharry's model to a single differential equation to obtain the ECG signal. This system not only allows the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave position parameters to be adjusted, but can also be used to adjust the very low frequency, low frequency, and high frequency components of HRV frequency domain characteristics. The system can be tuned to function with HRV or not. When the HRV function is on, the average heart rate can be set to a value ranging from 20 to 122 beats per minute (BPM) with an adjustable variation of 1 BPM. When the HRV function is off, the heart rate can be set to a value ranging from 20 to 139 BPM with an adjustable variation of 1 BPM. The amplitude of the ECG signal can be set from 0.0 to 330 mV at a resolution of 0.005 mV. These parameters can be adjusted either via input through a keyboard or through a graphical user interface (GUI) control panel that was developed using LABVIEW. The GUI control panel depicts a preview of the ECG signal such that the user can adjust the parameters to establish a desired ECG morphology. A complete set of parameters can be stored in the flash memory of the system via a USB 2.0 interface. Our system can generate three different types of synthetic ECG signals for testing the efficiency of an ECG algorithm or calibrating and maintaining ECG equipment.

  5. Based on Artificial Neural Network to Realize K-Parameter Analysis of Vehicle Air Spring System

    NASA Astrophysics Data System (ADS)

    Hung, San-Shan; Hsu, Chia-Ning; Hwang, Chang-Chou; Chen, Wen-Jan

    2017-10-01

    In recent years, as air-spring control techniques have matured, air-spring suspension systems have come into use as replacements for classical vehicle suspension systems. Depending on the internal pressure variation of the air-spring, the stiffness and the damping factor can be adjusted. Because the air-spring has highly nonlinear characteristics, it is not easy to construct a classical controller that controls the air-spring effectively. This paper proposes a feasible control strategy based on an artificial neural network. The network is designed and trained offline on air-spring data at different initial pressures and loads, and the resulting model predicts the air-spring stiffness parameter. Finally, by adjusting the internal pressure of the air-spring to change its K-parameter, good dynamic control performance of the air-spring suspension is achieved.
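
    A minimal sketch of the offline learning step as described, assuming a small feed-forward network that maps internal pressure and load to a stiffness value K; the training data, units, and network size here are synthetic placeholders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Synthetic placeholder data: pressure (bar), load (kg) -> stiffness K (N/mm).
        pressure = rng.uniform(3.0, 9.0, 200)
        load = rng.uniform(200.0, 800.0, 200)
        k_true = 40.0 + 12.0 * pressure + 0.02 * load + rng.normal(0, 1.0, 200)

        X = np.column_stack([pressure, load])
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        model.fit(X, k_true)                     # offline training phase

        # The offline-trained model predicts K for a new operating point.
        print("predicted K:", model.predict([[6.5, 450.0]]))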

  6. Regionalization of response routine parameters

    NASA Astrophysics Data System (ADS)

    Tøfte, Lena S.; Sultan, Yisak A.

    2013-04-01

    When area-distributed hydrological models are to be calibrated or updated, having fewer calibration parameters is a considerable advantage. Based on, among others, Kirchner's work, we have developed a simple non-threshold response model for drainage in natural catchments, to be used in the gridded hydrological model ENKI. The new response model takes only the hydrograph into account; it has one state and two parameters and is adapted to catchments dominated by terrain drainage. The method is based on the assumption that, in catchments where precipitation, evaporation and snowmelt are negligible, the discharge is entirely determined by the amount of stored water. The catchment can then be characterized as a simple first-order nonlinear dynamical system, where the governing equations can be found directly from measured streamflow fluctuations. This means that the response of the catchment can be modelled using hydrograph data from which all periods with rain, snowmelt or evaporation have been left out, and fitting these series to a two- or three-parameter equation. A large number of discharge series from catchments in different regions of Norway are analyzed, and parameters are found for all the series. By combining the computed parameters with known catchment characteristics, we try to regionalize the parameters. The parameters of the response routine can then easily be found for ungauged catchments as well, from maps or databases.
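
    A minimal sketch of the Kirchner-style estimation step under the stated assumption: during rain-free recessions, dQ/dt depends only on Q, so fitting -dQ/dt = a * Q**b on log-log axes yields two response parameters. The discharge series below is synthetic, and the two-parameter power law is one assumed form of the equation.

        import numpy as np

        # Synthetic rain-free recession generated from dQ/dt = -a * Q**b.
        a_true, b_true, dt = 0.02, 1.5, 1.0
        q = [10.0]
        for _ in range(200):
            q.append(q[-1] - dt * a_true * q[-1] ** b_true)
        q = np.array(q)

        dq_dt = np.diff(q) / dt
        q_mid = 0.5 * (q[:-1] + q[1:])              # midpoint discharge
        mask = dq_dt < 0                            # keep recession samples only

        # Fit ln(-dQ/dt) = ln(a) + b * ln(Q): a straight line in log-log space.
        b_fit, ln_a = np.polyfit(np.log(q_mid[mask]), np.log(-dq_dt[mask]), 1)
        print(f"a = {np.exp(ln_a):.4f}, b = {b_fit:.3f}")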

  7. The combined geodetic network adjusted on the reference ellipsoid - a comparison of three functional models for GNSS observations

    NASA Astrophysics Data System (ADS)

    Kadaj, Roman

    2016-12-01

    The adjustment problem of the so-called combined (hybrid, integrated) network, created from GNSS vectors and terrestrial observations, has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates and the geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids the systematic errors that may occur in the conversion of the original GNSS vectors into ellipsoid elements, for example into geodesic parameters. The problem is developed theoretically and tested numerically. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system is considered for the preferred functional model of the GNSS observations.

  8. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  9. Differential Evolution algorithm applied to FSW model calibration

    NASA Astrophysics Data System (ADS)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
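
    A minimal sketch of DE-based calibration in the spirit described, using SciPy's differential_evolution to tune two heat-model parameters against measured temperatures. The objective, the surrogate model, and the data are placeholders standing in for the CFD model; the strategy, mutation, and recombination arguments correspond to the DE characteristics studied in the paper.

        import numpy as np
        from scipy.optimize import differential_evolution

        measured = np.array([620.0, 540.0, 455.0, 380.0])   # placeholder thermocouple data
        positions = np.array([5.0, 10.0, 15.0, 20.0])       # mm from the weld line

        def simulated_temperature(params, x):
            """Stand-in for the CFD model: heat input q and a decay coefficient h."""
            q, h = params
            return q * np.exp(-h * x)

        def objective(params):
            return np.sum((simulated_temperature(params, positions) - measured) ** 2)

        result = differential_evolution(
            objective,
            bounds=[(100.0, 2000.0), (0.01, 1.0)],
            strategy="best1bin",      # evolution strategy
            mutation=(0.5, 1.0),      # mutation scaling factor
            recombination=0.7,        # crossover rate
            seed=0,
        )
        print(result.x, result.fun)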

  10. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  11. Model reference adaptive control (MRAC)-based parameter identification applied to surface-mounted permanent magnet synchronous motor

    NASA Astrophysics Data System (ADS)

    Zhong, Chongquan; Lin, Yaoyao

    2017-11-01

    In this work, a model reference adaptive control-based estimation algorithm is proposed for the online multi-parameter identification of surface-mounted permanent magnet synchronous machines. Taking the dq-axis equations of the actual motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator is established. The Popov hyperstability principle is used in the design of the adaptive law to guarantee accurate convergence. To reduce oscillation in the identification results, a first-order low-pass digital filter is introduced to improve the precision of the parameter estimation. The proposed scheme was applied to an SPM synchronous motor control system without any additional circuits and implemented on a DSP TMS320LF2812. The experimental results demonstrate the effectiveness of the proposed method.
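
    For illustration, a minimal sketch of a first-order low-pass digital filter of the kind used to smooth the identified parameters; the smoothing factor and the example signal are assumptions.

        import numpy as np

        def lowpass(x, alpha=0.1):
            """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
            y = np.empty_like(x, dtype=float)
            y[0] = x[0]
            for n in range(1, len(x)):
                y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
            return y

        # Noisy online estimate of, e.g., a stator resistance converging to 2.5 ohms.
        rng = np.random.default_rng(3)
        raw = 2.5 + 0.3 * rng.standard_normal(500) * np.exp(-np.arange(500) / 150)
        print("raw std:", raw.std().round(3), "filtered std:", lowpass(raw).std().round(3))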

  12. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation continually adjust their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation and thereby support the study of microcirculatory physiology. Until now, however, the model has lacked appropriate methods for setting its parameters, which has limited further applications. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values of this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to standard particle swarm optimization, standard QPSO, and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.

  13. Osmotic pressure beyond concentration restrictions.

    PubMed

    Grattoni, Alessandro; Merlo, Manuele; Ferrari, Mauro

    2007-10-11

    Osmosis is a fundamental physical process that involves the transit of solvent molecules across a membrane separating two liquid solutions. Osmosis plays a role in many biological processes such as fluid exchange in animal cells (Cell Biochem. Biophys. 2005, 42, 277-345; J. Periodontol. 2007, 78, 757-763) and water transport in plants. It is also involved in many technological applications such as drug delivery systems (Crit. Rev. Ther. Drug. 2004, 21, 477-520; J. Micro-Electromech. Syst. 2004, 13, 75-82) and water purification. Extensive attention has been dedicated in the past to the modeling of osmosis, starting with the classical theories of van't Hoff and Morse. These are predictive, in the sense that they do not involve adjustable parameters; however, they are directly applicable only to limited regimes of dilute solute concentrations. Extensions beyond the domains of validity of these classical theories have required recourse to fitting parameters, transitioning therefore to semiempirical, or nonpredictive, models. A novel approach was presented by Granik et al., which is not a priori restricted in concentration domains, presents no adjustable parameters, and is mechanistic, in the sense that it is based on a coupled diffusion model. In this work, we examine the validity of predictive theories of osmosis by comparison with our new experimental results and a meta-analysis of literature data.
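
    For context, a minimal sketch of the classical van't Hoff relation mentioned above, pi = i * c * R * T, which is predictive (no adjustable parameters) but valid only in the dilute-solution limit; the example values are illustrative.

        def vant_hoff_osmotic_pressure(molarity, temperature_k, ions=1):
            """Osmotic pressure in Pa: pi = i * c * R * T (dilute-solution limit)."""
            R = 8.314462618          # J / (mol K)
            c = molarity * 1000.0    # mol/L -> mol/m^3
            return ions * c * R * temperature_k

        # 0.1 M NaCl (van't Hoff factor i ~ 2) at body temperature:
        print(f"{vant_hoff_osmotic_pressure(0.1, 310.15, ions=2) / 1e5:.2f} bar")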

  14. Genetic parameters for test day milk yields of first lactation Holstein cows by random regression models.

    PubMed

    de Melo, C M R; Packer, I U; Costa, C N; Machado, P F

    2007-03-01

    Covariance components for test-day milk yield, using 263 390 first-lactation records of 32 448 Holstein cows, were estimated with random regression animal models by restricted maximum likelihood. Three functions were used to fit the lactation curve: the five-parameter logarithmic Ali and Schaeffer function (AS), the three-parameter exponential Wilmink function in its standard form (W) and in a modified form (W*) obtained by reducing the range of the covariate, and the combination of a Legendre polynomial with W (LEG+W). Heterogeneous residual variance (RV) across classes (4 and 29) of days in milk was considered in fitting the functions. Estimates of RV were quite similar, ranging from 4.15 to 5.29 kg². Heritability estimates for AS (0.29 to 0.42), LEG+W (0.28 to 0.42) and W* (0.33 to 0.40) were similar, but heritability estimates obtained with W (0.25 to 0.65) were higher than those from the other functions, particularly at the end of lactation. Genetic correlations between milk yields on consecutive test days were close to unity but decreased as the interval between test days increased. The AS function with the homogeneous RV model had the best fit among those evaluated.
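
    A small sketch of building random-regression covariates like those above: days in milk are standardized to [-1, 1] and expanded into Legendre polynomial terms. The polynomial order and the lactation range are assumptions for illustration.

        import numpy as np
        from numpy.polynomial import legendre

        def legendre_covariates(dim, dim_min=5, dim_max=305, order=2):
            """Standardize days in milk to [-1, 1] and evaluate Legendre terms 0..order."""
            x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
            return np.column_stack(
                [legendre.legval(x, np.eye(order + 1)[k]) for k in range(order + 1)]
            )

        dim = np.array([5, 65, 155, 245, 305])
        print(legendre_covariates(dim).round(3))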

  15. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total makespan and the number of setups simultaneously. Trapezoidal fuzzy numbers are used for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC). The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, in contrast to using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test beds and testing on a multiobjective fuzzy mixed-production assembly line sequencing optimization problem. The simulation results show that the proposed optimization algorithm performs more efficiently than the standard genetic algorithm in the mixed assembly line sequencing model. PMID:24982962

  16. Systems and methods for locating and imaging proppant in an induced fracture

    DOEpatents

    Aldridge, David F.; Bartel, Lewis C.

    2016-02-02

    Born Scattering Inversion (BSI) systems and methods are disclosed. A BSI system may be incorporated in a well system for accessing natural gas, oil and geothermal reserves in a geologic formation beneath the surface of the Earth. The BSI system may be used to generate a three-dimensional image of a proppant-filled hydraulically-induced fracture in the geologic formation. The BSI system may include computing equipment and sensors for measuring electromagnetic fields in the vicinity of the fracture before and after the fracture is generated, adjusting the parameters of a first Born approximation model of a scattered component of the surface electromagnetic fields using the measured electromagnetic fields, and generating the image of the proppant-filled fracture using the adjusted parameters.

  17. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models. We therefore chose to estimate three parameters governing adaptation in the Ermentrout neuron model. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weight, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the model rebuilt with the estimated parameters shows that estimating a few important ion channel parameters is sufficient to establish the model well, and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm avoids local optima and converges quickly to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
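
    A sketch of an inertia-weight schedule in the assumed spirit of the method: a concave decay from w_max to w_min perturbed by a logistic chaotic map. The exact mixing used in the paper is not specified in this record, so the formula below is an illustrative assumption.

        import numpy as np

        def inertia_weights(n_iter, w_max=0.9, w_min=0.4, mix=0.1, z0=0.37):
            """Concave decay of the PSO inertia weight, perturbed by a logistic map."""
            t = np.arange(n_iter) / (n_iter - 1)
            concave = w_max - (w_max - w_min) * t ** 2       # concave decay schedule
            z = np.empty(n_iter)
            z[0] = z0
            for k in range(n_iter - 1):
                z[k + 1] = 4.0 * z[k] * (1.0 - z[k])         # chaotic logistic map
            return (1.0 - mix) * concave + mix * (w_min + (w_max - w_min) * z)

        w = inertia_weights(100)
        print(w[:5].round(3), "...", w[-5:].round(3))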

  18. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of groundwater modeling is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty in the irrigation data and different calibration conditions. The results from the OLS method show statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
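
    The exact IUWLS weighting scheme is not given in this record; as a sketch of the general idea, the generalized least-squares step below down-weights observations in proportion to an assumed input (pumping) uncertainty, which is the mechanism by which input errors are prevented from biasing the parameter estimates. All data and names are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy linear model: observed heads y = X @ beta + noise, where part of the
        # noise comes from uncertain pumping inputs (sigma_input, assumed known).
        n = 100
        X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])
        sigma_input = rng.uniform(0.1, 2.0, n)               # pumping uncertainty per obs
        y = X @ np.array([2.0, 0.8]) + rng.normal(0.0, 0.2 + sigma_input)

        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]      # ignores input uncertainty

        w = 1.0 / (0.2 ** 2 + sigma_input ** 2)              # generalized LS weights
        Xw = X * np.sqrt(w)[:, None]
        beta_wls = np.linalg.lstsq(Xw, y * np.sqrt(w), rcond=None)[0]

        print("OLS:", beta_ols.round(3), " weighted:", beta_wls.round(3))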

  19. Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR

    NASA Astrophysics Data System (ADS)

    Ma, Yongjun

    The fermentation process is complex and nonlinear, and many parameters are not easy to measure directly online, so soft-sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft-sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An online training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and better generalization with an appropriate v.
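
    A minimal soft-sensor sketch in the spirit described, using scikit-learn's NuSVR, where nu plays the role of the parameter v; the fermentation variables and data are synthetic placeholders, and the batch (offline) fit shown here stands in for the paper's online training algorithm.

        import numpy as np
        from sklearn.svm import NuSVR

        rng = np.random.default_rng(5)

        # Synthetic soft-sensor data: easily measured inputs (temperature, pH, dissolved
        # oxygen) predicting a hard-to-measure output (e.g., biomass concentration).
        X = rng.uniform([28.0, 5.5, 20.0], [34.0, 7.5, 60.0], size=(150, 3))
        y = 0.4 * X[:, 0] + 2.0 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.3, 150)

        # nu (the "v" in v-SVR) bounds the fraction of support vectors / margin errors.
        model = NuSVR(nu=0.5, C=10.0, kernel="rbf")
        model.fit(X, y)
        print("predicted biomass:", model.predict([[31.0, 6.8, 45.0]]).round(2))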

  20. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, the solution being based on the derivation of the nullspace of the mathematical model. The importance of the datum problem solution lies in a complete description of the TLS multi-station adjustment solutions as a set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  1. Expanding wave solutions of the Einstein equations that induce an anomalous acceleration into the Standard Model of Cosmology.

    PubMed

    Temple, Blake; Smoller, Joel

    2009-08-25

    We derive a system of three coupled equations that implicitly defines a continuous one-parameter family of expanding wave solutions of the Einstein equations, such that the Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology is embedded as a single point in this family. By approximating solutions near the center to leading order in the Hubble length, the family reduces to an explicit one-parameter family of expanding spacetimes, given in closed form, that represents a perturbation of the Standard Model. By introducing a comoving coordinate system, we calculate the correction to the Hubble constant as well as the exact leading order quadratic correction to the redshift vs. luminosity relation for an observer at the center. The correction to redshift vs. luminosity entails an adjustable free parameter that introduces an anomalous acceleration. We conclude (by continuity) that corrections to the redshift vs. luminosity relation observed after the radiation phase of the Big Bang can be accounted for, at the leading order quadratic level, by adjustment of this free parameter. The next order correction is then a prediction. Since nonlinearities alone could actuate dissipation and decay in the conservation laws associated with the highly nonlinear radiation phase and since noninteracting expanding waves represent possible time-asymptotic wave patterns that could result, we propose to further investigate the possibility that these corrections to the Standard Model might be the source of the anomalous acceleration of the galaxies, an explanation not requiring the cosmological constant or dark energy.

  2. Controlling stimulation strength and focality in electroconvulsive therapy via current amplitude and electrode size and spacing: comparison with magnetic seizure therapy.

    PubMed

    Deng, Zhi-De; Lisanby, Sarah H; Peterchev, Angel V

    2013-12-01

    Understanding the relationship between the stimulus parameters of electroconvulsive therapy (ECT) and the electric field characteristics could guide studies on improving the risk/benefit ratio. We aimed to determine the effect of current amplitude and electrode size and spacing on the ECT electric field characteristics, compare ECT focality with magnetic seizure therapy (MST), and evaluate stimulus individualization by current amplitude adjustment. The ECT and double-cone-coil MST electric fields were simulated in a 5-shell spherical human head model. A range of ECT electrode diameters (2-5 cm), spacing (1-25 cm), and current amplitudes (0-900 mA) was explored. The head model parameters were varied to examine the stimulus current adjustment required to compensate for interindividual anatomical differences. By reducing the electrode size, spacing, and current, the ECT electric field can be made more focal and superficial without increasing the scalp current density. By appropriately adjusting the electrode configuration and current, the ECT electric field characteristics can be made to approximate those of MST within 15%. Most electric field characteristics in ECT are more sensitive to head anatomy variation than in MST, especially for close electrode spacing. Nevertheless, ECT current amplitude adjustment of less than 70% can compensate for interindividual anatomical variability. The strength and focality of ECT can be varied over a wide range by adjusting the electrode size, spacing, and current. If desirable, ECT can be made as focal as MST while using simpler stimulation equipment. Current amplitude individualization can compensate for interindividual anatomical variability.

  3. Impedance analysis of cultured cells: a mean-field electrical response model for electric cell-substrate impedance sensing technique.

    PubMed

    Urdapilleta, E; Bellotti, M; Bonetto, F J

    2006-10-01

    In this paper we present a model to describe the electrical properties of a confluent cell monolayer cultured on gold microelectrodes, to be used with the electric cell-substrate impedance sensing technique. The model was developed from microscopic considerations (distributed effects) and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology, and the model has only three adjustable parameters. This model and other models currently used for data analysis are compared with data we obtained from electrical measurements of confluent monolayers of Madin-Darby canine kidney cells. One important parameter is the cell-substrate height, and we found that estimates of this quantity differ strongly depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.

  4. Economic evaluation of decompressive craniectomy versus barbiturate coma for refractory intracranial hypertension following traumatic brain injury.

    PubMed

    Alali, Aziz S; Naimark, David M J; Wilson, Jefferson R; Fowler, Robert A; Scales, Damon C; Golan, Eyal; Mainprize, Todd G; Ray, Joel G; Nathens, Avery B

    2014-10-01

    Decompressive craniectomy and barbiturate coma are often used as second-tier strategies when intracranial hypertension following severe traumatic brain injury is refractory to first-line treatments. Uncertainty surrounds the decision to choose either treatment option. We investigated which strategy is more economically attractive in this context. We performed a cost-utility analysis. A Markov Monte Carlo microsimulation model with a life-long time horizon was created to compare quality-adjusted survival and cost of the two treatment strategies, from the perspective of healthcare payer. Model parameters were estimated from the literature. Two-dimensional simulation was used to incorporate parameter uncertainty into the model. Value of information analysis was conducted to identify major drivers of decision uncertainty and focus future research. Trauma centers in the United States. Base case was a population of patients (mean age = 25 yr) who developed refractory intracranial hypertension following traumatic brain injury. We compared two treatment strategies: decompressive craniectomy and barbiturate coma. Decompressive craniectomy was associated with an average gain of 1.5 quality-adjusted life years relative to barbiturate coma, with an incremental cost-effectiveness ratio of $9,565/quality-adjusted life year gained. Decompressive craniectomy resulted in a greater quality-adjusted life expectancy 86% of the time and was more cost-effective than barbiturate coma in 78% of cases if our willingness-to-pay threshold is $50,000/quality-adjusted life year and 82% of cases at a threshold of $100,000/quality-adjusted life year. At older age, decompressive craniectomy continued to increase survival but at higher cost (incremental cost-effectiveness ratio = $197,906/quality-adjusted life year at mean age = 85 yr). Based on available evidence, decompressive craniectomy for the treatment of refractory intracranial hypertension following traumatic brain injury provides better value in terms of costs and health gains than barbiturate coma. However, decompressive craniectomy might be less economically attractive for older patients. Further research, particularly on natural history of severe traumatic brain injury patients, is needed to make more informed treatment decisions.

  5. Seasonal Influenza Forecasting in Real Time Using the Incidence Decay With Exponential Adjustment Model.

    PubMed

    Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N

    2017-01-01

    Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly upon receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of the complete time series were accurate to within 6% of the true final sizes, but final size was underestimated when using pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. The challenges are similar to those seen with more complex forecasting methodologies. Future work includes the identification of seasonal characteristics associated with variability in model performance.
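
    A small sketch of the 2-parameter IDEA model, which in its published form gives incident cases at epidemic generation t as I(t) = (R0 / (1 + d)**t)**t, with d the control/damping parameter; the counts below are synthetic and the grid-search fit is a simple stand-in for the authors' fitting procedure.

        import numpy as np

        def idea_incidence(t, r0, d):
            """IDEA model: incidence at epidemic generation t."""
            return (r0 / (1.0 + d) ** t) ** t

        # Synthetic weekly counts generated from known parameters, with noise.
        rng = np.random.default_rng(11)
        t = np.arange(1, 20)
        counts = idea_incidence(t, 1.4, 0.02) * rng.lognormal(0.0, 0.05, len(t))

        # Grid-search fit of (R0, d) by least squares on log incidence.
        r0_grid = np.linspace(1.1, 1.8, 71)
        d_grid = np.linspace(0.0, 0.06, 61)
        best = min(
            ((r0, d) for r0 in r0_grid for d in d_grid),
            key=lambda p: np.sum((np.log(idea_incidence(t, *p)) - np.log(counts)) ** 2),
        )
        print("fitted R0, d:", best)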

  6. Relation of Heart Rate and its Variability during Sleep with Age, Physical Activity, and Body Composition in Young Children

    PubMed Central

    Herzig, David; Eser, Prisca; Radtke, Thomas; Wenger, Alina; Rusterholz, Thomas; Wilhelm, Matthias; Achermann, Peter; Arhab, Amar; Jenni, Oskar G.; Kakebeeke, Tanja H.; Leeger-Aschmann, Claudia S.; Messerli-Bürgy, Nadine; Meyer, Andrea H.; Munsch, Simone; Puder, Jardena J.; Schmutz, Einat A.; Stülb, Kerstin; Zysset, Annina E.; Kriemler, Susi

    2017-01-01

    Background: Recent studies have claimed a positive effect of physical activity and body composition on vagal tone. In pediatric populations, there is a pronounced decrease in heart rate with age. While this decrease is often interpreted as an age-related increase in vagal tone, there is some evidence that it may be related to a decrease in intrinsic heart rate. This factor has not been taken into account in most previous studies. The aim of the present study was to assess the association between physical activity and/or body composition and heart rate variability (HRV) independently of the decline in heart rate in young children. Methods: Anthropometric measurements were taken in 309 children aged 2–6 years. Ambulatory electrocardiograms were collected over 14–18 h comprising a full night, and accelerometry over 7 days. HRV was determined for three different night segments: (1) over 5 min during deep sleep identified automatically based on HRV characteristics; (2) during a 20 min segment starting 15 min after sleep onset; (3) over a 4-h segment between midnight and 4 a.m. Linear models were computed for HRV parameters with anthropometric and physical activity variables, adjusted for heart rate and other confounding variables (e.g., age for physical activity models). Results: We found a decline in heart rate with increasing physical activity and decreasing skinfold thickness. HRV parameters decreased with increasing age, height, and weight in HR-adjusted regression models. These relationships were only found in segments of deep sleep detected automatically based on HRV or manually 15 min after sleep onset, but not in the 4-h segment with random sleep phases. Conclusions: Contrary to most previous studies, we found no increase of standard HRV parameters with age; however, when adjusted for heart rate, there was a significant decrease of HRV parameters with increasing age. Without knowing intrinsic heart rate, correct interpretation of HRV in growing children is impossible. PMID:28286485

  8. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    PubMed

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
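
    As context for the outer loop, the sketch below solves a standard LQR problem with known dynamics via the continuous algebraic Riccati equation; the paper's contribution is to obtain the same kind of gain model-free with integral reinforcement learning, so the matrices here are illustrative placeholders only.

    ```python
    # Reference-point sketch: with known dynamics (A, B), the optimal LQR gain
    # follows from the continuous algebraic Riccati equation. Matrices are
    # placeholders, not the paper's task/impedance model.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [0.0, -2.0]])    # placeholder impedance-model dynamics
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])       # penalize tracking error and velocity
    R = np.array([[0.1]])          # penalize human/robot effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain, u = -K x
    print("LQR gain K:", K)
    ```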

  9. Genetic Parameters of Pre-adjusted Body Weight Growth and Ultrasound Measures of Body Tissue Development in Three Seedstock Pig Breed Populations in Korea

    PubMed Central

    Choy, Yun Ho; Mahboob, Alam; Cho, Chung Il; Choi, Jae Gwan; Choi, Im Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho

    2015-01-01

    The objective of this study was to compare the effects of body weight growth adjustment methods on genetic parameters of body growth and tissue development among three pig breeds. Data collected on 101,820 Landrace, 281,411 Yorkshire, and 78,068 Duroc pigs, born in Korean swine breeder farms since 2000, were analyzed. Records included body weights on test day and amplitude (A)-mode ultrasound carcass measures of backfat thickness (BF), eye muscle area (EMA), and retail cut percentage (RCP). Days to 90 kg body weight (DAYS90) were obtained through an adjustment of the age based on the body weight at the test day. Ultrasound measures were also pre-adjusted (ABF, AEMA, ARCP) based on their test day measures. The (co)variance components were obtained with 3 multi-trait animal models using the REMLF90 software package. Model I included DAYS90 and the ultrasound traits, whereas models II and III included DAYS90 and the pre-adjusted ultrasound traits. Fixed factors were sex and contemporary group (herd-year-month of birth) for all traits among the models. Additionally, models I and II considered a linear covariate of final weight on the ultrasound measure traits. Heritability (h2) estimates for DAYS90, BF, EMA, and RCP ranged from 0.36 to 0.42, 0.34 to 0.43, 0.20 to 0.22, and 0.39 to 0.45, respectively, among the models. The h2 estimates of DAYS90 from models II and III were similar. The h2 estimates for ABF, AEMA, and ARCP were 0.35 to 0.44, 0.20 to 0.25, and 0.41 to 0.46, respectively. Our heritability estimates varied mostly among the breeds. The genetic correlations (rG) were moderately negative between DAYS90 and BF (−0.29 to −0.38), and between DAYS90 and EMA (−0.16 to −0.26). BF had a strong rG with RCP (−0.87 to −0.93). Moderately positive rG existed between DAYS90 and RCP (0.20 to 0.28) and between EMA and RCP (0.35 to 0.44) among the breeds. In models II and III, the correlations of DAYS90 with ABF, AEMA, and ARCP were mostly low or negligible, except the rG between DAYS90 and AEMA from model III (0.27 to 0.30). The rG of AEMA with ABF and with ARCP were moderate but with negative and positive signs, respectively, which also reflected the influence of pre-adjustment. However, the rG between BF and RCP remained unaffected by trait pre-adjustment or covariate fits. Therefore, we conclude that ultrasound measures taken at a test endpoint of about 90 kg body weight should be adjusted for body weight growth. Our adjustment formulas, particularly those for BF and EMA, should be revised further to accommodate the added variation due to different performance testing endpoints with regard to differential growth in body composition. PMID:26580436
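
    As a reminder of how the reported quantities derive from REML (co)variance components, a minimal sketch (values are illustrative, not the study's estimates):

    ```python
    # Heritability and genetic correlation from (co)variance components such as
    # those returned by REML estimation (e.g., REMLF90). Numbers are invented.

    def heritability(var_additive: float, var_residual: float) -> float:
        """h^2 = additive genetic variance / phenotypic variance."""
        return var_additive / (var_additive + var_residual)

    def genetic_correlation(cov_a12: float, var_a1: float, var_a2: float) -> float:
        """rG = genetic covariance / product of genetic standard deviations."""
        return cov_a12 / (var_a1 ** 0.5 * var_a2 ** 0.5)

    print(heritability(0.40, 0.60))                 # h^2 = 0.40
    print(genetic_correlation(-0.12, 0.40, 0.45))   # ~ -0.28, moderately negative
    ```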

  11. Dosimetric Analysis of Radiation-induced Gastric Bleeding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Mary, E-mail: maryfeng@umich.edu; Normolle, Daniel; Pan, Charlie C.

    2012-09-01

    Purpose: Radiation-induced gastric bleeding has been poorly understood. In this study, we described dosimetric predictors for gastric bleeding after fractionated radiation therapy. Methods and Materials: The records of 139 sequential patients treated with 3-dimensional conformal radiation therapy (3D-CRT) for intrahepatic malignancies were reviewed. Median follow-up was 7.4 months. The parameters of a Lyman normal tissue complication probability (NTCP) model for the occurrence of grade ≥3 gastric bleed, adjusted for cirrhosis, were fitted to the data. The principle of maximum likelihood was used to estimate parameters for NTCP models. Results: Sixteen of 116 evaluable patients (14%) developed gastric bleeds at a median time of 4.0 months (mean, 6.5 months; range, 2.1-28.3 months) following completion of RT. The median and mean maximum doses to the stomach were 61 and 63 Gy (range, 46-86 Gy), respectively, after biocorrection of each part of the 3D dose distributions to equivalent 2-Gy daily fractions. The Lyman NTCP model with parameters adjusted for cirrhosis predicted gastric bleed. Best-fit Lyman NTCP model parameters were n = 0.10 and m = 0.21, with TD50(normal) = 56 Gy and TD50(cirrhosis) = 22 Gy. The low n value is consistent with the importance of maximum dose; the lower TD50 value for the cirrhosis patients points out their greater sensitivity. Conclusions: This study demonstrates that the Lyman NTCP model has utility for predicting gastric bleeding and that the presence of cirrhosis greatly increases this risk. These findings should facilitate the design of future clinical trials involving high-dose upper abdominal radiation.
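
    A minimal sketch of the Lyman-Kutcher-Burman NTCP computation with the best-fit parameters reported above; the example dose-volume histogram is invented, and the exact volume-reduction scheme used in the study may differ.

    ```python
    # Lyman-Kutcher-Burman NTCP: given a differential DVH (dose bins, fractional
    # volumes), the generalized equivalent uniform dose is
    #   EUD = (sum_i v_i * D_i^(1/n))^n
    # and NTCP = Phi((EUD - TD50) / (m * TD50)). The small n emphasizes maximum
    # dose. Parameters n, m, TD50 follow the abstract; the DVH is made up.
    import numpy as np
    from scipy.stats import norm

    def lyman_ntcp(dose_bins, vol_fracs, n=0.10, m=0.21, td50=56.0):
        eud = np.sum(vol_fracs * dose_bins ** (1.0 / n)) ** n
        return norm.cdf((eud - td50) / (m * td50))

    dose = np.array([20.0, 40.0, 60.0])   # Gy, equivalent 2-Gy fractions
    vol = np.array([0.5, 0.3, 0.2])       # fractional stomach volume per bin
    print(f"NTCP (no cirrhosis):   {lyman_ntcp(dose, vol, td50=56.0):.2f}")
    print(f"NTCP (with cirrhosis): {lyman_ntcp(dose, vol, td50=22.0):.2f}")
    ```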

  12. Wave-front propagation in a discrete model of excitable media

    NASA Astrophysics Data System (ADS)

    Feldman, A. B.; Chernyak, Y. B.; Cohen, R. J.

    1998-06-01

    We generalize our recent discrete cellular automata (CA) model of excitable media [Y. B. Chernyak, A. B. Feldman, and R. J. Cohen, Phys. Rev. E 55, 3215 (1997)] to incorporate the effects of inhibitory processes on the propagation of the excitation wave front. In the common two variable reaction-diffusion (RD) models of excitable media, the inhibitory process is described by the v "controller" variable responsible for the restoration of the equilibrium state following excitation. In myocardial tissue, the inhibitory effects are mainly due to the inactivation of the fast sodium current. We represent inhibition using a physical model in which the "source" contribution of excited elements to the excitation of their neighbors decreases with time as a simple function with a single adjustable parameter (a rate constant). We sought specific solutions of the CA state transition equations and obtained (both analytically and numerically) the dependence of the wave-front speed c on the four model parameters and the wave-front curvature κ. By requiring that the major characteristics of c(κ) in our CA model coincide with those obtained from solutions of a specific RD model, we find a unique set of CA parameter values for a given excitable medium. The basic structure of our CA solutions is remarkably similar to that found in typical RD systems (similar behavior is observed when the analogous model parameters are varied). Most notably, the "turn-on" of the inhibitory process is accompanied by the appearance of a solution branch of slow speed, unstable waves. Additionally, when κ is small, we obtain a family of "eikonal" relations c(κ) that are suitable for the kinematic analysis of traveling waves in the CA medium. We compared the solutions of the CA equations to CA simulations for the case of plane waves and circular (target) waves and found excellent agreement. We then studied a spiral wave using the CA model adjusted to a specific RD system and found good correspondence between the shapes of the RD and CA spiral arms in the region away from the tip where kinematic theory applies. Our analysis suggests that only four physical parameters control the behavior of wave fronts in excitable media.

  13. The thermal structure of Titan's atmosphere

    NASA Technical Reports Server (NTRS)

    Mckay, Christopher P.; Pollack, James B.; Courtin, Regis

    1989-01-01

    The present radiative-convective model of the Titan atmosphere thermal structure computes the solar and IR radiation in a series of spectral intervals with vertical resolution. Haze properties have been determined with a microphysics model encompassing a minimum of free parameters. It is determined that gas and haze opacity alone, using temperatures established by Voyager observations, yields a model that is within a few percent of radiative-convective balance throughout the Titan atmosphere. Model calculations of the surface temperature are generally 5-10 K colder than the observed value; better agreement is obtained through adjustment of the model parameters. Sunlight absorption by stratospheric haze and pressure-induced gas opacity in the IR are the most important factors controlling the thermal structure.

  14. Spiking and bursting patterns of fractional-order Izhikevich model

    NASA Astrophysics Data System (ADS)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed-mode oscillations, regular bursting, and chattering, by adjusting only the fractional order. Both the active and silent phases of the burst lengthen as the fractional-order model deviates further from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
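
    A hedged sketch of one common way to realize such a model numerically: a Grünwald-Letnikov discretization, in which the fractional derivative of order α < 1 turns each voltage update into a weighted sum over the neuron's entire past (the memory trace). Applying the fractional operator to the voltage equation only, and the specific parameter values, are simplifying assumptions of this sketch rather than the paper's exact formulation.

    ```python
    # Fractional-order Izhikevich neuron via a Grunwald-Letnikov scheme.
    # GL binomial weights: w0 = 1, wk = w(k-1) * (1 - (1 + alpha)/k); solving
    # D^alpha v = f(v, u) then gives v_n = h^alpha * f - sum_k w_k v_{n-k}.
    import numpy as np

    def fractional_izhikevich(alpha=0.8, T=500.0, h=0.1,
                              a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0):
        steps = int(T / h)
        w = np.ones(steps)
        for k in range(1, steps):
            w[k] = w[k - 1] * (1.0 - (1.0 + alpha) / k)

        v = np.empty(steps); u = np.empty(steps)
        v[0], u[0] = -65.0, b * -65.0
        spikes = 0
        for n in range(1, steps):
            # memory trace: weighted sum over all past voltages
            trace = np.dot(w[1:n + 1], v[n - 1::-1])
            dv = 0.04 * v[n - 1] ** 2 + 5.0 * v[n - 1] + 140.0 - u[n - 1] + I
            v[n] = h ** alpha * dv - trace
            u[n] = u[n - 1] + h * a * (b * v[n - 1] - u[n - 1])  # classical u
            if v[n] >= 30.0:            # spike: reset as in the classical model
                v[n], u[n] = c, u[n] + d
                spikes += 1
        return v, spikes

    v, n_spikes = fractional_izhikevich(alpha=0.8)
    print(f"{n_spikes} spikes in 500 ms of simulated time")
    ```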

  15. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust the information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates standard errors similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  16. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: Application of a stratified model

    NASA Astrophysics Data System (ADS)

    Lee, Kang Il

    2012-08-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreement with the experimental measurements.
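
    For reference, the adjusted squared correlation coefficient used to rank predictors penalizes R² for the number of regressors p given the sample size n (here n = 22 samples):

    ```python
    # Adjusted R^2: penalizes R^2 for model size relative to sample size.

    def adjusted_r2(r2: float, n: int, p: int) -> float:
        return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

    # e.g., trabecular separation alone: r = -0.62 -> R^2 = 0.384 with n = 22
    print(round(adjusted_r2(0.384, n=22, p=1), 2))  # ~0.35, near the reported 0.36
    ```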

  17. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  18. Chemically Realistic Tetrahedral Lattice Models for Polymer Chains: Application to Polyethylene Oxide.

    PubMed

    Dietschreit, Johannes C B; Diestler, Dennis J; Knapp, Ernst W

    2016-05-10

    To speed up the generation of an ensemble of poly(ethylene oxide) (PEO) polymer chains in solution, a tetrahedral lattice model possessing the appropriate bond angles is used. The distance between noncovalently bonded atoms is maintained at realistic values by generating chains with an enhanced degree of self-avoidance by a very efficient Monte Carlo (MC) algorithm. Potential energy parameters characterizing this lattice model are adjusted so as to mimic realistic PEO polymer chains in water simulated by molecular dynamics (MD), which serves as a benchmark. The MD data show that PEO chains have a fractal dimension of about two, in contrast to self-avoiding walk lattice models, which exhibit the fractal dimension of 1.7. The potential energy accounts for a mild hydrophobic effect (HYEF) of PEO and for a proper setting of the distribution between trans and gauche conformers. The potential energy parameters are determined by matching the Flory radius, the radius of gyration, and the fraction of trans torsion angles in the chain. A gratifying result is the excellent agreement of the pair distribution function and the angular correlation for the lattice model with the benchmark distribution. The lattice model allows for the precise computation of the torsional entropy of the chain. The generation of polymer conformations of the adjusted lattice model is at least 2 orders of magnitude more efficient than MD simulations of the PEO chain in explicit water. This method of generating chain conformations on a tetrahedral lattice can also be applied to other types of polymers with appropriate adjustment of the potential energy function. The efficient MC algorithm for generating chain conformations on a tetrahedral lattice is available for download at https://github.com/Roulattice/Roulattice.

  19. State and Parameter Estimation for a Coupled Ocean-Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Kondrashov, D.; Sun, C.

    2006-12-01

    The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and therefore plays a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and the assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward-propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
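
    A toy sketch of the estimation idea: in an augmented-state (joint) EKF, the unknown parameter is appended to the state with a random-walk model and updated alongside it. The scalar system below is an invented stand-in for illustration, not the coupled ENSO model.

    ```python
    # Joint state/parameter EKF on a toy system x_{k+1} = mu*sin(x_k) + 0.5 + noise,
    # observing x with noise; the augmented state is z = [x, mu].
    import numpy as np

    rng = np.random.default_rng(1)
    mu_true = 0.9                    # "coupling" parameter to recover
    T, q, r = 300, 1e-4, 1e-2        # steps, process and observation variances

    x, obs = 0.5, []
    for _ in range(T):
        x = mu_true * np.sin(x) + 0.5 + rng.normal(0, q ** 0.5)
        obs.append(x + rng.normal(0, r ** 0.5))

    z = np.array([0.5, 0.5])         # poor initial guess for mu
    P = np.eye(2)
    Q = np.diag([q, 1e-6])           # mu evolves as a slow random walk
    R = np.array([[r]])
    H = np.array([[1.0, 0.0]])
    for y in obs:
        # predict: f(z) = [mu*sin(x) + 0.5, mu]; F is its Jacobian at the prior
        F = np.array([[z[1] * np.cos(z[0]), np.sin(z[0])],
                      [0.0, 1.0]])
        z = np.array([z[1] * np.sin(z[0]) + 0.5, z[1]])
        P = F @ P @ F.T + Q
        # update with the scalar observation y
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        z = z + (K @ (np.array([y]) - H @ z)).ravel()
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated mu = {z[1]:.2f} (truth {mu_true})")
    ```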

  20. Comparison of SWAT Hydrological Model Results from TRMM 3B42, NEXRAD Stage III, and Oklahoma Mesonet Data

    NASA Astrophysics Data System (ADS)

    Tobin, K. J.; Bennett, M. E.

    2008-05-01

    The Cimarron River Basin (3110 sq km) between Dodge and Guthrie, Oklahoma is located in northern Oklahoma and was used as a test bed to compare the hydrological model performance associated with different methods of precipitation quantification. The Soil and Water Assessment Tool (SWAT) was selected for this project, which is a comprehensive model that, besides quantifying watershed hydrology, can simulate water quality as well as nutrient and sediment loading within stream reaches. An advantage of this location is the extensive monitoring of MET parameters (precipitation, temperature, relative humidity, wind speed, solar radiation) afforded by the Oklahoma Mesonet, which has been documented to improve the performance of SWAT. The utility of TRMM 3B42 and NEXRAD Stage III data in supporting the hydrologic modeling of the Cimarron River Basin is demonstrated. Minor adjustments to selected model parameters were made to make parameter values more realistic based on results and information from previous studies and to more realistically simulate base flow. Significantly, no ad hoc adjustments to major parameters such as Curve Number or Available Soil Water were made, and robust simulations were obtained. TRMM and NEXRAD data are aggregated into an average daily estimate of precipitation for each TRMM grid cell (0.25 degree X 0.25 degree). Preliminary simulation of stream flow (year 2004 to 2006) in the Cimarron River Basin yields acceptable monthly results with very little adjustment of model parameters using TRMM 3B42 precipitation data (mass balance error = 3 percent; Monthly Nash-Sutcliffe efficiency coefficient (NS) = 0.77). However, both Oklahoma Mesonet rain gauge (mass balance error = 13 percent; Monthly NS = 0.91; Daily NS = 0.64) and NEXRAD Stage III data (mass balance error = -5 percent; Monthly NS = 0.95; Daily NS = 0.69) produce superior simulations even at a sub-monthly time scale; daily results are time averaged over a three day period. Note that all types of precipitation data perform better than a synthetic precipitation dataset generated using a weather simulator (mass balance error = 12 percent; Monthly NS = 0.40). Our study again documents that merged precipitation satellite products, such as TRMM 3B42, can support semi-distributed hydrologic modeling at the watershed scale. However, additional work is apparently required to improve TRMM precipitation retrievals over land in order to generate a product that yields more robust hydrological simulations, especially at finer time scales. Additionally, ongoing work in this basin will compare TRMM results with stream flow model results generated using CMORPH precipitation estimates. Finally, in the future we plan to use simulated, semi-distributed soil moisture values determined by SWAT for comparison with gridded soil moisture estimates from TRMM-TMI that should provide further validation of our modeling efforts.
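
    For reference, the Nash-Sutcliffe efficiency used throughout these comparisons is NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); a small sketch with invented flows:

    ```python
    # Nash-Sutcliffe efficiency: NS = 1 is a perfect fit; NS <= 0 means the
    # model is no better than predicting the observed mean.
    import numpy as np

    def nash_sutcliffe(obs, sim) -> float:
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([12.0, 30.0, 55.0, 41.0, 22.0, 15.0])  # monthly flows (m^3/s)
    sim = np.array([10.0, 33.0, 50.0, 45.0, 20.0, 14.0])
    print(f"NS = {nash_sutcliffe(obs, sim):.2f}")
    ```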

  1. Approximated adjusted fractional Bayes factors: A general method for testing informative hypotheses.

    PubMed

    Gu, Xin; Mulder, Joris; Hoijtink, Herbert

    2018-05-01

    Informative hypotheses are increasingly being used in the psychological sciences because they adequately capture researchers' theories and expectations. In the Bayesian framework, the evaluation of informative hypotheses often makes use of default Bayes factors such as the fractional Bayes factor. This paper approximates and adjusts the fractional Bayes factor such that it can be used to evaluate informative hypotheses in general statistical models. In the fractional Bayes factor, a fraction parameter must be specified that controls the amount of information in the data used for specifying an implicit prior. The remaining fraction is used for testing the informative hypotheses. We discuss different choices of this parameter and present a scheme for setting it. Furthermore, a software package is described which computes the approximated adjusted fractional Bayes factor. Using this software package, psychological researchers can evaluate informative hypotheses by means of Bayes factors in a straightforward manner. Two empirical examples are used to illustrate the procedure. © 2017 The British Psychological Society.
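
    For orientation, O'Hagan's fractional Bayes factor, on which the approximated adjusted variant builds, is commonly written as follows; this reconstruction follows the standard formulation rather than the paper's notation:

    ```latex
    % Fractional Bayes factor for H_i vs. H_j: a fraction b of the likelihood
    % defines the implicit prior; the remaining 1 - b is used for testing.
    \[
      B^{b}_{ij} = \frac{q_i(b, x)}{q_j(b, x)},
      \qquad
      q_i(b, x)
      = \frac{\int \pi_i(\theta_i)\, f(x \mid \theta_i)\, d\theta_i}
             {\int \pi_i(\theta_i)\, f(x \mid \theta_i)^{\,b}\, d\theta_i}.
    \]
    ```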

  2. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    PubMed

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m⁻³ d⁻¹ were used to compare and evaluate the models. Experimental data obtained at low OLRs were accurately modeled whatever the model, thereby validating the stoichiometric parameters used and the influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly due to the wrong simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the steps most sensitive to long-chain fatty acid (LCFA) inhibition and enabled identification of the inhibition parameters of both steps. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  4. Comparison of DSMC Reaction Models with QCT Reaction Rates for Nitrogen

    DTIC Science & Technology

    2016-07-17

    [Abstract garbled in the source record; recoverable fragments only.] Distribution A: approved for public release, distribution unlimited (PA #16299). The briefing compares DSMC reaction models with quasi-classical trajectory (QCT) reaction rates for nitrogen; comparison with measurements is the final goal, with validation, model verification, and parameter adjustment as intermediate steps. Four chemistry models are considered: total collision energy (TCE), quantum kinetic (QK), vibration-dissociation favoring, and a fourth model whose name is truncated in the record.

  5. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
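
    A brief sketch of one of the four SA approaches listed above, standardized regression coefficients (SRC): inputs and output are standardized to zero mean and unit variance, and the fitted linear coefficients then rank parameter influence. The parameter samples below are synthetic placeholders, not CLM output.

    ```python
    # Standardized regression coefficients (SRC) for sensitivity ranking.
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 200, 3
    X = rng.uniform(0, 1, (n, p))                               # parameter samples
    y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.1, n)   # model response

    Xs = (X - X.mean(0)) / X.std(0)     # standardize inputs
    ys = (y - y.mean()) / y.std()       # standardize output
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for i, c in enumerate(src):
        print(f"parameter {i}: SRC = {c:+.2f}")  # larger |SRC| = more influential
    ```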

  6. A brief dataset on the model-based evaluation of the growth performance of Bacillus coagulans and l-lactic acid production in a lignin-supplemented medium.

    PubMed

    Glaser, Robert; Venus, Joachim

    2017-04-01

    The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and l-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium" (R. Glaser and J. Venus, 2016) [1]. This data survey provides information on the characterization of three Bacillus coagulans strains. Information on the cofermentation of lignocellulose-related sugars in lignin-containing media is given. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab-scale fermentation procedures are examined by adjusting a Monod kinetics-based growth model. Lignin consumption is analyzed using data on the decolorization of a lignin-supplemented minimal medium.
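
    A hedged sketch of the kind of logistic-growth parameter adjustment mentioned above, using N(t) = K / (1 + ((K - N0)/N0) e^(-rt)); the optical-density readings are invented for illustration.

    ```python
    # Fitting a logistic growth model to optical-density screening data.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, N0):
        return K / (1.0 + (K - N0) / N0 * np.exp(-r * t))

    t = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)  # hours
    od = np.array([0.05, 0.09, 0.21, 0.45, 0.78, 1.02, 1.12, 1.15])

    (K, r, N0), _ = curve_fit(logistic, t, od, p0=(1.2, 0.5, 0.05))
    print(f"carrying capacity K = {K:.2f}, growth rate r = {r:.2f} 1/h")
    ```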

  7. High-Level Performance Modeling of SAR Systems

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.

  8. Cosmological implications of a large complete quasar sample.

    PubMed

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q0 = 0, Lambda = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11 sigma from direct observation; none of the C2 predictions deviate by more than 2 sigma. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  9. The Association between Bone Quality and Atherosclerosis: Results from Two Large Population-Based Studies.

    PubMed

    Lange, V; Dörr, M; Schminke, U; Völzke, H; Nauck, M; Wallaschofski, H; Hannemann, A

    2017-01-01

    It is highly debated whether associations between osteoporosis and atherosclerosis are independent of cardiovascular risk factors. We aimed to explore the associations of quantitative ultrasound (QUS) parameters at the heel with the carotid artery intima-media thickness (IMT), the presence of carotid artery plaques, and the ankle-brachial index (ABI). The study population comprised 5680 men and women aged 20-93 years from two population-based cohort studies: the Study of Health in Pomerania (SHIP) and SHIP-Trend. QUS measurements were performed at the heel. The extracranial carotid arteries were examined with B-mode ultrasonography. ABI was measured in a subgroup of 3853 participants. Analyses of variance and linear and logistic regression models were calculated and adjusted for major cardiovascular risk factors. Men, but not women, had significantly increased odds for carotid artery plaques with decreasing QUS parameters, independent of diabetes mellitus, dyslipidemia, and hypertension. Beyond this, the QUS parameters were not significantly associated with IMT or ABI in fully adjusted models. Our data argue against an independent role of bone metabolism in atherosclerotic changes in women. Yet in men, associations with advanced atherosclerosis exist. Thus, men presenting with clinical signs of osteoporosis may be at increased risk for atherosclerotic disease.

  10. A comparison of scope for growth (SFG) and dynamic energy budget (DEB) models applied to the blue mussel ( Mytilus edulis)

    NASA Astrophysics Data System (ADS)

    Filgueira, Ramón; Rosland, Rune; Grant, Jon

    2011-11-01

    Growth of Mytilus edulis was simulated using individual-based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments, achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analyses of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK indicates the importance of feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should be conducted to explore the relationship of XK to environmental variables and/or to the sophistication of the functional response to food availability, with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
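
    The calibrated term both models share can be illustrated with a Michaelis-Menten (Holling type II) ingestion response, where XK is the half-saturation food concentration; the numbers below are illustrative only.

    ```python
    # Scaled ingestion response f = X / (X + XK): lower XK means ingestion
    # saturates at lower food concentrations.

    def ingestion_fraction(food: float, xk: float) -> float:
        """Scaled ingestion rate f in [0, 1)."""
        return food / (food + xk)

    for xk in (1.0, 3.0):                        # two candidate calibrations
        f = ingestion_fraction(food=2.0, xk=xk)  # food in, e.g., ug chl-a / L
        print(f"XK = {xk}: f = {f:.2f}")
    ```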

  11. Numerical Optimization of Density Functional Tight Binding Models: Application to Molecules Containing Carbon, Hydrogen, Nitrogen, and Oxygen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnapriyan, A.; Yang, P.; Niklasson, A. M. N.

    New parametrizations for semiempirical density functional tight binding (DFTB) theory have been developed by the numerical optimization of adjustable parameters to minimize errors in the atomization energy and interatomic forces with respect to ab initio calculated data. Initial guesses for the radial dependences of the Slater-Koster bond integrals and overlap integrals were obtained from minimum basis density functional theory calculations. The radial dependences of the pair potentials and the bond and overlap integrals were represented by simple analytic functions. The adjustable parameters in these functions were optimized by simulated annealing and steepest descent algorithms to minimize the value of an objective function that quantifies the error between the DFTB model and ab initio calculated data. The accuracy and transferability of the resulting DFTB models for the C, H, N, and O system were assessed by comparing the predicted atomization energies and equilibrium molecular geometries of small molecules that were not included in the training data from DFTB to ab initio data. The DFTB models provide accurate predictions of the properties of hydrocarbons and more complex molecules containing C, H, N, and O.

  12. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  15. LAGEOS geodetic analysis-SL7.1

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.

    1991-01-01

    Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses of this data set are explained, including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), parameters describing the Earth's elasticity properties (Love numbers), and the temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and tracking-site horizontal tectonic motions. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses, as are parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking-site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.

  16. On the interatomic potentials for noble gas mixtures

    NASA Astrophysics Data System (ADS)

    Watanabe, Kyoko; Allnatt, A. R.; Meath, William J.

    1982-07-01

    Recently, a relatively simple scheme for the construction of isotropic intermolecular potentials has been proposed and tested for the like-species interactions involving He, Ne, Ar, Kr, and H2. The model potential has an adjustable parameter which controls the balance between its exchange and Coulomb energy components. The representation of the Coulomb energy contains a damped multipolar dispersion energy series (which is truncated through O(R^-10)) and provides additional flexibility through adjustment of the dispersion energy coefficients, particularly C8 and C10, within conservative error estimates. In this paper the scheme is tested further by application to interactions involving unlike noble gas atoms, where the parameters in the potential model are determined by fitting mixed second virial coefficient data as a function of temperature. Generally the approach leads to potentials of accuracy comparable to the best available literature potentials, which are usually determined using a large base of experimental and theoretical input data. Our results also strongly indicate the need for high-quality virial data.
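
    A standard form of such a damped dispersion series, reconstructed here for orientation (f_n are damping functions; C6, C8, C10 the dispersion coefficients):

    ```latex
    % Damped multipolar dispersion energy, truncated through O(R^{-10}).
    \[
      E_{\mathrm{disp}}(R) \;=\; -\sum_{n=6,8,10} f_n(R)\,\frac{C_n}{R^{\,n}}
    \]
    ```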

  17. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
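
    For reference, the Kling-Gupta efficiency named among those metrics combines correlation, variability ratio, and bias ratio; a small sketch with invented flows:

    ```python
    # Kling-Gupta efficiency: KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    # with r the obs/sim correlation, alpha the std-dev ratio, beta the mean ratio.
    import numpy as np

    def kge(obs: np.ndarray, sim: np.ndarray) -> float:
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()
        beta = sim.mean() / obs.mean()
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    obs = np.array([5.0, 9.0, 30.0, 18.0, 7.0, 4.0])  # illustrative flows
    sim = np.array([6.0, 8.0, 26.0, 20.0, 8.0, 5.0])
    print(f"KGE = {kge(obs, sim):.2f}")
    ```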

  18. Theory of the lattice Boltzmann Method: Dispersion, Dissipation, Isotropy, Galilean Invariance, and Stability

    NASA Technical Reports Server (NTRS)

    Lallemand, Pierre; Luo, Li-Shi

    2000-01-01

    The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically by using perturbation technique or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long-wavelength limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.

  19. On the climate policy implications of substitutability and flexibility in the economy: An in-depth integrated assessment model diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craxton, Melanie; Merrick, James; Makridis, Christos

    This paper conducts an in-depth model diagnostic exercise for two parameters: 1) the elasticity of substitution between the capital/labour aggregate and the energy aggregate in the production function of the Integrated Assessment Model (IAM) MERGE, and 2) the rate at which new technologies can be deployed within the energy system. We show that in a more complementary world the model's ability to adjust the carbon intensity of its energy sector is more important, whereas in a more substitutable world the ability to expand carbon-free technologies is of lesser relative importance. The uncertainty in the literature surrounding the elasticity of substitution parameter, its interaction with the mechanisms of technical change, and the associated danger of grounding forward-looking analyses in historically based parameters lend support to the importance of such a diagnostic exercise. Building on work from model intercomparison studies, we investigate whether a given model's choice of strategy is primarily a function of the choice of its parameter values or of its structure. A deeper understanding of what drives model behaviour is beneficial both to modellers and to the policymakers who utilise their insights and output.

  20. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.

  1. On the climate policy implications of substitutability and flexibility in the economy: An in-depth integrated assessment model diagnostic

    DOE PAGES

    Craxton, Melanie; Merrick, James; Makridis, Christos; ...

    2017-07-12

    This paper conducts an in-depth model diagnostic exercise for two parameters: 1) the elasticity of substitution between the capital/labour aggregate and the energy aggregate in the production function of the Integrated Assessment Model (IAM) MERGE, and 2) the rate at which new technologies can be deployed within the energy system. We show that in a more complementary world the model's ability to adjust the carbon intensity of its energy sector is more important, whereas in a more substitutable world the ability to expand carbon-free technologies is of lesser relative importance. The uncertainty in the literature surrounding the elasticity of substitution parameter, its interaction with the mechanisms of technical change, and the associated danger of grounding forward-looking analyses in historically based parameters lend support to the importance of such a diagnostic exercise. Building on work from model intercomparison studies, we investigate whether a given model's choice of strategy is primarily a function of the choice of its parameter values or of its structure. A deeper understanding of what drives model behaviour is beneficial both to modellers and to the policymakers who utilise their insights and output.

  2. Validation Testing of a Peridynamic Impact Damage Model Using NASA's Micro-Particle Gun

    NASA Technical Reports Server (NTRS)

    Baber, Forrest E.; Zelinski, Brian J.; Guven, Ibrahim; Gray, Perry

    2017-01-01

    Through a collaborative effort between Virginia Commonwealth University and Raytheon, a peridynamic model for sand impact damage has been developed [1-3]. Model development has focused on simulating impacts of sand particles on ZnS traveling at velocities consistent with aircraft take-off and landing speeds. The model reproduces common features of impact damage, including pit and radial cracks and, under some conditions, lateral cracks. This study focuses on a preliminary validation exercise in which simulation results from the peridynamic model are compared to a limited experimental data set generated by NASA's recently developed micro-particle gun (MPG). The MPG facility measures the dimensions and incoming and rebound velocities of the impact particles. It also links each particle to a specific impact site and its associated damage. In this validation exercise, parameters of the peridynamic model are adjusted to fit the experimentally observed pit diameter, average length of radial cracks, and rebound velocities for four impacts of 300 µm glass beads on ZnS. Results indicate that a reasonable fit of these impact characteristics can be obtained by suitable adjustment of the peridynamic input parameters, demonstrating that the MPG can be used effectively as a validation tool for impact modeling and that the peridynamic sand impact model described herein possesses not only a qualitative but also a quantitative ability to simulate sand impact events.

  3. A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy

    NASA Astrophysics Data System (ADS)

    Wada, Ken; Hyodo, Toshio

    2013-06-01

    Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model based on the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with a modification involving the intrinsic Ps annihilation rate), which is applicable to very small pores. The combined model, i.e., the modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics based extended Tao-Eldrup model, which deals with Ps trapped in, and in thermal equilibrium with, a rectangular pore.
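
    For reference, the well-established Tao-Eldrup relation mentioned above maps an o-Ps pick-off lifetime to a spherical pore radius; a minimal implementation is sketched below with the standard constants (electron-layer thickness ΔR = 0.166 nm, spin-averaged annihilation rate 2 ns^-1). The paper's modified versions add further terms (e.g., the intrinsic o-Ps annihilation rate, ~1/142 ns^-1), which are omitted here.

        import numpy as np

        DELTA_R = 0.166  # nm, empirical electron-layer thickness

        def tao_eldrup_lifetime(radius_nm):
            """o-Ps pick-off lifetime (ns) in a spherical pore of radius R (nm)."""
            x = radius_nm / (radius_nm + DELTA_R)
            rate = 2.0 * (1.0 - x + np.sin(2.0 * np.pi * x) / (2.0 * np.pi))  # ns^-1
            return 1.0 / rate

        for r in (0.3, 0.5, 1.0):
            print(f"R = {r:.1f} nm -> tau = {tao_eldrup_lifetime(r):.2f} ns")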

  4. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance that leads to filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
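
    The joint state-parameter update with kernel-smoothed parameter evolution can be sketched compactly. Below is a hedged, Liu-West-style illustration of one SEnKF assimilation step, not the authors' exact algorithm: model and h_obs are hypothetical forward-model and observation operators, and the smoothing factor delta controls the shrinkage that curbs both sudden parameter jumps and variance collapse.

        import numpy as np

        rng = np.random.default_rng(0)

        def senkf_step(states, params, obs, obs_err, model, h_obs, delta=0.98):
            """One smoothed-EnKF step: kernel-smooth the parameter ensemble,
            forecast, then update the augmented [state, params] vector."""
            n_ens, n_state = states.shape
            # --- kernel smoothing of parameters (Liu-West style) ---
            a = (3.0 * delta - 1.0) / (2.0 * delta)
            mean = params.mean(axis=0)
            cov = np.atleast_2d(np.cov(params.T))
            params = a * params + (1.0 - a) * mean + rng.multivariate_normal(
                np.zeros(params.shape[1]), (1.0 - a ** 2) * cov, n_ens)
            # --- ensemble forecast with each member's own parameters ---
            states = np.array([model(s, p) for s, p in zip(states, params)])
            # --- joint EnKF analysis on z = [state, params] ---
            Z = np.hstack([states, params])
            Y = np.array([h_obs(s) for s in states])        # predicted observations
            Zc, Yc = Z - Z.mean(0), Y - Y.mean(0)
            Pzy = Zc.T @ Yc / (n_ens - 1)
            Pyy = Yc.T @ Yc / (n_ens - 1) + obs_err * np.eye(Y.shape[1])
            K = Pzy @ np.linalg.inv(Pyy)
            perturbed = obs + rng.normal(0.0, np.sqrt(obs_err), Y.shape)
            Z = Z + (perturbed - Y) @ K.T
            return Z[:, :n_state], Z[:, n_state:]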

  5. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling that uses multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedure, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial-guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for computing the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs; the error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted and the iterative procedure repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan, for the period January 1 to December 2, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters decreased the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This demonstrates that the iterative EOF-based approach can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
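
    The EOF machinery itself is compact: the spatial patterns and expansion coefficients the method combines can be obtained from a singular value decomposition of the storage-hydrograph anomaly matrix. A minimal sketch, with hydrographs as a hypothetical (times x wells) matrix:

        import numpy as np

        def eof_decompose(hydrographs, n_modes):
            """Return leading EOF spatial patterns, expansion coefficients,
            and the fraction of variance each mode explains."""
            X = hydrographs - hydrographs.mean(axis=0)   # remove temporal mean
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            eofs = Vt[:n_modes]                  # spatial patterns (modes x wells)
            pcs = U[:, :n_modes] * s[:n_modes]   # expansion coeffs (times x modes)
            explained = s[:n_modes] ** 2 / np.sum(s ** 2)
            return eofs, pcs, explained

        # A correction field can then be rebuilt from a chosen subset of modes of
        # the error storage hydrographs (observed-derived minus simulated):
        def reconstruct(eofs, pcs):
            return pcs @ eofs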

  6. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
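
    A minimal worked example of the approach, using a beta-binomial empirical Bayes model: the beta hyperparameters are estimated from all species jointly by maximizing the marginal likelihood, and each observed proportion is then shrunk toward the overall mean in inverse relation to its sample size. The counts below are made up for illustration, not the study's data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import betaln

        k = np.array([1, 9, 3, 0, 12, 5])      # parasitized nests per species
        n = np.array([10, 12, 20, 5, 15, 25])  # nests sampled per species

        def neg_marginal_loglik(log_ab):
            a, b = np.exp(log_ab)              # log scale enforces positivity
            return -np.sum(betaln(k + a, n - k + b) - betaln(a, b))

        res = minimize(neg_marginal_loglik, x0=np.log([1.0, 1.0]),
                       method="Nelder-Mead")
        a, b = np.exp(res.x)

        # Posterior-mean estimates: raw proportions shrunk toward the overall
        # mean, with small samples adjusted the most.
        eb_estimates = (k + a) / (n + a + b)
        print(np.round(eb_estimates, 3), "prior mean:", round(a / (a + b), 3))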

  7. The heuristic value of redundancy models of aging.

    PubMed

    Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon

    2015-11-01

    Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal-level processes remains a challenge. We propose that complementary top-down, data-directed modelling of organismal-level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight into the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
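
    The mechanism can be illustrated with a Monte Carlo sketch of a simple redundancy model: an organism with m critical blocks, each containing n redundant elements that fail independently at a constant rate, dies when any block loses all of its elements. The parameter values are hypothetical; the point is the shape of the resulting mortality trajectory (a steep rise while redundancy is being exhausted, then a late-life plateau).

        import numpy as np

        rng = np.random.default_rng(1)

        def lifespans(m=10, n=4, lam=0.05, n_org=200_000):
            t_fail = rng.exponential(1.0 / lam, size=(n_org, m, n))
            block_death = t_fail.max(axis=2)   # block dies when last element fails
            return block_death.min(axis=1)     # organism dies at first block loss

        t = lifespans()
        edges = np.arange(0.0, t.max(), 2.0)
        alive = np.array([(t > e).mean() for e in edges])
        # Central estimate of the mortality rate (hazard) from the survivorship:
        hazard = -np.diff(np.log(alive)) / np.diff(edges)
        print(np.round(hazard[:10], 4))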

  8. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    USGS Publications Warehouse

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single-collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model-data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured the mean bacteria transport behavior and calculated an envelope of uncertainty that bracketed the observations in most simulation cases. © 2007 American Chemical Society.
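
    The coupling between local hydraulic properties and colloid transport used here rests on the standard colloid filtration theory expression for the first-order attachment rate; a sketch follows, with the single-collector contact efficiency eta0 supplied externally (e.g., from the Rajagopalan-Tien or Tufenkji-Elimelech correlations, which are not reproduced here). The numbers are illustrative, not the Cape Cod values.

        # Colloid filtration theory: k_att = 3 (1 - theta) / (2 d_c) * alpha * eta0 * v
        def attachment_rate(theta, d_c, alpha, eta0, v):
            """First-order attachment rate (1/s).

            theta : porosity; d_c : collector (grain) diameter, m;
            alpha : attachment (sticking) efficiency;
            eta0  : single-collector contact efficiency;
            v     : seepage velocity, m/s.
            """
            return 1.5 * (1.0 - theta) / d_c * alpha * eta0 * v

        # Hypothetical illustrative values:
        print(attachment_rate(theta=0.39, d_c=5.9e-4, alpha=0.01, eta0=0.02, v=1e-5))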

  9. Modeling of the Dorsal Gradient across Species Reveals Interaction between Embryo Morphology and Toll Signaling Pathway during Evolution

    PubMed Central

    Koslen, Hannah R.; Chiel, Hillel J.; Mizutani, Claudia Mieko

    2014-01-01

    Morphogenetic gradients are essential to allocate cell fates in embryos of varying sizes within and across closely related species. We previously showed that the maternal NF-κB/Dorsal (Dl) gradient has acquired different shapes in Drosophila species, which result in unequally scaled germ layers along the dorso-ventral axis and the repositioning of the neuroectodermal borders. Here we combined experimentation and mathematical modeling to investigate which factors might have contributed to the fast evolutionary changes of this gradient. To this end, we modified a previously developed model that employs differential equations of the main biochemical interactions of the Toll (Tl) signaling pathway, which regulates Dl nuclear transport. The original model simulations fit well the D. melanogaster wild type, but not mutant conditions. To broaden the applicability of this model and probe evolutionary changes in gradient distributions, we adjusted a set of 19 independent parameters to reproduce three quantified experimental conditions (i.e. Dl levels lowered, nuclear size and density increased or decreased). We next searched for the most relevant parameters that reproduce the species-specific Dl gradients. We show that adjusting parameters relative to morphological traits (i.e. embryo diameter, nuclear size and density) alone is not sufficient to reproduce the species Dl gradients. Since components of the Tl pathway simulated by the model are fast-evolving, we next asked which parameters related to Tl would most effectively reproduce these gradients and identified a particular subset. A sensitivity analysis reveals the existence of nonlinear interactions between the two fast-evolving traits tested above, namely the embryonic morphological changes and Tl pathway components. Our modeling further suggests that distinct Dl gradient shapes observed in closely related melanogaster sub-group lineages may be caused by similar sequence modifications in Tl pathway components, which are in agreement with their phylogenetic relationships. PMID:25165818

  10. A temperature-dependent coarse-grained model for the thermoresponsive polymer poly(N-isopropylacrylamide).

    PubMed

    Abbott, Lauren J; Stevens, Mark J

    2015-12-28

    A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.

  11. On the nature of the cosmic ray positron spectrum

    NASA Technical Reports Server (NTRS)

    Protheroe, R. J.

    1981-01-01

    A calculation was made of the flux of secondary positrons above 100 MeV expected for various propagation models. The models investigated were the leaky box or homogeneous model, a disk-halo diffusion model, a dynamical halo model, and the closed galaxy model. In each case the parameters of these models were adjusted for agreement with the observed secondary-to-primary ratios and the 10Be abundance. The positron flux predicted for these models was compared with the available data. The possibility of a primary positron component was considered.

  12. Mathematical Storage-Battery Models

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.; Aston, M.

    1985-01-01

    Empirical formula represents performance of electrical storage batteries. Formula covers many battery types and includes numerous coefficients adjusted to fit peculiarities of each type. Battery and load parameters taken into account include power density in battery, discharge time, and electrolyte temperature. Applications include electric-vehicle "fuel" gages and powerline load leveling.

  13. Near Real-Time Event Detection & Prediction Using Intelligent Software Agents

    DTIC Science & Technology

    2006-03-01

    value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or... slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling... appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite

  14. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset, and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) using a radial basis function kernel are employed to model the distortions measured for the Olympus E10 camera system with an Olympus aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
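
    In outline, the approach trains one SVR per distortion component on image coordinates and then subtracts the predicted distortion from measured points. The sketch below uses synthetic radial-distortion data in place of the calibration measurements; the names and constants are illustrative only.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        xy = rng.uniform(-1.0, 1.0, size=(500, 2))          # normalized coordinates
        r2 = (xy ** 2).sum(axis=1)
        dx = 0.1 * xy[:, 0] * r2 + rng.normal(0, 1e-3, 500) # synthetic radial term
        dy = 0.1 * xy[:, 1] * r2 + rng.normal(0, 1e-3, 500)

        # One RBF-kernel regressor per distortion component:
        svr_x = SVR(kernel="rbf", C=10.0, epsilon=1e-3).fit(xy, dx)
        svr_y = SVR(kernel="rbf", C=10.0, epsilon=1e-3).fit(xy, dy)

        pts = np.array([[0.5, -0.25], [0.9, 0.9]])
        corrected = pts - np.column_stack([svr_x.predict(pts), svr_y.predict(pts)])
        print(corrected)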

  15. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data associated with the complete solution and requires them to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  16. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.

  17. Carbon dioxide emission prediction using support vector machine

    NASA Astrophysics Data System (ADS)

    Saleh, Chairul; Rachman Dzakiyullah, Nur; Bayu Nugroho, Jonathan

    2016-02-01

    In this paper, an SVM model is proposed to predict carbon dioxide (CO2) emissions. Energy consumption, in the form of electrical energy and burned coal, is the input variable that directly drives increasing CO2 emissions, and was used to build the model. Our objective is to monitor CO2 emissions based on the electrical energy and coal used in the production process. The data on electrical energy and coal consumption were obtained from the alcohol industry in order to train and test the models, and were divided by a cross-validation technique into 90% training data and 10% testing data. The optimal parameters of the SVM model were found by a trial-and-error approach, adjusting the C and epsilon parameters; the results show that the SVM model is optimal at C = 0.1 and epsilon = 0. Model error was measured using the root mean square error (RMSE), giving an error value of 0.004; a smaller error represents a more accurate prediction. In practice, this paper contributes to executive managers' effective decision-making for business operations by monitoring CO2 emissions.
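
    The tuning loop the abstract describes (90/10 split, trial adjustment of C and epsilon, RMSE scoring) is conveniently expressed as a grid search; a hedged sketch with synthetic stand-in data follows, since the actual plant data are not available here.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(200, 2))   # e.g., electricity and coal use
        y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.01, 200)  # CO2 proxy

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                                  random_state=0)
        grid = GridSearchCV(
            SVR(kernel="rbf"),
            {"C": [0.01, 0.1, 1.0, 10.0], "epsilon": [0.0, 0.001, 0.01, 0.1]},
            scoring="neg_root_mean_squared_error", cv=5,
        ).fit(X_tr, y_tr)

        rmse = np.sqrt(np.mean((grid.predict(X_te) - y_te) ** 2))
        print(grid.best_params_, f"test RMSE = {rmse:.4f}")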

  18. A model for the formation of the Local Group

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Melott, A. L.; Holmes, M. R.; Jiang, L. R.

    1989-01-01

    Observational tests of a model for the formation of the Local Group are presented and analyzed in which the mass concentration grows by gravitational accretion of local-pressure matter onto two seed masses in an otherwise homogeneous initial mass distribution. The evolution of the mass distribution is studied in an analytic approximation and a numerical computation. The initial seed mass and separation are adjusted to produce the observed present separation and relative velocity of the Andromeda Nebula and the Galaxy. If H(0) is adjusted to about 80 km/s/Mpc with density parameter Omega = 1, then the model gives a good fit to the motions of the outer members of the Local Group. The same model gives particle orbits at radius of about 100 kpc that reasonably approximate the observed distribution of redshifts of the Galactic satellites.

  19. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, obliged us to adopt a sum-of-Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  20. Skipping on uneven ground: trailing leg adjustments simplify control and enhance robustness.

    PubMed

    Müller, Roy; Andrada, Emanuel

    2018-01-01

    It is known that humans intentionally choose skipping in special situations, e.g. when descending stairs or when moving in environments with lower gravity than on Earth. Although those situations involve uneven locomotion, the dynamics of human skipping on uneven ground have not yet been addressed. To find the reasons that may motivate this gait, we combined experimental data on humans with numerical simulations on a bipedal spring-loaded inverted pendulum model (BSLIP). To drive the model, the following parameters were estimated from nine subjects skipping across a single drop in ground level: leg lengths at touchdown, leg stiffness of both legs, aperture angle between legs, trailing leg angle at touchdown (leg landing first after flight phase), and trailing leg retraction speed. We found that leg adjustments in humans occur mostly in the trailing leg (low to moderate leg retraction during swing phase, reduced trailing leg stiffness, and flatter trailing leg angle at lowered touchdown). When transferring these leg adjustments to the BSLIP model, the capacity of the model to cope with sudden-drop perturbations increased.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ünlü, Hilmi, E-mail: hunlu@itu.edu.tr

    We propose a non-orthogonal sp³ hybrid bond orbital model to determine the electronic properties of semiconductor heterostructures. The model considers the non-orthogonality of sp³ hybrid states of nearest-neighboring adjacent atoms, using intra-atomic Coulomb-interaction-corrected Hartree-Fock atomic energies and a metallic contribution to calculate the valence band width energies of group IV elemental and group III-V and II-VI compound semiconductors without any adjustable parameter.

  2. Polymer density functional theory approach based on scaling second-order direct correlation function.

    PubMed

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resultant scaling second-order DCF is in better agreement with corresponding simulation results than the unscaled second-order DCF. When the scaling second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally associated adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., when the adjustable parameter is fixed at 0.5, the resultant parameter-free version of the scaling LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version of the scaling LTDFA-based polymer DFT is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a variably sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical way; in the resultant formalism, the contact theorem is still met by the adjustable parameter associated with the theoretical way.

  3. Fluidized-bed reactor modeling for production of silicon by silane pyrolysis

    NASA Technical Reports Server (NTRS)

    Dudukovic, M. P.; Ramachandran, P. A.; Lai, S.

    1986-01-01

    An ideal backmixed reactor model (CSTR) and a fluidized bed bubbling reactor model (FBBR) were developed for silane pyrolysis. Silane decomposition is assumed to occur via two pathways: homogeneous decomposition and heterogeneous chemical vapor deposition (CVD). Both models account for homogeneous and heterogeneous silane decomposition, homogeneous nucleation, coagulation and growth of fines by diffusion, scavenging of fines by large particles, elutriation of fines, and CVD growth of large seed particles. At present the models do not account for attrition. A preliminary comparison of the model predictions with experimental results shows reasonable agreement. The CSTR model, with no adjustable parameter, yields a lower bound on fines formed and an upper estimate of production rates. The FBBR model overpredicts the formation of fines but could be matched to experimental data by adjusting the unknown jet-emulsion exchange coefficients. The models clearly indicate that in order to suppress the formation of fines (smoke), good gas-solid contacting in the grid region must be achieved and the formation of bubbles suppressed.

  4. Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration

    NASA Astrophysics Data System (ADS)

    Wells, B.; Toniolo, H. A.; Stuefer, S. L.

    2015-12-01

    Understanding hydrologic changes is key to preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet, but also a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers event-based modeling package HEC-HMS. With the use of historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during the calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and can help guide communities and officials to make better-educated decisions regarding the changing hydrology in the area and the tied economic drivers.

  5. Box-modeling of bone and tooth phosphate oxygen isotope compositions as a function of environmental and physiological parameters.

    PubMed

    Langlois, C; Simon, L; Lécuyer, Ch

    2003-12-01

    A time-dependent box model is developed to calculate the oxygen isotope composition of bone phosphate as a function of environmental and physiological parameters. Input and output oxygen fluxes related to the body water and bone reservoirs are scaled to body mass. The oxygen fluxes are evaluated by stoichiometric scaling to the calcium accretion and resorption rates, assuming a pure hydroxylapatite composition for the bone and tooth mineral. The model shows how diet composition, body mass, ambient relative humidity, and temperature may control the oxygen isotope composition of bone phosphate. The model also computes how bones and teeth record short-term variations in relative humidity, air temperature, and the δ18O of drinking water, depending on body mass. The documented diversity of oxygen isotope fractionation equations for vertebrates is accounted for by our model when the physiological and diet parameters of each specimen are adjusted within the living range of environmental conditions.

  6. Evaluation of the VIIRS BRDF, Albedo and NBAR products suite and an assessment of continuity with the long term MODIS record

    USDA-ARS?s Scientific Manuscript database

    Bidirectional Reflectance Distribution Function (BRDF) model parameters, Albedo quantities, and Nadir BRDF Adjusted Reflectance (NBAR) products derived from the Visible Infrared Imaging Radiometer Suite (VIIRS), on the Suomi-NPP (National Polar-orbiting Partnership) satellite are evaluated through c...

  7. QCD Sum Rules and Models for Generalized Parton Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anatoly Radyushkin

    2004-10-01

    I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs, and two models are discussed. One of them takes the double Borel transform at an adjusted value of the Borel parameter as a model for nonforward parton densities, and another is based on the local duality relation. Possible ways of improving these Ansätze are briefly discussed.

  8. Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil

    NASA Astrophysics Data System (ADS)

    Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo

    2013-04-01

    The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997, and has gone through upgrades over the years. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides, so accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting: it is driven by the Eta-15km, which in its turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region, and increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme; a precipitation overestimate still occurs and further tests are still necessary. The increase of horizontal resolution to 1 km required adjusting the model diffusion parameters and refining the divergence calculations. The limited availability of observations in the region for a thorough evaluation is a major constraint.

  9. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error, which may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in this real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  10. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that the user begin the calibration process with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.
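
    For orientation, the extended Prout-Tompkins model being calibrated is an autocatalytic rate law, commonly written dα/dt = k(T) (1 - α)^n α^m with an Arrhenius k(T). The sketch below integrates it for hypothetical parameter values; EZM/GLO calibration would adjust A, E, n, and m to match experimental data.

        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314  # J/(mol K)

        def rate(t, alpha, A, E, T, n, m):
            """Extended Prout-Tompkins rate with Arrhenius k(T)."""
            k = A * np.exp(-E / (R * T))
            return k * (1.0 - alpha) ** n * alpha ** m

        # Small alpha0 seeds the autocatalysis; parameters are hypothetical.
        sol = solve_ivp(rate, (0.0, 3600.0), [1e-4],
                        args=(1e10, 1.2e5, 500.0, 1.0, 1.0), dense_output=True)
        for t in (600, 1800, 3600):
            print(f"t = {t:5d} s  alpha = {sol.sol(t)[0]:.4f}")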

  11. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
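
    The Hooke and Jeeves algorithm referenced above is a derivative-free pattern search, well suited to cases where performance can only be measured by simulation or experiment. A minimal variation (exploratory coordinate moves followed by a pattern move, with step shrinking on failure) is sketched below on a toy objective.

        import numpy as np

        def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                # exploratory moves around the current base point
                best, fbest = x.copy(), fx
                for i in range(len(x)):
                    for d in (step, -step):
                        trial = best.copy()
                        trial[i] += d
                        ft = f(trial)
                        if ft < fbest:
                            best, fbest = trial, ft
                            break
                if fbest < fx:
                    # pattern move: extrapolate along the improving direction
                    pattern = best + (best - x)
                    x, fx = best, fbest
                    fp = f(pattern)
                    if fp < fx:
                        x, fx = pattern, fp
                else:
                    step *= shrink          # no improvement: refine the mesh
                    if step < tol:
                        break
            return x, fx

        # e.g., minimize a measured performance index (here a toy quadratic):
        x_opt, f_opt = hooke_jeeves(
            lambda p: (p[0] - 1) ** 2 + 10 * (p[1] + 2) ** 2, [0.0, 0.0])
        print(x_opt, f_opt)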

  12. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  13. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid when the covariance matrix of prior parameters is singular, and it provides a unique solution; statements in the literature to the effect that the problem is ill-posed are wrong, and no regularization of the problem is required. (author)
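
    The point is easy to verify numerically: the adjustment update x' = x + C A^T (A C A^T + V)^(-1) (y - A x) inverts only the response-space matrix A C A^T + V, never the prior covariance C itself, so a singular C poses no difficulty. A small sketch with a deliberately rank-deficient C (illustrative numbers only):

        import numpy as np

        x = np.array([1.0, 2.0, 0.5])                  # prior parameters
        C = np.array([[0.04, 0.02, 0.0],
                      [0.02, 0.01, 0.0],               # 2x2 block is rank-1, so
                      [0.0,  0.0,  0.09]])             # C is singular (rank 2)
        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 2.0]])                # response sensitivities
        V = np.diag([0.01, 0.01])                      # measurement covariance
        y = np.array([3.3, 3.1])                       # measured responses

        S = A @ C @ A.T + V                            # invertible despite C
        K = C @ A.T @ np.linalg.inv(S)
        x_adj = x + K @ (y - A @ x)                    # adjusted parameters
        C_adj = C - K @ A @ C                          # adjusted covariance
        print(x_adj)
        print(np.linalg.matrix_rank(C), "->", np.linalg.matrix_rank(C_adj))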

  14. Observational Constraints on Models of the Universe with Time Variable Gravitational and Cosmological Constants Along MOG

    NASA Astrophysics Data System (ADS)

    Khurshudyan, M.; Mazhari, N. S.; Momeni, D.; Myrzakulov, R.; Raza, M.

    2015-02-01

    The subject of this paper is to investigate the weak-regime covariant scalar-tensor-vector gravity (STVG) theory, known as the MOdified Gravity (MOG) theory. First, we show that MOG in the absence of scalar fields reduces to Λ(t), G(t) models. The time evolution of the cosmological parameters has been investigated for a family of viable models, and the numerical results have been fitted to cosmological data. We have introduced a model for the dark energy (DE) density and the cosmological constant that involves first-order derivatives of the Hubble parameter. To extend this model, correction terms including the gravitational constant are added. In our scenario, the cosmological constant is a function of time. To complete the model, interaction terms between dark energy and dark matter (DM) are entered manually in phenomenological form. Instead of using the dust model for DM, we propose DM equivalent to a barotropic fluid, so the time evolution of DM is a function of other cosmological parameters. Using sophisticated algorithms, the behavior of various quantities, including the densities, the Hubble parameter, etc., has been investigated graphically. The statefinder parameters have been used for the classification of DE models. Consistency of the numerical results with the experimental data of SNe Ia + BAO + CMB is studied by numerical analysis with high accuracy.

  15. Determination of adsorption parameters in numerical simulation for polymer flooding

    NASA Astrophysics Data System (ADS)

    Bao, Pengyu; Li, Aifen; Luo, Shuai; Dang, Xu

    2018-02-01

    A study on the determination of adsorption parameters for polymer flooding simulation was carried out, covering both static and dynamic polymer adsorption. The way the adsorbed amount changes with polymer concentration and core permeability was characterized, and a one-dimensional CMG numerical model was established with the support of a large body of experimental data. The adsorption laws from the experiments were applied to the one-dimensional numerical model to compare the influence of the two adsorption laws on the history matching results. The results show that static adsorption and dynamic adsorption obey different laws and differ greatly in adsorbed amount. If the static adsorption results were applied directly to the numerical model, the difficulty of the history matching would increase. Therefore, dynamic adsorption tests in the porous medium are necessary before the parameter adjustment process in order to achieve an ideal history matching result.

  16. Adaptive control of Parkinson's state based on a nonlinear computational model with unknown parameters.

    PubMed

    Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan

    2015-02-01

    The objective here is to explore the use of an adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of the unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of the unknown parameters. Our findings point to the potential value of an adaptive control approach that could be used to regulate the DBS waveform for more effective treatment of PD.

  17. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    A time-varying design flood makes full use of the measured data and can provide a reservoir with the basis for both flood control and operational scheduling. This paper adopts the peak-over-threshold (POT) method for flood sampling in unit periods and a Poisson process with time-dependent parameters for simulating a reservoir's time-varying design flood. Considering the relationship between the model parameters and the model hypotheses, the paper takes the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters as the criteria for selecting the unit period and threshold of the time-varying design flood, and derives the Longyangxia reservoir time-varying design flood processes at nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions and can be used to adjust the operating water level in the flood season and to plan the utilization of flood water resources in the basin.
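
    The sampling scheme can be illustrated compactly: count peaks over a threshold in each unit period across years, giving a time-dependent Poisson intensity per period. The sketch below uses synthetic daily flows; the threshold and period choices are exactly the modeling decisions the paper's criteria are designed to guide.

        import numpy as np

        rng = np.random.default_rng(0)
        n_years, days = 30, 365
        flows = rng.gamma(2.0, 50.0, size=(n_years, days))
        season = 1.0 + 0.8 * np.sin(2.0 * np.pi * (np.arange(days) - 120) / days)
        flows *= season                       # wet-season inflation

        threshold = np.quantile(flows, 0.98)  # POT threshold (a modeling choice)
        periods = 12                          # unit periods (~months)
        edges = np.linspace(0, days, periods + 1).astype(int)

        # lam[p]: mean number of exceedances per unit period, i.e., the
        # time-dependent Poisson parameter for each period.
        lam = np.array([
            (flows[:, a:b] > threshold).sum() / n_years
            for a, b in zip(edges[:-1], edges[1:])
        ])
        print(np.round(lam, 2))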

  18. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells; they are therefore regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early on, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so that no calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using a PSO algorithm, and to test and improve its performance; second, to explore the possibility of improving the flood forecasting capability of such models by parameter optimization. Based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for its parameter optimization; the improvements are the adoption of a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus demonstrating that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
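
    The optimization core, PSO with the linearly decreasing inertia weight named above, can be sketched as follows. The toy objective stands in for the Liuxihe model calibration error, and the arccosine acceleration-coefficient schedule is omitted; particle and iteration counts mirror the values the study reports as suitable.

        # PSO with a linearly decreasing inertia weight on a toy objective.
        import numpy as np

        rng = np.random.default_rng(1)
        n_particles, n_dims, n_iter = 20, 5, 30
        w_start, w_end, c1, c2 = 0.9, 0.4, 2.0, 2.0

        def objective(x):
            # Stand-in for the model calibration error.
            return np.sum(x ** 2, axis=-1)

        pos = rng.uniform(-5.0, 5.0, (n_particles, n_dims))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for it in range(n_iter):
            w = w_start - (w_start - w_end) * it / (n_iter - 1)   # linear decrease
            r1 = rng.uniform(size=pos.shape)
            r2 = rng.uniform(size=pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            val = objective(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print(f"best calibration error: {pbest_val.min():.4f}")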

  19. The Influence of Sediment Isostatic Adjustment on Sea-Level Change and Land Motion along the US Gulf Coast

    NASA Astrophysics Data System (ADS)

    Kuchar, J.; Milne, G. A.; Wolstencroft, M.; Love, R.; Tarasov, L.; Hijma, M.

    2017-12-01

    Sea level rise presents a hazard for coastal populations, and the Mississippi Delta (MD) is a region particularly at risk due to its high rates of land subsidence. We apply a gravitationally self-consistent model of glacial and sediment isostatic adjustment (SIA), along with a realistic sediment load reconstruction, in this region for the first time to determine isostatic contributions to relative sea level (RSL) and land motion. We determine optimal model parameters (Earth rheology and ice history) using a new high-quality compaction-free sea level indicator database and a parameter space of four ice histories and 400 Earth rheologies. Using the optimal model parameters, we show that SIA is capable of lowering predicted RSL in the MD area by several metres over the Holocene and so should be taken into account when modelling these data. We compare modelled contemporary rates of vertical land motion with those inferred using GPS. This comparison indicates that isostatic processes can explain the majority of the observed vertical land motion north of latitude 30.7°N, where subsidence rates average about 1 mm/yr; however, vertical rates south of this latitude show large data-model discrepancies of greater than 3 mm/yr, indicating the importance of non-isostatic processes controlling the observed subsidence. This discrepancy extends to contemporary RSL change, where we find that the SIA contribution in the Delta is on the order of 10⁻¹ mm per year. We provide estimates of the isostatic contributions to 20th and 21st century sea level rates at Gulf Coast PSMSL tide gauge locations as well as vertical and horizontal land motion at GPS station locations near the Mississippi Delta.

  20. Hybrid intelligent methodology to design translation invariant morphological operators for Brazilian stock market prediction.

    PubMed

    Araújo, Ricardo de A

    2010-12-01

    This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High frequency stimulation (HFS) has a confirmed ability to modulate pathological neural activities; however, its detailed mechanism is unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with spike-time-dependent plasticity (STDP)-modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in the static two-neuron networks, synaptic current is still projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting-rate parameter on the inactivation ratio and synchrony degree increase with increasing HFS intensity. However, only when the HFS intensity becomes very large can the STDP time-window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductance.

  2. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    PubMed

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
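
    For context, the sketch below assembles Peng-Robinson mixture parameters under the classical van der Waals mixing rules with a binary interaction parameter kij. The CO2/n-butane pair and the kij value are illustrative placeholders, not the correlation developed in the paper.

        # Peng-Robinson pure-component parameters and van der Waals mixing rules.
        import numpy as np

        R = 8.314  # J/(mol K)

        def pr_pure(Tc, Pc, omega, T):
            # Pure-component PR a(T) and b from critical constants.
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
            a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
            b = 0.07780 * R * Tc / Pc
            return a, b

        def vdw_mix(a, b, x, kij):
            # One-fluid mixing rules with binary interaction parameters kij.
            n = len(x)
            a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                        for i in range(n) for j in range(n))
            b_mix = sum(x[i] * b[i] for i in range(n))
            return a_mix, b_mix

        T = 350.0                                    # K
        a1, b1 = pr_pure(304.1, 7.38e6, 0.225, T)    # CO2 critical data
        a2, b2 = pr_pure(425.1, 3.80e6, 0.199, T)    # n-butane critical data
        kij = [[0.0, 0.13], [0.13, 0.0]]             # illustrative value
        a_mix, b_mix = vdw_mix([a1, a2], [b1, b2], [0.4, 0.6], kij)
        print(f"a_mix = {a_mix:.3f} Pa m^6/mol^2, b_mix = {b_mix:.3e} m^3/mol")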

  3. Retrieving the optical parameters of biological tissues using diffuse reflectance spectroscopy and Fourier series expansions. I. theory and application.

    PubMed

    Muñoz Morales, Aarón A; Vázquez Y Montiel, Sergio

    2012-10-01

    The determination of optical parameters of biological tissues is essential for the application of optical techniques in the diagnosis and treatment of diseases. Diffuse reflectance spectroscopy is a widely used technique to analyze the optical characteristics of biological tissues. In this paper we show that, using diffuse reflectance spectra and a new mathematical model, we can retrieve the optical parameters by fitting the data with nonlinear least squares. In our model we represent the spectra by a Fourier series expansion, finding mathematical relations between the series coefficients and the optical parameters. In this first paper we use spectra generated by the Monte Carlo Multilayered Technique to simulate the propagation of photons in turbid media. Using these spectra we determine the behavior of the Fourier series coefficients as the optical parameters of the medium under study are varied. With this procedure we find mathematical relations between Fourier series coefficients and optical parameters. Finally, the results show that our method can retrieve the optical parameters of biological tissues with accuracy that is adequate for medical applications.
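
    A minimal version of the fitting step, representing a synthetic reflectance spectrum by a truncated Fourier series and estimating the coefficients by least squares, might look like this. The series length, period, and spectrum are assumptions for illustration, and the paper's mapping from coefficients to optical parameters is not reproduced.

        # Fit a truncated Fourier series to a (synthetic) reflectance spectrum.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)
        wl = np.linspace(500.0, 900.0, 200)            # wavelength grid, nm
        refl = (0.4 + 0.1 * np.cos(2 * np.pi * (wl - 500.0) / 400.0)
                + 0.02 * rng.normal(size=wl.size))     # synthetic spectrum

        def fourier(c, x, period=400.0):
            # Truncated Fourier series with 3 harmonics on [500, 900] nm.
            y = c[0] * np.ones_like(x)
            for k in range(1, 4):
                arg = 2 * np.pi * k * (x - 500.0) / period
                y += c[2 * k - 1] * np.cos(arg) + c[2 * k] * np.sin(arg)
            return y

        fit = least_squares(lambda c: fourier(c, wl) - refl, x0=np.zeros(7))
        print("fitted Fourier coefficients:", np.round(fit.x, 3))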

  4. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
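
    The underlying idea can be sketched as a Monte Carlo estimate of the probability that the design criteria hold under nonadjustable-parameter uncertainty, maximized by random search over the adjustable parameters. The toy performance index and distributions below are assumptions, not the paper's application.

        # Random search maximizing the probability of meeting design criteria.
        import numpy as np

        rng = np.random.default_rng(3)

        def meets_criteria(adj, nonadj):
            # Toy performance index: must stay below 1 for the design to pass.
            return adj[0] * nonadj[0] + adj[1] * nonadj[1] ** 2 < 1.0

        def success_prob(adj, n=2000):
            # Sample the k = 2 nonadjustable parameters from their uncertainty.
            nonadj = rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.1], size=(n, 2))
            return np.mean([meets_criteria(adj, s) for s in nonadj])

        best, best_p = None, -1.0
        for _ in range(200):                 # crude random search over j = 2 knobs
            cand = rng.uniform(0.0, 1.0, size=2)
            p = success_prob(cand)
            if p > best_p:
                best, best_p = cand, p

        print(f"best adjustable parameters {np.round(best, 3)}, P(pass) = {best_p:.3f}")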

  5. Understanding heart rate alarm adjustment in the intensive care units through an analytical approach.

    PubMed

    Fidler, Richard L; Pelter, Michele M; Drew, Barbara J; Palacios, Jorge Arroyo; Bai, Yong; Stannard, Daphne; Aldrich, J Matt; Hu, Xiao

    2017-01-01

    Heart rate (HR) alarms are prevalent in the ICU, and their parameters are configurable. Little is known about nursing behavior associated with tailoring HR alarm parameters to individual patients to reduce clinical alarm fatigue. The objective was to understand the relationship between HR alarms and the adjustments made to reduce unnecessary HR alarms. This was a retrospective, quantitative analysis of an adjudicated database, using analytical approaches to understand behaviors surrounding HR alarm parameter adjustments. Patients were sampled from five adult ICUs (77 beds) over one month at a quaternary care university medical center. A total of 337 of 461 ICU patients had HR alarms, with 53.7% male, mean age 60.3 years, and 39% non-Caucasian. Default HR alarm parameters were 50 and 130 beats per minute (bpm). The occurrence of each alarm, vital signs, and physiologic waveforms were stored in a relational database (SQL Server). There were 23,624 HR alarms for analysis, with 65.4% exceeding the upper heart rate limit. Only 51% of patients with HR alarms had parameters adjusted, with a median upper limit change of +5 bpm and a median lower limit change of -1 bpm. The median time to first HR parameter adjustment was 17.9 hours, without a reduction in alarm occurrence (p = 0.57). HR alarms are prevalent in the ICU, and half of HR alarm settings remain at default. There is a long delay between HR alarms and parameter changes, with insufficient changes to decrease HR alarms. Increasing frequency of HR alarms shortens the time to first adjustment. Best practice guidelines for HR alarm limits are needed to reduce alarm fatigue and improve monitoring precision.

  6. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  7. Does weather affect daily pain intensity levels in patients with acute low back pain? A prospective cohort study.

    PubMed

    Duong, Vicky; Maher, Chris G; Steffens, Daniel; Li, Qiang; Hancock, Mark J

    2016-05-01

    The aim of this study was to investigate the influence of various weather parameters on pain intensity levels in patients with acute low back pain (LBP). We performed a secondary analysis using data from the PACE trial that evaluated paracetamol (acetaminophen) in the treatment of acute LBP. Data on 1604 patients with LBP were included in the analysis. Weather parameters (precipitation, temperature, relative humidity, and air pressure) were obtained from the Australian Bureau of Meteorology. Pain intensity was assessed daily on a 0-10 numerical pain rating scale over a 2-week period. A generalised estimating equation analysis was used to examine the relationship between daily pain intensity levels and weather in three different time epochs (current day, previous day, and change between previous and current days). A second model was adjusted for important back pain prognostic factors. The analysis did not show any association between weather and pain intensity levels in patients with acute LBP in each of the time epochs. There was no change in strength of association after the model was adjusted for prognostic factors. Contrary to common belief, the results demonstrated that the weather parameters of precipitation, temperature, relative humidity, and air pressure did not influence the intensity of pain reported by patients during an episode of acute LBP.

  8. Cosmological implications of a large complete quasar sample

    PubMed Central

    Segal, I. E.; Nicoll, J. F.

    1998-01-01

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423–1460]. The Expanding Universe model as represented by the Friedman–Lemaitre cosmology with parameters qo = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude–redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar “evolution,” which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182

  9. The Association between Bone Quality and Atherosclerosis: Results from Two Large Population-Based Studies

    PubMed Central

    Lange, V.; Dörr, M.; Schminke, U.; Völzke, H.; Nauck, M.; Wallaschofski, H.

    2017-01-01

    Objective It is highly debated whether associations between osteoporosis and atherosclerosis are independent of cardiovascular risk factors. We aimed to explore the associations between quantitative ultrasound (QUS) parameters at the heel with the carotid artery intima-media thickness (IMT), the presence of carotid artery plaques, and the ankle-brachial index (ABI). Methods The study population comprised 5680 men and women aged 20–93 years from two population-based cohort studies: Study of Health in Pomerania (SHIP) and SHIP-Trend. QUS measurements were performed at the heel. The extracranial carotid arteries were examined with B-mode ultrasonography. ABI was measured in a subgroup of 3853 participants. Analyses of variance and linear and logistic regression models were calculated and adjusted for major cardiovascular risk factors. Results Men but not women had significantly increased odds for carotid artery plaques with decreasing QUS parameters independent of diabetes mellitus, dyslipidemia, and hypertension. Beyond this, the QUS parameters were not significantly associated with IMT or ABI in fully adjusted models. Conclusions Our data argue against an independent role of bone metabolism in atherosclerotic changes in women. Yet, in men, associations with advanced atherosclerosis exist. Thus, men presenting with clinical signs of osteoporosis may be at increased risk for atherosclerotic disease. PMID:28852407

  10. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study.

    PubMed

    Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne

    2016-11-03

    Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study in which lipid measurements and myocardial infarction data events were collected over a period of 26 years.

  11. Numerical Simulation of Roller Levelling using SIMULIA Abaqus

    NASA Astrophysics Data System (ADS)

    Trusov, K. A.; Mishnev, P. A.; Kopaev, O. V.; Nushtaev, D. V.

    2017-12-01

    A two-dimensional finite element (FE) model of the roller levelling process is developed in SIMULIA Abaqus. The objective of this paper is the development of the FE model and the investigation of the adjustable parameters of the roller leveller together with elastic-plastic material behaviour. Properties of the material were determined experimentally. After levelling, the strip retains a residual stress distribution. The longitudinal bow after cutting is predicted as well. Recommendations for practical use are proposed.

  12. Vehicle Concept Model Abstractions for Integrated Geometric, Inertial, Rigid Body, Powertrain, and FE Analysis

    DTIC Science & Technology

    2011-01-01

    refinement of the vehicle body structure through quantitative assessment of stiffness and modal parameter changes resulting from modifications to the beam... differential placed on the axle, adjustment of the torque output to the opposite wheel may be required to obtain the correct solution. Thus... represented by simple inertial components with appropriate model connectivity instead to determine the free modal response of powertrain type

  13. Interpolating between random walks and optimal transportation routes: Flow with multiple sources and targets

    NASA Astrophysics Data System (ADS)

    Guex, Guillaume

    2016-05-01

    In recent articles about graphs, different models proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. Again with a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.

  14. A temperature-dependent coarse-grained model for the thermoresponsive polymer poly(N-isopropylacrylamide)

    DOE PAGES

    Abbott, Lauren J.; Stevens, Mark J.

    2015-12-22

    In this study, a coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil–globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.

  15. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system

    PubMed Central

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-01-01

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods. PMID:28515537

  17. SU-G-IeP4-03: Cone Beam X-Ray Luminescence Computed Tomography Based On Generalized Gaussian Markov Random Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G; Xing, L

    2016-06-15

    Purpose: Cone beam X-ray luminescence computed tomography (CB-XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. However, the inverse problem of CB-XLCT is seriously ill-conditioned, hindering good image quality. In this work, a novel reconstruction method based on Bayesian theory is proposed to tackle this problem. Methods: Bayesian theory provides a natural framework for utilizing various kinds of available prior information to improve reconstruction image quality. A generalized Gaussian Markov random field (GGMRF) model is proposed here to construct the prior model of the Bayesian framework. The most important feature of the GGMRF model is the adjustable shape parameter p, which can be continuously varied from 1 to 2. The reconstructed image tends to be more edge-preserving as p approaches 1 and more noise-tolerant as p approaches 2, mirroring the behavior of L1 and L2 regularization methods, respectively. The proposed method thus provides a flexible regularization framework that can adapt to a wide range of applications. Results: Numerical simulations were implemented to test the performance of the proposed method. The Digimouse atlas was employed to construct a three-dimensional mouse model, and two small cylinders were placed inside to serve as the targets. Reconstruction results show that the proposed method tends to obtain better spatial resolution with a smaller shape parameter and a better signal-to-noise ratio with a larger shape parameter. Quantitative indexes, contrast-to-noise ratio (CNR) and full-width at half-maximum (FWHM), were used to assess the performance of the proposed method and confirmed its effectiveness in CB-XLCT reconstruction. Conclusion: A novel reconstruction method for CB-XLCT is proposed based on the GGMRF model, which enables an adjustable performance tradeoff between L1 and L2 regularization methods. Numerical simulations were conducted to demonstrate its performance.
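
    The role of the shape parameter p can be illustrated on a tiny synthetic inverse problem: a least-squares data term plus a generalized-Gaussian (p-norm) prior, minimized by gradient descent. The forward operator, weight, and step size below are assumptions; this is not the CB-XLCT forward model.

        # p-norm regularized least squares: p near 1 is edge-preserving/sparse,
        # p = 2 is the smoother, noise-tolerant Tikhonov-like limit.
        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(40, 80))                  # underdetermined forward operator
        x_true = np.zeros(80)
        x_true[[10, 50]] = 1.0                         # two small "targets"
        y = A @ x_true + 0.01 * rng.normal(size=40)

        def reconstruct(p, lam=0.05, step=1e-3, n_iter=5000, eps=1e-8):
            # Gradient descent on ||A x - y||^2 / 2 + lam * sum_i |x_i|^p.
            x = np.zeros(80)
            for _ in range(n_iter):
                grad_fit = A.T @ (A @ x - y)
                grad_prior = lam * p * (np.abs(x) + eps) ** (p - 1) * np.sign(x)
                x -= step * (grad_fit + grad_prior)
            return x

        for p in (1.1, 2.0):
            xr = reconstruct(p)
            print(f"p = {p}: reconstruction error = {np.linalg.norm(xr - x_true):.3f}")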

  18. Predicting Salt Permeability Coefficients in Highly Swollen, Highly Charged Ion Exchange Membranes.

    PubMed

    Kamcev, Jovan; Paul, Donald R; Manning, Gerald S; Freeman, Benny D

    2017-02-01

    This study presents a framework for predicting salt permeability coefficients in ion exchange membranes in contact with an aqueous salt solution. The model, based on the solution-diffusion mechanism, was tested using experimental salt permeability data for a series of commercial ion exchange membranes. Equilibrium salt partition coefficients were calculated using a thermodynamic framework (i.e., Donnan theory), incorporating Manning's counterion condensation theory to calculate ion activity coefficients in the membrane phase and the Pitzer model to calculate ion activity coefficients in the solution phase. The model predicted NaCl partition coefficients in a cation exchange membrane and two anion exchange membranes, as well as MgCl2 partition coefficients in a cation exchange membrane, remarkably well at higher external salt concentrations (>0.1 M) and reasonably well at lower external salt concentrations (<0.1 M) with no adjustable parameters. Membrane ion diffusion coefficients were calculated using a combination of the Mackie and Meares model, which assumes ion diffusion in water-swollen polymers is affected by a tortuosity factor, and a model developed by Manning to account for electrostatic effects. Agreement between experimental and predicted salt diffusion coefficients was good with no adjustable parameters. Calculated salt partition and diffusion coefficients were combined within the framework of the solution-diffusion model to predict salt permeability coefficients. Agreement between model and experimental data was remarkably good. Additionally, a simplified version of the model was used to elucidate connections between membrane structure (e.g., fixed charge group concentration) and salt transport properties.

  19. Calibration and LOD/LOQ estimation of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs expressed in E. coli using a four-parameter logistic model.

    PubMed

    Lee, K R; Dipaolo, B; Ji, X

    2000-06-01

    Calibration is the process of fitting a model to reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In the DNA assay, x is the concentration and y is the measured signal volume. A four-parameter logistic model has frequently been used for the calibration of immunoassays when the response is optical density, as in enzyme-linked immunosorbent assay (ELISA), or adjusted radioactivity count, as in radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for the calculation of performance measures of the assay.
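
    A minimal version of such a calibration, fitting a 4PL curve and inverting it for an unknown concentration, is sketched below. The calibrator data are synthetic and the parameterization is one common 4PL form; the assay's actual calibrators and response units are not reproduced.

        # Fit a four-parameter logistic calibration curve, then invert it.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def four_pl(x, a, b, c, d):
            # Response vs. concentration: a = low asymptote, d = high asymptote,
            # c = midpoint concentration, b = slope factor.
            return d + (a - d) / (1.0 + (x / c) ** b)

        conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # pg (hypothetical)
        sig = np.array([120.0, 180.0, 310.0, 720.0, 1350.0, 2400.0, 3100.0, 3500.0])

        popt, _ = curve_fit(four_pl, conc, sig, p0=(100.0, 1.0, 10.0, 3600.0), maxfev=10000)

        y_new = 1500.0                       # new measured signal volume
        x_new = brentq(lambda x: four_pl(x, *popt) - y_new, 0.5, 100.0)
        print(f"estimated concentration: {x_new:.2f} pg")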

  1. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.

  2. Optical phantoms with adjustable subdiffusive scattering parameters

    NASA Astrophysics Data System (ADS)

    Krauter, Philipp; Nothelfer, Steffen; Bodenschatz, Nico; Simon, Emanuel; Stocker, Sabrina; Foschum, Florian; Kienle, Alwin

    2015-10-01

    A new epoxy-resin-based optical phantom system with adjustable subdiffusive scattering parameters is presented, along with measurements of the intrinsic absorption, scattering, fluorescence, and refractive index of the matrix material. Both an aluminium oxide powder and a titanium dioxide dispersion were used as scattering agents, and we present measurements of their scattering and reduced scattering coefficients. A method is described theoretically for mixing both scattering agents to obtain continuously adjustable anisotropy values g between 0.65 and 0.9 and values of the phase function parameter γ in the range of 1.4 to 2.2. Furthermore, we show absorption spectra for a set of pigments that can be added to achieve particular absorption characteristics. Additional analysis of aging yields a fully characterized phantom system, with the novel feature of adjustable g and γ parameters.

  3. Theoretical study of hydrogen absorption-desorption on LaNi3.8Al1.2-xMnx using statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Bouaziz, Nadia; Ben Manaa, Marwa; Ben Lamine, Abdelmottaleb

    2017-11-01

    The hydrogen absorption-desorption isotherms on the LaNi3.8Al1.2-xMnx alloy at temperature T = 433 K are studied through various theoretical models. The analytical expressions of these models were derived using the grand canonical ensemble of statistical physics under some simplifying hypotheses. Among these models, the one showing the best correlation with the experimental curves was selected. The physicochemical parameters intervening in the absorption-desorption processes and appearing in the model expression could be deduced directly from the experimental isotherms by numerical fitting. Six parameters of the model are adjusted, namely the numbers of hydrogen atoms per site n1 and n2, the receptor site densities N1m and N2m, and the energetic parameters P1 and P2. The behaviors of these parameters are discussed in relation to the absorption and desorption processes to better understand and compare the two phenomena. From the energetic parameters we calculated the sorption energies, which typically range between 266 and 269.4 kJ/mol for the absorption process and between 267 and 269.5 kJ/mol for the desorption process, comparable to usual chemical bond energies. Using the adopted model expression, the thermodynamic potential functions that govern the absorption/desorption process, such as the internal energy Eint, the Gibbs free enthalpy G, and the entropy Sa, are derived.

  4. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  5. Numerical investigation of compaction of deformable particles with bonded-particle model

    NASA Astrophysics Data System (ADS)

    Dosta, Maksym; Costa, Clara; Al-Qureshi, Hazim

    2017-06-01

    In this contribution, a novel approach for the microscale modelling of particles undergoing large deformations is presented. The proposed method is based on the bonded-particle model (BPM) and a multi-stage strategy to adjust material and model parameters. In the BPM, modelled objects are represented as agglomerates consisting of smaller, ideally spherical particles connected by cylindrical solid bonds. Each bond is treated as a separate object, and in each time step the forces and moments acting in it are calculated. The developed approach has been applied to simulate the compaction of elastomeric rubber particles, both as single particles and in a random packing. To describe the complex mechanical behaviour of the particles, the solid bonds were modelled as ideally elastic beams. The functional parameters of the solid bonds as well as the material parameters of bonds and primary particles were estimated from experimental data for rubber spheres. The obtained results for the acting force and the particle deformations during uniaxial compression are in good agreement with experimental data at higher strains.

  6. Renal mass anatomic characteristics and perioperative outcomes of laparoscopic partial nephrectomy: a critical analysis.

    PubMed

    Tsivian, Matvey; Ulusoy, Said; Abern, Michael; Wandel, Ayelet; Sidi, A Ami; Tsivian, Alexander

    2012-10-01

    Anatomic parameters determining renal mass complexity have been used in a number of proposed scoring systems despite lack of a critical analysis of their independent contributions. We sought to assess the independent contribution of anatomic parameters on perioperative outcomes of laparoscopic partial nephrectomy (LPN). Preoperative imaging studies were reviewed for 147 consecutive patients undergoing LPN for a single renal mass. Renal mass anatomy was recorded: Size, growth pattern (endo-/meso-/exophytic), centrality (central/hilar/peripheral), anterior/posterior, lateral/medial, polar location. Multivariable models were used to determine associations of anatomic parameters with warm ischemia time (WIT), operative time (OT), estimated blood loss (EBL), intra- and postoperative complications, as well as renal function. All models were adjusted for the learning curve and relevant confounders. Median (range) tumor size was 3.3 cm (1.5-11 cm); 52% were central and 14% hilar. While 44% were exophytic, 23% and 33% were mesophytic and endophytic, respectively. Anatomic parameters did not uniformly predict perioperative outcomes. WIT was associated with tumor size (P=0.068), centrality (central, P=0.016; hilar, P=0.073), and endophytic growth pattern (P=0.017). OT was only associated with tumor size (P<0.001). No anatomic parameter predicted EBL. Tumor centrality increased the odds of overall and intraoperative complications, without reaching statistical significance. Postoperative renal function was not associated with any of the anatomic parameters considered after adjustment for baseline function and WIT. Learning curve, considered as a confounder, was independently associated with reduced WIT and OT as well as reduced odds of intraoperative complications. This study provides a detailed analysis of the independent impact of renal mass anatomic parameters on perioperative outcomes. Our findings suggest diverse independent contributions of the anatomic parameters to the different measures of outcomes (WIT, OT, EBL, complications, and renal function) emphasizing the importance of the learning curve.

  7. Three-dimensional interpretation of TEM soundings

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

    We describe the approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of the sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. The practical examples of the suggested method are presented.

  8. Covariate-adjusted Spearman's rank correlation with probability-scale residuals.

    PubMed

    Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E

    2018-06-01

    It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey. © 2017, The International Biometric Society.
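
    A bare-bones version of the partial estimator can be sketched as follows, assuming for simplicity that X given Z and Y given Z are fitted with normal linear models, so the PSR is 2*F(residual) - 1. The data are synthetic, and the paper's semiparametric cumulative probability models are not used here.

        # Partial Spearman's correlation as the correlation of probability-scale
        # residuals (PSRs) from models of X on Z and of Y on Z.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n = 500
        z = rng.normal(size=n)
        x = 0.8 * z + rng.normal(size=n)               # X depends on Z
        y = 0.8 * z + 0.5 * x + rng.normal(size=n)     # Y depends on Z and X

        def psr_normal(resp, covar):
            # PSR = 2*F(obs | covar) - 1 under an OLS fit with normal errors.
            X = np.column_stack([np.ones_like(covar), covar])
            beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
            resid = resp - X @ beta
            return 2.0 * stats.norm.cdf(resid / resid.std(ddof=2)) - 1.0

        r_partial = np.corrcoef(psr_normal(x, z), psr_normal(y, z))[0, 1]
        print(f"partial Spearman (PSR-based): {r_partial:.3f}")
        print(f"unadjusted Spearman:          {stats.spearmanr(x, y).correlation:.3f}")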

  9. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  10. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    NASA Astrophysics Data System (ADS)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
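
    The filtering ingredient can be sketched in one dimension: solve (1 - L^2 d^2/dx^2) u_f = u on a periodic grid via FFT and take u - u_f as a proxy for the subgrid-scale velocity. The velocity field, the grid, and the fixed filter width L are assumptions; the paper's dynamic determination of the width is not reproduced.

        # Elliptic differential filter on a periodic 1D grid.
        import numpy as np

        N, domain = 256, 2.0 * np.pi
        x = np.linspace(0.0, domain, N, endpoint=False)
        rng = np.random.default_rng(6)
        u = np.sin(x) + 0.3 * np.sin(8 * x) + 0.1 * rng.normal(size=N)   # "resolved" field

        def elliptic_filter(u, L):
            # Solve (1 - L^2 d2/dx2) u_f = u: divide by (1 + L^2 k^2) in Fourier space.
            k = 2.0 * np.pi * np.fft.fftfreq(N, d=domain / N)
            return np.fft.ifft(np.fft.fft(u) / (1.0 + (L * k) ** 2)).real

        L = 0.2                                # nominal filter width (fixed here)
        u_f = elliptic_filter(u, L)
        u_sgs = u - u_f                        # modeled subgrid-scale velocity
        print(f"resolved rms {u.std():.3f}, subgrid rms {u_sgs.std():.3f}")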

  11. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.

  12. Development of a hydraulic model of the human systemic circulation

    NASA Technical Reports Server (NTRS)

    Sharp, M. K.; Dharmalingham, R. K.

    1999-01-01

    Physical and numeric models of the human circulation are constructed for a number of objectives, including studies and training in physiologic control, interpretation of clinical observations, and testing of prosthetic cardiovascular devices. For many of these purposes it is important to quantitatively validate the dynamic response of the models in terms of the input impedance (Z = oscillatory pressure/oscillatory flow). To address this need, the authors developed an improved physical model. Using a computer study, the authors first identified the configuration of lumped parameter elements in a model of the systemic circulation; the result was a good match with human aortic input impedance with a minimum number of elements. Design, construction, and testing of a hydraulic model analogous to the computer model followed. Numeric results showed that a three element model with two resistors and one compliance produced reasonable matching without undue complication. The subsequent analogous hydraulic model included adjustable resistors incorporating a sliding plate to vary the flow area through a porous material and an adjustable compliance consisting of a variable-volume air chamber. The response of the hydraulic model compared favorably with other circulation models.
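
    For reference, the input impedance of a three-element configuration with two resistors and one compliance, the arrangement the numeric study selected, can be written and evaluated as below. The parameter values are textbook-scale guesses, not the model's fitted values.

        # Input impedance Z(f) of a three-element windkessel: a characteristic
        # resistance in series with a parallel peripheral resistance/compliance.
        import numpy as np

        Rc, Rp, C = 0.05, 1.0, 1.5    # mmHg s/mL, mmHg s/mL, mL/mmHg (illustrative)

        def impedance(f):
            # Z(f) = Rc + Rp / (1 + j*2*pi*f*Rp*C)
            jw = 2j * np.pi * f
            return Rc + Rp / (1.0 + jw * Rp * C)

        for f in (0.0, 1.0, 5.0, 10.0):        # mean flow and heart-rate harmonics, Hz
            Z = impedance(f)
            print(f"f = {f:5.1f} Hz: |Z| = {abs(Z):.3f}, "
                  f"phase = {np.degrees(np.angle(Z)):6.1f} deg")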

  13. The effect of zero-point energy differences on the isotope dependence of the formation of ozone: a classical trajectory study.

    PubMed

    Schinke, Reinhard; Fleurat-Lessard, Paul

    2005-03-01

    The effect of zero-point energy differences (ΔZPE) between the possible fragmentation channels of highly excited O3 complexes on the isotope dependence of the formation of ozone is investigated by means of classical trajectory calculations and a strong-collision model. ΔZPE is incorporated in the calculations in a phenomenological way by adjusting the potential energy surface in the product channels so that the correct exothermicities and endothermicities are matched. The model contains two parameters, the frequency of stabilizing collisions ω and an energy-dependent parameter Δdamp, which favors the lower energies in the Maxwell-Boltzmann distribution. The stabilization frequency is used to adjust the pressure dependence of the absolute formation rate while Δdamp is utilized to control its isotope dependence. The calculations for several isotope combinations of oxygen atoms show a clear dependence of relative formation rates on ΔZPE. The results are similar to those of Gao and Marcus [J. Chem. Phys. 116, 137 (2002)] obtained within a statistical model. In particular, as in the statistical approach, an ad hoc parameter η ≈ 1.14, which effectively reduces the formation rates of the symmetric ABA ozone molecules, has to be introduced in order to obtain good agreement with the measured relative rates of Janssen et al. [Phys. Chem. Chem. Phys. 3, 4718 (2001)]. The temperature dependence of the recombination rate is also addressed.

  14. Wire rope tension control of hoisting systems using a robust nonlinear adaptive backstepping control scheme.

    PubMed

    Zhu, Zhen-Cai; Li, Xiang; Shen, Gang; Zhu, Wei-Dong

    2018-01-01

    This paper concerns wire rope tension control of a double-rope winding hoisting system (DRWHS), which consists of a hoisting system employed to realize a transportation function and an electro-hydraulic servo system utilized to adjust wire rope tensions. A dynamic model of the DRWHS is developed in which parameter uncertainties and external disturbances are considered. A comparison between simulation results using the dynamic model and experimental results using a double-rope winding hoisting experimental system is given in order to demonstrate accuracy of the dynamic model. In order to improve the wire rope tension coordination control performance of the DRWHS, a robust nonlinear adaptive backstepping controller (RNABC) combined with a nonlinear disturbance observer (NDO) is proposed. Main features of the proposed combined controller are: (1) using the RNABC to adjust wire rope tensions with consideration of parameter uncertainties, whose parameters are designed online by adaptive laws derived from Lyapunov stability theory to guarantee the control performance and stability of the closed-loop system; and (2) introducing the NDO to deal with uncertain external disturbances. In order to demonstrate feasibility and effectiveness of the proposed controller, experimental studies have been conducted on the DRWHS controlled by an xPC rapid prototyping system. Experimental results verify that the proposed controller exhibits excellent performance on wire rope tension coordination control compared with a conventional proportional-integral (PI) controller and adaptive backstepping controller. Copyright © 2017 ISA. All rights reserved.

  15. Regional regression of flood characteristics employing historical information

    USGS Publications Warehouse

    Tasker, Gary D.; Stedinger, J.R.

    1987-01-01

    Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100-year peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model. © 1987.
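
    To make the estimator concrete, here is a minimal numerical sketch of GLS regression in which the error covariance simply down-weights short-record stations; the covariance structure, basin characteristic, and coefficients are invented stand-ins for the full cross-correlated error model (and for the historical-peak adjustment) described above.

```python
# GLS sketch: beta = (X' W X)^{-1} X' W y with W = Lambda^{-1}. Historical
# peaks above a threshold would enter by enlarging a station's effective
# record length, shrinking its variance in Lambda.
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.uniform(1.0, 3.0, n)])  # e.g., log drainage area
beta_true = np.array([1.0, 0.8])
record_years = rng.integers(10, 80, n)      # systematic record length per station
Lambda = np.diag(1.0 / record_years)        # simplified sampling covariance
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Lambda)

W = np.linalg.inv(Lambda)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta_gls)                             # close to beta_true
```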

  16. Method of control position of laser focus during surfacing teeth of cutters

    NASA Astrophysics Data System (ADS)

    Zvezdin, V. V.; Hisamutdinov, R. M.; Rakhimov, R. R.; Israfilov, I. H.; Akhtiamov, R. F.

    2017-09-01

    Ensuring the quality of laser surfacing of cutter tooth edges requires control not only of the radiation energy parameters but also of the position of the focal spot. In this work, the control channel for the laser focus position during surfacing, which determines the quality parameters of the deposited layer, was calculated. The parameters of the active opto-electronic system for the subsystem that adjusts the focus position relative to the deposited layer, using laser illumination of the cutting edges of the cutter teeth, were calculated, and a model of the control channel based on thermal phenomena occurring in the surfacing zone was proposed.

  17. The ITSG-Grace2014 Gravity Field Model

    NASA Astrophysics Data System (ADS)

    Kvas, Andreas; Mayer-Gürr, Torsten; Zehenter, Norbert; Klinger, Beate

    2015-04-01

    The ITSG-Grace2014 GRACE-only gravity field model consists of a high-resolution unconstrained static model (up to degree 200) with trend and annual signal, monthly unconstrained solutions with different spatial resolutions, as well as daily snapshots derived by using a Kalman smoother. Apart from the estimated spherical harmonic coefficients, full variance-covariance matrices for the monthly solutions and the static gravity field component are provided. Compared to the previous release, multiple improvements in the processing chain are implemented: updated background models, better ionospheric modeling for GPS observations, an improved satellite attitude by combination of star camera and angular accelerations, estimation of K-band antenna center variations within the gravity field recovery process, as well as error covariance function determination. Furthermore, daily gravity field variations have been modeled in the adjustment process to reduce errors caused by temporal leakage. This combined estimation of daily gravity field variations together with the static gravity field component represents a computational challenge due to the significantly increased parameter count. The modeling of daily variations up to a spherical harmonic degree of 40 for the whole GRACE observation period results in a system of linear equations with over 6 million unknown gravity field parameters. A least squares adjustment of this size is not solvable in a sensible time frame; therefore, measures to reduce the problem size have to be taken. The ITSG-Grace2014 release is presented, and selected parts of the processing chain and their effect on the estimated gravity field solutions are discussed.

  18. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.

  19. Understanding heart rate alarm adjustment in the intensive care units through an analytical approach

    PubMed Central

    Pelter, Michele M.; Drew, Barbara J.; Palacios, Jorge Arroyo; Bai, Yong; Stannard, Daphne; Aldrich, J. Matt; Hu, Xiao

    2017-01-01

    Background Heart rate (HR) alarms are prevalent in the ICU, and these parameters are configurable. Little is known about nursing behavior associated with tailoring HR alarm parameters to individual patients to reduce clinical alarm fatigue. Objectives To understand the relationship between HR alarms and parameter adjustments made to reduce unnecessary alarms. Methods Retrospective, quantitative analysis of an adjudicated database using analytical approaches to understand behaviors surrounding HR alarm parameter adjustments. Patients were sampled from five adult ICUs (77 beds) over one month at a quaternary care university medical center. A total of 337 of 461 ICU patients had HR alarms; 53.7% were male, mean age was 60.3 years, and 39% were non-Caucasian. Default HR alarm parameters were 50 and 130 beats per minute (bpm). The occurrence of each alarm, vital signs, and physiologic waveforms were stored in a relational database (SQL Server). Results There were 23,624 HR alarms for analysis, with 65.4% exceeding the upper heart rate limit. Only 51% of patients with HR alarms had parameters adjusted, with a median upper limit change of +5 bpm and a median lower limit change of -1 bpm. The median time to first HR parameter adjustment was 17.9 hours, without reduction in alarm occurrence (p = 0.57). Conclusions HR alarms are prevalent in the ICU, and half of HR alarm settings remain at default. There is a long delay between HR alarms and parameter changes, with insufficient changes to decrease HR alarms. Increasing frequency of HR alarms shortens the time to first adjustment. Best practice guidelines for HR alarm limits are needed to reduce alarm fatigue and improve monitoring precision. PMID:29176776

  20. Development and Validation of an Agency for Healthcare Research and Quality Indicator for Mortality After Congenital Heart Surgery Harmonized With Risk Adjustment for Congenital Heart Surgery (RACHS-1) Methodology.

    PubMed

    Jenkins, Kathy J; Koch Kupiec, Jennifer; Owens, Pamela L; Romano, Patrick S; Geppert, Jeffrey J; Gauvreau, Kimberlee

    2016-05-20

    The National Quality Forum previously approved a quality indicator for mortality after congenital heart surgery developed by the Agency for Healthcare Research and Quality (AHRQ). Several parameters of the validated Risk Adjustment for Congenital Heart Surgery (RACHS-1) method were included, but others differed. As part of the National Quality Forum endorsement maintenance process, developers were asked to harmonize the 2 methodologies. Parameters that were identical between the 2 methods were retained. AHRQ's Healthcare Cost and Utilization Project State Inpatient Databases (SID) 2008 were used to select optimal parameters where differences existed, with a goal to maximize model performance and face validity. Inclusion criteria were not changed and included all discharges for patients <18 years with International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes for congenital heart surgery or nonspecific heart surgery combined with congenital heart disease diagnosis codes. The final model includes procedure risk group, age (0-28 days, 29-90 days, 91-364 days, 1-17 years), low birth weight (500-2499 g), other congenital anomalies (Clinical Classifications Software 217, except for 758.xx), multiple procedures, and transfer-in status. Among 17 945 eligible cases in the SID 2008, the c statistic for model performance was 0.82. In the SID 2013 validation data set, the c statistic was 0.82. Risk-adjusted mortality rates by center ranged from 0.9% to 4.1% (5th-95th percentile). Congenital heart surgery programs can now obtain national benchmarking reports by applying AHRQ Quality Indicator software to hospital administrative data, based on the harmonized RACHS-1 method, with high discrimination and face validity. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  1. Retrieval of cloud cover parameters from multispectral satellite images

    NASA Technical Reports Server (NTRS)

    Arking, A.; Childs, J. D.

    1985-01-01

    A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature, and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particles and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with the cloud fraction for each pixel adjusted to achieve maximal clustering.

  2. PID feedback controller used as a tactical asset allocation technique: The G.A.M. model

    NASA Astrophysics Data System (ADS)

    Gandolfi, G.; Sabatini, A.; Rossolini, M.

    2007-09-01

    The objective of this paper is to illustrate a tactical asset allocation technique utilizing the PID controller. The proportional-integral-derivative (PID) controller is widely applied in most industrial processes; it has been successfully used for over 50 years and is employed in more than 95% of process plants. It is a robust and easily understood algorithm that can provide excellent control performance in spite of the diverse dynamic characteristics of the process plant. In finance, the process plant controlled by the PID controller can be represented by financial market assets forming a portfolio. More specifically, in the present work, the plant is represented by a risk-adjusted return variable. Money and portfolio managers' main target is to achieve a relevant risk-adjusted return in their management activities. In the literature and in the financial industry, numerous kinds of return/risk ratios are commonly studied and used. The aim of this work is to develop a tactical asset allocation technique consisting of the optimization of risk-adjusted return by means of asset allocation methodologies based on the PID model-free feedback control procedure. The process plant does not need to be mathematically modeled: the PID control action lies in altering the portfolio asset weights, according to the PID algorithm and its Ziegler-Nichols-tuned parameters, in order to approach the desired portfolio risk-adjusted return efficiently.
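
    To make the control analogy concrete, the toy sketch below lets a PID loop nudge a portfolio weight toward a target risk-adjusted return; the gains, return process, and risk measure are illustrative assumptions rather than the G.A.M. model's actual specification (in particular, the gains here are not Ziegler-Nichols-tuned).

```python
# PID sketch: the "plant" is a toy risky-asset return stream; the controller
# moves the risky weight w so a crude risk-adjusted measure approaches target.
import numpy as np

rng = np.random.default_rng(1)
target = 0.5                        # desired risk-adjusted return (illustrative)
Kp, Ki, Kd = 0.4, 0.05, 0.1         # proportional, integral, derivative gains
w, integral, prev_err = 0.5, 0.0, 0.0

for t in range(250):
    ret = rng.normal(0.01 * w, 0.02)       # toy return, scaled by risky weight
    measured = ret / 0.02                  # return over volatility (toy measure)
    err = target - measured
    integral += err
    w += Kp * err + Ki * integral + Kd * (err - prev_err)  # PID weight update
    w = min(max(w, 0.0), 1.0)              # portfolio weight stays in [0, 1]
    prev_err = err
print(f"final risky-asset weight: {w:.2f}")
```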

  3. Modeled Urea Distribution Volume and Mortality in the HEMO Study

    PubMed Central

    Greene, Tom; Depner, Thomas A.; Levin, Nathan W.; Chertow, Glenn M.

    2011-01-01

    Summary Background and objectives In the Hemodialysis (HEMO) Study, observed small decreases in achieved equilibrated Kt/Vurea were noncausally associated with markedly increased mortality. Here we examine the association of mortality with modeled volume (Vm), the denominator of equilibrated Kt/Vurea. Design, setting, participants, & measurements Parameters derived from modeled urea kinetics (including Vm) and blood pressure (BP) were obtained monthly in 1846 patients. Case mix–adjusted time-dependent Cox regressions were used to relate the relative mortality hazard at each time point to Vm and to the change in Vm over the preceding 6 months. Mixed effects models were used to relate Vm to changes in intradialytic systolic BP and to other factors at each follow-up visit. Results Mortality was associated with Vm and change in Vm over the preceding 6 months. The association between change in Vm and mortality was independent of vascular access complications. In contrast, mortality was inversely associated with V calculated from anthropometric measurements (Vant). In case mix–adjusted analysis using Vm as a time-dependent covariate, the association of mortality with Vm strengthened after statistical adjustment for Vant. After adjustment for Vant, higher Vm was associated with slightly smaller reductions in intradialytic systolic BP and with risk factors for mortality including recent hospitalization and reductions in serum albumin concentration and body weight. Conclusions An increase in Vm is a marker for illness and mortality risk in hemodialysis patients. PMID:21511841

  4. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  5. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
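
    The core of such automated validation is a least-squares search over model input parameters; the sketch below shows the pattern with a two-parameter toy "thermal model" standing in for a full spacecraft solver (the temperatures, sensor locations, and model form are invented for illustration).

```python
# Automated calibration sketch: adjust model parameters so predictions match
# test data in a least-squares sense, replacing manual expert tuning.
import numpy as np
from scipy.optimize import least_squares

test_temps = np.array([120.0, 95.0, 80.0, 72.0])   # measured temperatures (K)
locations = np.array([0.0, 1.0, 2.0, 3.0])         # sensor positions (toy units)

def thermal_model(params, x):
    conductance, sink = params                     # parameters to be tuned
    return sink + (120.0 - sink) * np.exp(-conductance * x)

def residuals(params):
    return thermal_model(params, locations) - test_temps

fit = least_squares(residuals, x0=[0.5, 60.0])     # automated optimum search
print(fit.x, fit.cost)                             # tuned parameters, final misfit
```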

  6. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
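
    A minimal sketch of the sensitivity-gated update logic reads roughly as follows; the map keys, gain, and threshold are hypothetical, chosen only to illustrate the claimed control flow.

```python
# Sensitivity-gated map update: a look-up-table cell is corrected only when
# the performance variable is sufficiently sensitive to that parameter.
def update_maps(map1, map2, key, perf_error, sens1, sens2,
                threshold=0.1, gain=0.5):
    """Nudge the two look-up tables so the performance error shrinks."""
    if abs(sens1) > threshold:                 # parameter 1 matters here
        map1[key] -= gain * perf_error / sens1
    if abs(sens2) > threshold:                 # parameter 2 matters here
        map2[key] -= gain * perf_error / sens2
    return map1, map2

spark_map = {(2000, 50): 18.0}                 # (rpm, load) -> spark advance
egr_map = {(2000, 50): 12.0}                   # (rpm, load) -> EGR rate
update_maps(spark_map, egr_map, (2000, 50),
            perf_error=0.3, sens1=0.8, sens2=0.05)  # only the spark map changes
print(spark_map, egr_map)
```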

  7. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  8. The contribution of NOAA/CMDL ground-based measurements to understanding long-term stratospheric changes

    NASA Astrophysics Data System (ADS)

    Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.

    2005-05-01

    The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
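
    A minimal sketch of EKF parameter estimation follows: the unknown coupling coefficient is appended to the state vector and inferred from noisy observations. The scalar dynamics are a toy stand-in for the coupled ENSO model, and all noise levels are assumed.

```python
# Joint state/parameter EKF: augmented state z = [x, mu], observing x only.
import numpy as np

rng = np.random.default_rng(2)
mu_true, x_true, obs = 0.9, 1.0, []
for _ in range(500):                           # simulate noisy observations
    x_true = mu_true * x_true + rng.normal(0, 0.05)
    obs.append(x_true + rng.normal(0, 0.1))

z = np.array([1.0, 0.5])                       # initial guesses for [x, mu]
P = np.eye(2)
Q = np.diag([0.05**2, 1e-4])                   # small drift lets mu adapt
R = 0.1**2
H = np.array([[1.0, 0.0]])                     # measurement picks out x
for y in obs:
    x, mu = z
    z_pred = np.array([mu * x, mu])            # predict
    F = np.array([[mu, x], [0.0, 1.0]])        # Jacobian of the dynamics
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # innovation variance
    K = P @ H.T / S                            # Kalman gain
    z = z_pred + (K * (y - z_pred[0])).ravel() # state/parameter update
    P = (np.eye(2) - K @ H) @ P
print(f"estimated mu: {z[1]:.3f} (true {mu_true})")
```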

  9. Electrostatically frequency tunable micro-beam-based piezoelectric fluid flow energy harvester

    NASA Astrophysics Data System (ADS)

    Rezaee, Mousa; Sharafkhani, Naser

    2017-07-01

    This research investigates the dynamic behavior of a sandwich micro-beam-based piezoelectric energy harvester with electrostatically adjustable resonance frequency. The system consists of a cantilever micro-beam immersed in a fluid domain and subjected to the simultaneous action of cross fluid flow and a nonlinear electrostatic force. Two parallel piezoelectric laminates extend along the length of the micro-beam and connect to an external electric circuit, which generates output power as a result of the micro-beam oscillations. The fluid-coupled structure is modeled using Euler-Bernoulli beam theory and equivalent force terms for the fluid flow. The fluid-induced forces comprise the added inertia force, which is evaluated using an equivalent added mass, and the drag and lift forces, which are evaluated using the relative velocity and the Van der Pol equation. In addition to flow velocity and fluid density, the influence of several design parameters such as external electrical resistance, piezo layer position, and dc voltage on the generated power is investigated using the Galerkin and step-by-step linearization methods. It is shown that, for given fluid parameters (density and velocity), one can adjust the applied dc voltage to tune the resonance frequency so that the lock-in phenomenon with steady large-amplitude oscillations occurs; moreover, by adjusting the harvester's mechanical and electrical parameters, the maximal output power of the harvester can be reached.

  10. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been applied clinically. Recently, more and more cases with positive therapeutic effects from transcranial light-emitting diode (LED) illumination have been reported. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of the subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software provides on/off control of each LED array, the setup of illumination parameters, and a 3D view of the LLLT light dose distribution in the human subject according to the illumination setup. This light dose distribution was computed by a Monte Carlo model for voxelized media with the Visible Chinese Human head dataset and displayed in 3D against the anatomical structure of the head. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter-adjustable and illumination-distribution-visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.

  11. Mice take calculated risks.

    PubMed

    Kheifets, Aaron; Gallistel, C R

    2012-05-29

    Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain.

  12. Mice take calculated risks

    PubMed Central

    Kheifets, Aaron; Gallistel, C. R.

    2012-01-01

    Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain. PMID:22592792

  13. Application of a compressible flow solver and barotropic cavitation model for the evaluation of the suction head in a low specific speed centrifugal pump impeller channel

    NASA Astrophysics Data System (ADS)

    Limbach, P.; Müller, T.; Skoda, R.

    2015-12-01

    Commonly, incompressible flow solvers with VOF-type cavitation models are applied for the simulation of cavitation in centrifugal pumps. Since the source/sink terms of the void fraction transport equation are based on simplified bubble dynamics, empirical parameters may need to be adjusted to the particular pump operating point. In the present study, a barotropic cavitation model, which is based solely on thermodynamic fluid properties and does not include any empirical parameters, is applied to a single flow channel of a pump impeller in combination with a time-explicit viscous compressible flow solver. The suction head curves (head drop) are compared to the results of an incompressible implicit standard industrial CFD tool and are predicted qualitatively correctly by the barotropic model.

  14. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often obscure model intercomparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation, and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations is needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations while achieving an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after the introduction of new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving the parameterization packages of global climate models.
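
    In one dimension the metamodel idea reduces to fitting a parabola through a handful of expensive runs and jumping to its vertex; the sketch below illustrates this with an invented error function standing in for a regional climate model run.

```python
# Quadratic metamodel sketch: a few sampled runs yield a cheap surrogate
# whose minimum approximates the optimal parameter setting.
import numpy as np

def expensive_model_error(p):                  # stand-in for one RCM simulation
    return (p - 0.7) ** 2 + 0.05 * np.sin(8 * p)

samples = np.linspace(0.0, 1.5, 7)             # a limited number of runs
errors = np.array([expensive_model_error(p) for p in samples])
a, b, c = np.polyfit(samples, errors, 2)       # fit the quadratic surrogate
p_opt = -b / (2 * a)                           # vertex = estimated optimum
print(f"surrogate optimum at p = {p_opt:.3f}")
```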

  15. Vibration isolation by exploring bio-inspired structural nonlinearity.

    PubMed

    Wu, Zhijing; Jing, Xingjian; Bian, Jing; Li, Fengming; Allen, Robert

    2015-10-08

    Inspired by the limb structures that animals and insects use for motion and vibration control, a bio-inspired limb-like structure (LLS) is systematically studied for understanding and exploiting its advantageous nonlinear behavior in passive vibration isolation. The bio-inspired system consists of asymmetric articulations (of different rod lengths) with internal vertical and horizontal springs (acting as animal muscles) of different linear stiffness. Mathematical modeling and analysis of the proposed LLS reveal that (a) the system has very beneficial nonlinear stiffness, providing flexible quasi-zero, zero, and/or negative stiffness, and these nonlinear stiffness properties are adjustable or designable through the structural parameters; (b) the asymmetric rod-length ratio and spring-stiffness ratio are very useful factors for tuning the system's equivalent stiffness; and (c) the system's loading capacity is also adjustable through the structural parameters, which presents another flexible benefit in application. Experiments and comparisons with existing quasi-zero-stiffness isolators validate the advantageous features above, and some discussion is given of how to select structural parameters for practical applications. The results provide an innovative bio-inspired solution to passive vibration control in engineering practice.

  16. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the number of parameter dimensions grows, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated in both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track parameters outside the current range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving-average window to filter out noise. The system can be tuned to match the desired performance goals by adjusting parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, and sample delay.
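
    Since the approach leans on Latin hypercube sampling, a compact sketch of the generic LHS construction may help; the three parameter ranges below are arbitrary.

```python
# Latin hypercube sampling: each parameter axis is split into n strata and
# each stratum is hit exactly once, so coverage does not degrade as the
# number of dimensions grows.
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    dims = len(bounds)
    u = (rng.random((n_samples, dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(dims):                      # decorrelate the columns
        rng.shuffle(u[:, d])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)                  # scale strata to the bounds

rng = np.random.default_rng(3)
samples = latin_hypercube(10, [(0.0, 1.0), (5.0, 15.0), (-1.0, 1.0)], rng)
print(samples.shape)                           # 10 models cover 3 parameters
```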

  17. On application of asymmetric Kan-like exact equilibria to the Earth magnetotail modeling

    NASA Astrophysics Data System (ADS)

    Korovinskiy, Daniil B.; Kubyshkina, Darya I.; Semenov, Vladimir S.; Kubyshkina, Marina V.; Erkaev, Nikolai V.; Kiehas, Stefan A.

    2018-04-01

    A specific class of solutions of the Vlasov-Maxwell equations, developed by means of generalization of the well-known Harris-Fadeev-Kan-Manankova family of exact two-dimensional equilibria, is studied. The examined model reproduces the current sheet bending and shifting in the vertical plane, arising from the Earth dipole tilting and the solar wind nonradial propagation. The generalized model allows magnetic configurations with equatorial magnetic fields decreasing in the tailward direction as slowly as 1/x, contrary to the original Kan model (1/x^3); magnetic configurations with a single X point are also available. The analytical solution is compared with the empirical T96 model in terms of the magnetic flux tube volume. It is found that parameters of the analytical model may be adjusted to fit a wide range of averaged magnetotail configurations. The best agreement between the analytical and empirical models is obtained for the midtail at distances beyond 10-15 RE (Earth radii) at high levels of magnetospheric activity. The essential model parameters (current sheet scale, current density) are compared with Cluster data from magnetotail crossings. The best match of parameters is found for single-peaked current sheets with medium values of number density, proton temperature, and drift velocity.

  18. A Bayesian Model of the Memory Colour Effect.

    PubMed

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
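
    Because the model has no free parameters, it amounts to standard Gaussian cue combination; the sketch below shows the computation with invented numbers for the prior (the object's typical colour) and the likelihood (the sensory grey signal).

```python
# Gaussian prior x Gaussian likelihood: the posterior mean is a reliability-
# weighted average, pulling the grey adjustment toward the typical colour.
prior_mean, prior_var = 15.0, 25.0   # typical colour shift and its spread (assumed)
like_mean, like_var = 0.0, 4.0       # colourimetric grey, sensory noise (assumed)

w = (1 / like_var) / (1 / like_var + 1 / prior_var)   # weight on the sensory cue
posterior_mean = w * like_mean + (1 - w) * prior_mean
print(f"predicted shift toward the typical colour: {posterior_mean:.2f}")
```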

  19. A Bayesian Model of the Memory Colour Effect

    PubMed Central

    Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects. PMID:29760874

  20. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin-hypercube-sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
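
    The metric-choice point is easy to make concrete: each sampled parameter set is scored by an error metric such as Nash-Sutcliffe efficiency (NSE), and different metrics can crown different parameter sets. The sketch below scores toy simulations against invented observed flows.

```python
# NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
obs = 10.0 + 3.0 * np.sin(np.linspace(0, 6, 100))        # toy observed flows
candidates = {f"set{i}": obs + rng.normal(0, s, 100)     # toy simulated flows
              for i, s in enumerate([0.5, 1.0, 2.0])}
scores = {name: nse(sim, obs) for name, sim in candidates.items()}
print(max(scores, key=scores.get), scores)               # best-scoring set
```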

  1. Priming effect and microbial diversity in ecosystem functioning and response to global change: a modeling approach using the SYMPHONY model.

    PubMed

    Perveen, Nazia; Barot, Sébastien; Alvarez, Gaël; Klumpp, Katja; Martin, Raphael; Rapaport, Alain; Herfurth, Damien; Louault, Frédérique; Fontaine, Sébastien

    2014-04-01

    Integration of the priming effect (PE) in ecosystem models is crucial to better predict the consequences of global change on ecosystem carbon (C) dynamics and its feedbacks on climate. Over the last decade, many attempts have been made to model PE in soil. However, PE has not yet been incorporated into any ecosystem models. Here, we build plant/soil models to explore how PE and microbial diversity influence soil/plant interactions and ecosystem C and nitrogen (N) dynamics in response to global change (elevated CO2 and atmospheric N deposition). Our results show that plant persistence, soil organic matter (SOM) accumulation, and low N leaching in undisturbed ecosystems rely on a fine adjustment of microbial N mineralization to plant N uptake. This adjustment can be modeled in the SYMPHONY model by considering the destruction of SOM through PE, and the interactions between two microbial functional groups: SOM decomposers and SOM builders. After estimation of parameters, SYMPHONY provided realistic predictions on forage production, soil C storage and N leaching for a permanent grassland. Consistent with recent observations, SYMPHONY predicted a CO2-induced modification of soil microbial communities leading to an intensification of SOM mineralization and a decrease in the soil C stock. SYMPHONY also indicated that atmospheric N deposition may promote SOM accumulation via changes in the structure and metabolic activities of microbial communities. Collectively, these results suggest that the PE and the functional role of microbial diversity may be incorporated in ecosystem models with a few additional parameters, improving the accuracy of predictions. © 2013 John Wiley & Sons Ltd.

  2. Improved Conjugate Gradient Bundle Adjustment of Dunhuang Wall Painting Images

    NASA Astrophysics Data System (ADS)

    Hu, K.; Huang, X.; You, H.

    2017-09-01

    Bundle adjustment with additional parameters is identified as a critical step for precise orthoimage generation and 3D reconstruction of Dunhuang wall paintings. Due to the introduction of self-calibration parameters and quasi-planar constraints, the structure of the coefficient matrix of the reduced normal equation is banded-bordered, making the solving process of bundle adjustment complex. In this paper, the Conjugate Gradient Bundle Adjustment (CGBA) method is derived by calculus of variations. A preconditioning method based on improved incomplete Cholesky factorization is adopted to reduce the condition number of the coefficient matrix and to accelerate the convergence of CGBA. Both theoretical analysis and experimental comparison with the conventional method indicate that the proposed method can effectively overcome the ill-conditioning of the normal equation and considerably improve the computational efficiency of bundle adjustment with additional parameters, while maintaining the actual accuracy.
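
    The numerical core is a preconditioned conjugate gradient solve of the reduced normal equations; the sketch below uses SciPy's CG with a simple Jacobi preconditioner standing in for the paper's improved incomplete Cholesky factorization, on an invented banded SPD matrix.

```python
# Preconditioned CG on a banded SPD system N x = b, the shape of a reduced
# normal equation. A Jacobi (diagonal) preconditioner is used purely for
# illustration; a stronger incomplete-Cholesky one converges faster.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 500
N_mat = diags([1.0, 4.0, 1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

M_inv = diags(1.0 / N_mat.diagonal())          # cheap approximate inverse of N
x, info = cg(N_mat, b, M=M_inv)
print(info, np.linalg.norm(N_mat @ x - b))     # info == 0 means converged
```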

  3. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975

  4. Parameterization of a mesoscopic model for the self-assembly of linear sodium alkyl sulfates

    NASA Astrophysics Data System (ADS)

    Mai, Zhaohuan; Couallier, Estelle; Rakib, Mohammed; Rousseau, Bernard

    2014-05-01

    A systematic approach to develop mesoscopic models for a series of linear anionic surfactants (CH3(CH2)n - 1OSO3Na, n = 6, 9, 12, 15) by dissipative particle dynamics (DPD) simulations is presented in this work. The four surfactants are represented by coarse-grained models composed of the same head group and different numbers of identical tail beads. The transferability of the DPD model over different surfactant systems is carefully checked by adjusting the repulsive interaction parameters and the rigidity of surfactant molecules, in order to reproduce key equilibrium properties of the aqueous micellar solutions observed experimentally, including critical micelle concentration (CMC) and average micelle aggregation number (Nag). We find that the chain length is a good index to optimize the parameters and evaluate the transferability of the DPD model. Our models qualitatively reproduce the essential properties of these surfactant analogues with a set of best-fit parameters. It is observed that the logarithm of the CMC value decreases linearly with the surfactant chain length, in agreement with Klevens' rule. With the best-fit and transferable set of parameters, we have been able to calculate the free energy contribution to micelle formation per methylene unit of -1.7 kJ/mol, very close to the experimentally reported value.
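
    The quoted per-methylene free energy follows directly from the Klevens slope; as a worked check, assuming a typical slope of about -0.30 in log10(CMC) per added CH2 at 298 K:

```python
# Delta G per CH2 = ln(10) * R * T * (d log10(CMC) / dn)
import math

R, T = 8.314, 298.0                # gas constant (J/mol/K) and temperature (K)
slope_log10 = -0.30                # assumed Klevens slope per CH2
dG_per_CH2 = math.log(10) * R * T * slope_log10 / 1000.0
print(f"{dG_per_CH2:.2f} kJ/mol per CH2")   # about -1.71, matching the paper
```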

  5. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.

  6. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
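
    For reference, the histogram-equalization special case mentioned at the end takes only a few lines; this is the textbook construction, not the paper's seven-parameter mapping.

```python
# Classical histogram equalization: map each gray level through the image's
# empirical CDF so perceived brightness is spread across the full range.
import numpy as np

def equalize(image, levels=256):
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / image.size           # empirical CDF of gray levels
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[image]                          # apply the look-up table

rng = np.random.default_rng(4)
img = rng.integers(80, 140, size=(64, 64), dtype=np.uint8)  # low-contrast image
print(img.std(), equalize(img).std())          # spread (contrast) increases
```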

  7. Photo- and electroproduction of K+Λ with a unitarity-restored isobar model

    NASA Astrophysics Data System (ADS)

    Skoupil, D.; Bydžovský, P.

    2018-02-01

    Exploiting the isobar model, kaon photo- and electroproduction on the proton in the resonance region comes under scrutiny. An upgrade of our previous model, comprising higher-spin nucleon and hyperon exchanges in the consistent formalism, was accomplished by implementing energy-dependent widths of nucleon resonances, which leads to a different choice of hadron form factor with much softer values of the cutoff parameter for the resonant part. For a reliable description of electroproduction, it proved necessary to include longitudinal couplings of nucleon resonances to virtual photons. We present a new model whose free parameters were adjusted to photo- and electroproduction data and which provides a reliable overall description of experimental data in all kinematic regions. The majority of nucleon resonances chosen in this analysis coincide with those selected in our previous analysis and also in the Bayesian analysis with the Regge-plus-resonance model as the states contributing to this process with the highest probability.

  8. Advances in land modeling of KIAPS based on the Noah Land Surface Model

    NASA Astrophysics Data System (ADS)

    Koo, Myung-Seo; Baek, Sunghye; Seol, Kyung-Hee; Cho, Kyoungmi

    2017-08-01

    As of 2013, the Noah Land Surface Model (LSM) version 2.7.1 was implemented in a new global model being developed at the Korea Institute of Atmospheric Prediction Systems (KIAPS). This land surface scheme has been further refined in two respects: by adding new physical processes and by updating surface input parameters. The treatment of glacier land, sea ice, and snow cover is now addressed more realistically. Inconsistencies between the land surface and radiative processes in the amount of solar flux absorbed at ground level are rectified. In addition, new parameters are derived from 1-km land cover data, a resolution that had usually not been possible at the global scale. Land surface albedo/emissivity climatology is newly created using Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite-based data and adjusted parameterization. These updates have been applied to the KIAPS-developed model and generally provide a positive impact on near-surface weather forecasting.

  9. Independent-particle models for light negative atomic ions

    NASA Technical Reports Server (NTRS)

    Ganas, P. S.; Talman, J. D.; Green, A. E. S.

    1980-01-01

    For the purposes of astrophysical, aeronomical, and laboratory applications, a precise independent-particle model for electrons in negative atomic ions of the second and third periods is discussed. The optimum-potential model (OPM) of Talman et al. (1979) is first used to generate numerical potentials for eight of these ions. Results for total energies and electron affinities are found to be very close to Hartree-Fock (HF) solutions. However, the OPM and HF electron affinities both depart significantly from experimental affinities. For this reason, two analytic potentials are developed whose inner energy levels are very close to the OPM and HF levels but whose last-electron eigenvalues are adjusted to match the magnitudes of the experimental affinities precisely. These models are: (1) a four-parameter analytic characterization of the OPM potential, and (2) a two-parameter potential model of the Green-Sellin-Zachor type. The system O(-), or e-O, which is important in upper-atmospheric physics, is examined in some detail.

  10. Target switching in curved human arm movements is predicted by changing a single control parameter.

    PubMed

    Hoffmann, Heiko

    2011-01-01

    Straight-line movements have been studied extensively in the human motor-control literature, but little is known about how to generate curved movements and how to adjust them in a dynamic environment. The present work studied, for the first time to my knowledge, how humans adjust curved hand movements to a target that switches location. Subjects (n = 8) sat in front of a drawing tablet and looked at a screen. They moved a cursor on a curved trajectory (spiral or oval shaped) toward a goal point. In half of the trials, this goal switched 200 ms after movement onset to either one of two alternative positions, and subjects smoothly adjusted their movements to the new goal. To explain this adjustment, we compared three computational models: a superposition of curved and minimum-jerk movements (Flash and Henis in J Cogn Neurosci 3(3):220-230, 1991), Vector Planning (Gordon et al. in Exp Brain Res 99(1):97-111, 1994) adapted to curved movements (Rescale), and a nonlinear dynamical system, which could generate arbitrarily curved smooth movements and had a point attractor at the goal. For each model, we predicted the trajectory adjustment to the target switch by changing only the goal position in the model. As a result, the dynamical model could explain the observed switch behavior significantly better than the two alternative models (spiral: P = 0.0002 vs. Flash, P = 0.002 vs. Rescale; oval: P = 0.04 vs. Flash; P values obtained from a Wilcoxon test on R² values). We conclude that generalizing arbitrary hand trajectories to new targets may be explained by switching a single control command, without the need to re-plan or re-optimize the whole movement or superimpose movements.
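
    The winning dynamical-system account can be illustrated with a critically damped point attractor whose goal is switched mid-movement; the stiffness, damping, and goal positions below are illustrative, not the fitted model from the paper.

```python
# Point-attractor sketch: switching the goal variable mid-movement smoothly
# redirects the trajectory, with no re-planning of the whole movement.
import numpy as np

dt, k, d = 0.01, 25.0, 10.0                    # step, stiffness, damping (critical)
pos, vel = np.zeros(2), np.zeros(2)
goal = np.array([1.0, 0.0])

trajectory = []
for step in range(300):
    if step == 100:                            # goal switch after movement onset
        goal = np.array([0.5, 0.8])
    acc = k * (goal - pos) - d * vel           # attractor dynamics at the goal
    vel += dt * acc
    pos += dt * vel
    trajectory.append(pos.copy())
print(np.round(trajectory[-1], 3))             # settles at the new goal
```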

  11. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.

  12. Need of tetraiodothyronine supplemental therapy in pregnant women

    NASA Astrophysics Data System (ADS)

    Stoian, Dana; Craciunescu, Mihalea; Timar, Romulus; Schiller, Adalbert; Pater, Liana; Craina, Marius

    2013-10-01

    Thyroid hormones are essential for fetal development. Normal thyroid function adjusts by itself during pregnancy, a regulation that is deficient in women with pre-existing maternal thyroid disease. The study group comprised 120 females of reproductive age with known thyroid disease who carried a pregnancy to delivery. Thyroid ultrasound parameters and functional parameters were followed up during the 9 months of gestation. The study proposes a mathematical model for predicting the need for, and the amount of, tetraiodothyronine treatment in pregnant women with prevalent thyroid disease.

  13. Combined comfort model of thermal comfort and air quality on buses in Hong Kong.

    PubMed

    Shek, Ka Wing; Chan, Wai Tin

    2008-01-25

    Air-conditioning settings are important factors in controlling the comfort of passengers on buses. The local bus operators control in-bus air quality and the thermal environment by conforming to the prescribed levels stated in published standards. As a result, the settings are merely adjusted to fulfill the standards, rather than to satisfy the passengers' thermal comfort and air quality. Such "standard-oriented" practices are not appropriate; the passengers' preferences and satisfaction should be emphasized instead. Thus a "comfort-oriented" philosophy should be implemented to achieve a comfortable in-bus commuting environment. In this study, the achievement of a comfortable in-bus environment was examined with emphasis on thermal comfort and air quality. Both measurements of physical parameters and subjective questionnaire surveys were conducted to collect practical in-bus thermal and air-parameter data, as well as subjective satisfaction and sensation votes from the passengers. By analyzing the correlation between the objective and subjective data, combined comfort models were developed. The models helped in evaluating the percentage of dissatisfaction under various combinations of passengers' sensation votes towards thermal comfort and air quality. An effective approach integrating the combined comfort model, hardware and software systems, and the bus air-conditioning system could effectively control the transient in-bus environment. By processing and analyzing the data from the continuous monitoring system with the combined comfort model, air-conditioning setting adjustment commands could be determined and delivered to the hardware. This system adjusted air-conditioning settings depending on real-time commands along the bus journey. Therefore, comfortable in-bus air quality and a comfortable thermal environment could be achieved and efficiently maintained along the bus journey despite dynamic outdoor influences. Moreover, this model can help optimize air-conditioning control by striking a beneficial balance between energy conservation and passengers' satisfaction level.

  14. Single-bubble sonoluminescence in sulfuric acid and water: bubble dynamics, stability, and continuous spectra.

    PubMed

    Puente, Gabriela F; García-Martínez, Pablo; Bonetto, Fabián J

    2007-01-01

    We present theoretical calculations of an argon bubble in a liquid solution of 85 wt% sulfuric acid and 15 wt% water in single-bubble sonoluminescence. We used a model with no free parameters to adjust. We predict from first principles the region in parameter space for stable bubble evolution, the temporal evolution of the bubble radius, the maximum temperature and pressures, and the light spectra due to thermal emissions. We also used a partial-differential-equation-based model (hydrocode) to compute the temperature and pressure evolution at the center of the bubble during maximum compression. We found the behavior of this liquid mixture to be very different from water in several aspects. Most models in sonoluminescence have previously been compared against experimental results for water.

  15. An enhanced PM 2.5 air quality forecast model based on nonlinear regression and back-trajectory concentrations

    NASA Astrophysics Data System (ADS)

    Cobourn, W. Geoffrey

    2010-08-01

    An enhanced PM2.5 air quality forecast model based on nonlinear regression (NLR) and back-trajectory concentrations has been developed for use in the Louisville, Kentucky metropolitan area. The PM2.5 air quality forecast model is designed for use in the warm season, from May through September, when PM2.5 air quality is more likely to be critical for human health. The enhanced PM2.5 model consists of a basic NLR model, developed for use with an automated air quality forecast system, and an additional parameter based on upwind PM2.5 concentration, called PM24. The PM24 parameter is designed to be determined manually, by synthesizing backward air trajectory and regional air quality information to compute 24-h back-trajectory concentrations. The PM24 parameter may be used by air quality forecasters to adjust the forecast provided by the automated forecast system. In this study of the 2007 and 2008 forecast seasons, the enhanced model performed well using forecasted meteorological data and PM24 as input. The enhanced PM2.5 model was compared with three alternative models: the basic NLR model, the basic NLR model with a persistence parameter added, and the NLR model with both persistence and PM24. The two models that included PM24 were of comparable accuracy, and both had lower mean absolute errors and higher rates of detecting unhealthy PM2.5 concentrations than the models without back-trajectory concentrations.

  16. A model of the productivity of the northern pintail

    USGS Publications Warehouse

    Carlson, J.D.; Clark, W.R.; Klaas, E.E.

    1993-01-01

    We adapted a stochastic computer model to simulate productivity of the northern pintail (Anas acuta). Researchers at the Northern Prairie Wildlife Research Center of the U.S. Fish and Wildlife Service originally developed the model to simulate productivity of the mallard (A. platyrhynchos). We obtained data and descriptive information on the breeding biology of pintails from a literature review and from discussions with waterfowl biologists. All biological parameters in the productivity component of the mallard model (e.g., initial body weights, weight loss during laying and incubation, incubation time, clutch size, nest site selection characteristics) were compared with data on pintails and adjusted accordingly. The function in the mallard model that predicts nest initiation in response to pond conditions adequately mimicked pintail behavior and did not require adjustment. Recruitment rate was most sensitive to variations in parameters that control nest success, seasonal duckling survival rate, and yearling and adult body weight. We simulated upland and wetland habitat conditions in central North Dakota and compared simulation results with observed data. Simulated numbers were not significantly different from observed numbers of successful nests during wet, average, and dry wetland conditions. The simulated effect of predator barrier fencing in a study area in central North Dakota increased recruitment rate by an average of 18.4%. This modeling synthesized existing knowledge on the breeding biology of the northern pintail, identified necessary research, and furnished a useful tool for the examination and comparison of various management options.

  17. Beyond Adjustment: Parameters of Successful Resolution of Bereavement.

    ERIC Educational Resources Information Center

    Rubin, Simon Shimshon

    The problem of human response to loss is complex. To approach understanding of this process it is valuable to use a number of models. Phenomenologically the application of a temporal matrix divides the reaction into three useful heuristic and empirical stages: initial, acute grief (1-3 months); mourning (1-2 years); and post-mourning, with no set…

  18. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
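    Both halves of this idea are easy to demonstrate. The sketch below (hypothetical linear model and data) first reads the parameter SEs off the covariance diagonal, then obtains a propagated error by reparameterizing the fit so that the target quantity is itself an adjustable parameter, which is one common version of the trick.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 25)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # noisy straight line

    # SEs of the adjustable parameters: sqrt of the covariance diagonal.
    popt, pcov = curve_fit(lambda x, a, b: a * x + b, x, y)
    se = np.sqrt(np.diag(pcov))

    # Propagated error of y(x0): make y0 = y(x0) an adjustable parameter
    # by rewriting the same line as y = a*(x - x0) + y0.
    x0 = 4.0
    popt2, pcov2 = curve_fit(lambda x, a, y0: a * (x - x0) + y0, x, y)
    se_y0 = np.sqrt(pcov2[1, 1])  # SE of y(x0), read off directly
    ```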

  19. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.

  20. Effectiveness and limitations of parameter tuning in reducing biases of top-of-atmosphere radiation and clouds in MIROC version 5

    NASA Astrophysics Data System (ADS)

    Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro; Yoshimori, Masakazu; Yokohata, Tokuta; Annan, James D.; Hargreaves, Julia C.; Ushigami, Naoto; Hirota, Kazuya; Someya, Yu; Kamae, Youichi; Tatebe, Hiroaki; Kimoto, Masahide

    2017-12-01

    This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Intercomparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere-ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude-longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments, which provide useful information regarding the effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.

  1. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  2. Genetic value of herd life adjusted for milk production.

    PubMed

    Allaire, F R; Gibson, J P

    1992-05-01

    Cow herd life adjusted for lactational milk production was investigated as a genetic trait in the breeding objective. Under a simple model, the relative economic weight of milk to adjusted herd life, on a per-genetic-standard-deviation basis, equals CV_Y/(d·CV_L), where CV_Y and CV_L are the genetic coefficients of variation of milk production and adjusted herd life, respectively, and d is the depreciation per year per cow divided by the total fixed costs per year per cow. The relative economic value of milk to adjusted herd life at the prices and parameters for North America was about 3.2. An increase of 100 kg of milk was equivalent to 2.2 mo of adjusted herd life. Three to 7% lower economic gain is expected when only improved milk production is sought, compared with a breeding objective that includes both production and adjusted herd life, for relative values changed by ±20%. A favorable economic gain-to-cost ratio probably exists for herd life used as a genetic trait to supplement milk in the breeding objective. Cow survival records are inexpensive, and herd life evaluations from such records may not extend the generation interval when such an evaluation is used in bull sire selection.
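    A worked instance of the stated weighting rule, with hypothetical input values chosen so the result reproduces the reported North American figure of about 3.2:

    ```python
    # Relative economic weight of milk to adjusted herd life per genetic
    # standard deviation: CV_Y / (d * CV_L). All three inputs below are
    # illustrative, not the paper's actual values.
    cv_milk = 0.08       # CV_Y: genetic CV of milk production
    cv_herd_life = 0.10  # CV_L: genetic CV of adjusted herd life
    d = 0.25             # depreciation per cow-year / total fixed costs per cow-year
    print(cv_milk / (d * cv_herd_life))  # -> 3.2
    ```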

  3. Digital-computer model of ground-water flow in Tooele Valley, Utah

    USGS Publications Warehouse

    Razem, Allan C.; Bartholoma, Scott D.

    1980-01-01

    A two-dimensional, finite-difference digital-computer model was used to simulate the ground-water flow in the principal artesian aquifer in Tooele Valley, Utah. The parameters used in the model were obtained through field measurements and tests, from historical records, and by trial-and-error adjustments. The model was calibrated against observed water-level changes that occurred during 1941-50, 1951-60, 1961-66, 1967-73, and 1974-78. The reliability of the predictions is good in most parts of the valley, as is shown by the ability of the model to match historical water-level changes.

  4. Analytical fitting model for rough-surface BRDF.

    PubMed

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.

  5. Study and performances analysis of fuel cell assisted vector control variable speed drive system used for electric vehicles

    NASA Astrophysics Data System (ADS)

    Pachauri, Rupendra Kumar; Chauhan, Yogesh K.

    2017-02-01

    This paper is a novel attempt to combine two important aspects of fuel cell (FC) research. First, it presents investigations of FC technology and its applications: a description of FC operating principles is followed by a comparative analysis of present FC technologies, together with the issues concerning various fuels. Second, the paper proposes a model for the simulation and performance evaluation of a proton exchange membrane fuel cell (PEMFC) generation system. A MATLAB/Simulink-based dynamic model of the PEMFC is developed, and the FC parameters are adjusted so as to emulate a commercially available PEMFC. System results are obtained for the PEMFC-driven adjustable-speed induction motor drive (ASIMD) system, normally used in electric vehicles, and the analysis is carried out for different operating conditions of the FC and ASIMD system. The obtained results validate the system concept and modelling.

  6. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    NASA Astrophysics Data System (ADS)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model poses a challenge in calibrating its parameters and analyzing their uncertainty. This paper chooses the Chaohe River Basin, China, as the study area. After the SWAT model is set up and the DEM data of the Chaohe River basin are loaded, the watershed is automatically divided into several sub-basins. Land use, soil, and slope are analyzed on the basis of the sub-basins to compute the hydrological response units (HRUs) of the study area; running the SWAT model then yields the simulated runoff for the watershed. On this basis, weather data and known daily runoff from three hydrological stations, combined with the SWAT-CUP automatic calibration program and manual adjustment, are used for multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. The sensitivity analysis, calibration, and uncertainty study indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible, and the model can be used to simulate the Chaohe River basin.
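    The GLUE step admits a compact description: sample parameter sets, score each model run against observed flow with a likelihood measure, and keep the "behavioral" sets above a threshold. A minimal sketch follows, using a Nash-Sutcliffe likelihood; `run_swat`, the bounds, and the threshold are hypothetical placeholders for the actual SWAT run and settings.

    ```python
    import numpy as np

    def nash_sutcliffe(sim, obs):
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def glue(run_swat, obs, bounds, n=1000, threshold=0.5, seed=0):
        """Keep parameter sets whose simulated flow scores above the
        behavioral threshold; scores can weight uncertainty bands."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        behavioral = []
        for _ in range(n):
            theta = rng.uniform(lo, hi)               # one candidate parameter set
            score = nash_sutcliffe(run_swat(theta), obs)
            if score >= threshold:
                behavioral.append((theta, score))
        return behavioral
    ```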

  7. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

    The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions allowed us 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with hospital performance derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records (46% of the total CABG Project) were considered for subsequent analyses, defining a new selected "clinical card-HDR" population. Two independent risk-adjustment models were applied, each using information derived from one of the two sources. Then, HDR information was supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their adaptability to the data, and the hospital performances that each model identified as significantly different from the mean were compared. In only 4 of the 13 hospitals considered for analysis did the results obtained using the HDR model not completely overlap with those obtained by the CABG model. When comparing the statistical parameters of the HDR model and of the HDR model plus patient preoperative conditions, the latter showed better adaptability to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  8. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation gives realistic temporal and spatial variations of τ across the continental U.S., as well as realistic grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36- to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  9. Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Ubani, Nora

    2016-05-01

    The present work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimates with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimates compared with the measured data, focusing on the variables most important from the simulation point of view: temperatures, pressures, and mass flow of the solar field; gross power; parasitic power; and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete-year simulation showed very good agreement for the gross and net total electricity production, with biases of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.

  10. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case-crossover) format, but this has some limitations; in particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overhead of estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computation and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the methods to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case-crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
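    The stratum-indicator Poisson model that the conditional version improves on is straightforward to write down; a sketch on simulated data follows (the data-generating process and column names are hypothetical). The conditional Poisson model itself eliminates the per-stratum parameters by conditioning on stratum totals, using the packages the paper mentions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 365 * 3
    idx = np.arange(n)
    df = pd.DataFrame({"temp": rng.normal(15.0, 8.0, n)})
    # Time-stratified strata: roughly calendar month x day of week.
    df["stratum"] = (pd.Series(idx // 30).astype(str) + "_"
                     + pd.Series(idx % 7).astype(str))
    df["deaths"] = rng.poisson(np.exp(2.0 + 0.01 * df["temp"]))

    # Unconditional Poisson with one indicator per stratum: gives the
    # same exposure estimate as conditional logistic regression on
    # case-crossover data, at the cost of many nuisance parameters.
    fit = smf.glm("deaths ~ temp + C(stratum)", data=df,
                  family=sm.families.Poisson()).fit()
    print(fit.params["temp"])
    ```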

  11. Delayed heart rate recovery after exercise as a risk factor of incident type 2 diabetes mellitus after adjusting for glycometabolic parameters in men.

    PubMed

    Yu, Tae Yang; Jee, Jae Hwan; Bae, Ji Cheol; Hong, Won-Jung; Jin, Sang-Man; Kim, Jae Hyeon; Lee, Moon-Kyu

    2016-10-15

    Some studies have reported that delayed heart rate recovery (HRR) after exercise is associated with incident type 2 diabetes mellitus (T2DM). This study aimed to investigate the longitudinal association of delayed HRR following a graded exercise treadmill test (GTX) with the development of T2DM, including glucose-associated parameters as adjusting factors, in healthy Korean men. Analyses including fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c as confounding factors, together with known confounders, were performed. HRR was calculated as peak heart rate minus heart rate after a 1-min rest (HRR 1). A Cox proportional hazards model was used to quantify the independent association between HRR and incident T2DM. During 9082 person-years of follow-up between 2006 and 2012, there were 180 (10.1%) incident cases of T2DM. After adjustment for age, BMI, systolic BP, diastolic BP, smoking status, peak heart rate, peak oxygen uptake, TG, LDL-C, HDL-C, fasting plasma glucose, HOMA-IR, HOMA-β, and HbA1c, the hazard ratios (HRs) [95% confidence interval (CI)] for incident T2DM comparing the second and third tertiles to the first tertile of HRR 1 were 0.867 (0.609-1.235) and 0.624 (0.426-0.915), respectively (p for trend = 0.017). As a continuous variable, in the fully adjusted model, the HR (95% CI) of incident T2DM associated with each 1-beat increase in HRR 1 was 0.980 (0.960-1.000) (p = 0.048). This study demonstrated that delayed HRR after exercise predicts incident T2DM in men, even after adjusting for fasting glucose, HOMA-IR, HOMA-β, and HbA1c. However, only HRR 1 had clinical significance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
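    A minimal sketch of this kind of Cox fit in Python, assuming the lifelines package; the file name and column names are hypothetical:

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # One row per subject: follow-up time, T2DM event indicator, HRR 1,
    # and adjustment covariates (only a few of the paper's list shown).
    df = pd.read_csv("cohort.csv")
    cols = ["followup_years", "t2dm", "hrr1", "age", "bmi",
            "fasting_glucose", "homa_ir", "hba1c"]

    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="followup_years", event_col="t2dm")
    cph.print_summary()  # exp(coef) for hrr1 = HR per 1-beat increase in HRR 1
    ```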

  12. Factor weighting in DRASTIC modeling.

    PubMed

    Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F

    2015-02-01

    Evaluation of aquifer vulnerability involves the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports a comparison of five weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression, and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, the adjustment techniques privileged the factors representing soil characteristics, hydrologic settings, aquifer properties, and environmental parameters, leveling their weights to ≈4.4, and subordinated the factors describing the aquifer media, downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum of 51 points. This represents an uncertainty of 2.5 vulnerability classes, since each class is 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following the general criterion that weights should be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique. Copyright © 2014 Elsevier B.V. All rights reserved.
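    The index computation itself is a weighted sum; a sketch for one mapping unit, using the original Delphi (DRASTIC) weights and hypothetical ratings:

    ```python
    # DRASTIC intrinsic vulnerability index = sum of weight x rating over
    # the seven factors: Depth to water, net Recharge, Aquifer media,
    # Soil media, Topography, Impact of the vadose zone, hydraulic
    # Conductivity. Weights are the original Delphi weights; the ratings
    # (1-10) are hypothetical values for a single mapping unit.
    weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
    ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 10, "I": 4, "C": 2}
    index = sum(weights[f] * ratings[f] for f in weights)
    print(index)  # 131 for these ratings
    ```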

  13. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso- and global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurement is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to road, roof, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.

  14. Model-data fusion across ecosystems: from multisite optimizations to global simulations

    NASA Astrophysics Data System (ADS)

    Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.

    2014-11-01

    This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters that enhance the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performance in tropical evergreen broadleaf forests points to deficiencies in the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctly improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index) measurements indicates an improvement of the simulated seasonal variations of the foliar cover for all considered PFTs.

  15. Modeling Effects of RNA on Capsid Assembly Pathways via Coarse-Grained Stochastic Simulation

    PubMed Central

    Smith, Gregory R.; Xie, Lu; Schwartz, Russell

    2016-01-01

    The environment of a living cell is vastly different from that of an in vitro reaction system, an issue that presents great challenges to the use of in vitro models, or computer simulations based on them, for understanding biochemistry in vivo. Virus capsids make an excellent model system for such questions because they typically have few distinct components, making them amenable to in vitro and modeling studies, yet their assembly can involve complex networks of possible reactions that cannot be resolved in detail by any current experimental technology. We previously fit kinetic simulation parameters to bulk in vitro assembly data to yield a close match between simulated and real data, and then used the simulations to study features of assembly that cannot be monitored experimentally. The present work seeks to project how assembly in these simulations, fit to in vitro data, would be altered by computationally adding features of the cellular environment to the system, specifically the presence of the nucleic acid about which many capsids assemble. The major challenge of such work is computational: simulating fine-scale assembly pathways on the scale and in the parameter domains of real viruses is far too computationally costly to allow for explicit models of nucleic acid interaction. We bypass that limitation by applying analytical models of nucleic acid effects to adjust kinetic rate parameters learned from in vitro data, to see how these adjustments, singly or in combination, might affect fine-scale assembly progress. The resulting simulations exhibit surprising behavioral complexity, with distinct effects often acting synergistically to drive efficient assembly and alter pathways relative to the in vitro model. The work demonstrates how computer simulations can help us understand how assembly might differ between the in vitro and in vivo environments and what features of the cellular environment account for these differences. PMID:27244559

  16. The effect of loudness on the reverberance of music: reverberance prediction using loudness models.

    PubMed

    Lee, Doheon; Cabrera, Densil; Martens, William L

    2012-02-01

    This study examines the auditory attribute that describes the perceived amount of reverberation, known as "reverberance." Listening experiments were performed using two signals commonly heard in auditoria: excerpts of orchestral music and western classical singing. Listeners adjusted the decay rate of room impulse responses prior to convolution with these signals, so as to match the reverberance of each stimulus to that of a reference stimulus. The analysis examines the hypothesis that reverberance is related to the loudness decay rate of the underlying room impulse response. This hypothesis is tested using computational models of time varying or dynamic loudness, from which parameters analogous to conventional reverberation parameters (early decay time and reverberation time) are derived. The results show that listening level significantly affects reverberance, and that the loudness-based parameters outperform related conventional parameters. Results support the proposed relationship between reverberance and the computationally predicted loudness decay function of sound in rooms. © 2012 Acoustical Society of America
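    For orientation, the conventional parameters that the loudness-based analogues mirror are computed from the decay of a room impulse response. Below is a sketch for early decay time (EDT) via Schroeder backward integration, assuming an impulse response array h and sampling rate fs are given:

    ```python
    import numpy as np

    def edt(h, fs):
        """Early decay time: fit the 0 to -10 dB portion of the Schroeder
        decay curve and extrapolate the slope to -60 dB."""
        energy = np.cumsum(h[::-1] ** 2)[::-1]             # Schroeder integral
        decay_db = 10.0 * np.log10(energy / energy[0])
        t = np.arange(h.size) / fs
        mask = decay_db >= -10.0                           # first 10 dB of decay
        slope, _ = np.polyfit(t[mask], decay_db[mask], 1)  # dB per second
        return -60.0 / slope                               # seconds to fall 60 dB
    ```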

  17. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    NASA Astrophysics Data System (ADS)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.

  18. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.

  19. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera that considers lateral as well as depth distortion; in particular, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  20. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model targeted towards a TCAD production environment, calibrated against an extensive data set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data set. (ii) A kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  1. Design of experiment for earth rotation and baseline parameter determination from very long baseline interferometry

    NASA Technical Reports Server (NTRS)

    Dermanis, A.

    1977-01-01

    The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The numerical simulated experiments performed are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The parametrization of earth rotation chosen is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.

  2. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  3. Inspiration of slip effects on electromagnetohydrodynamics (EMHD) nanofluid flow through a horizontal Riga plate

    NASA Astrophysics Data System (ADS)

    Ayub, M.; Abbas, T.; Bhatti, M. M.

    2016-06-01

    The boundary layer flow of an electrically conducting nanofluid over a Riga plate is considered. The Riga plate is an electromagnetic actuator comprising a spanwise-aligned array of alternating electrodes and permanent magnets mounted on a plane surface. The numerical model incorporates the Brownian motion and thermophoresis effects due to the nanofluid, and the Grinberg term for the wall-parallel Lorentz force due to the Riga plate, in the presence of slip effects. The numerical solution of the problem is obtained using the shooting method. The influence of all the physical parameters, such as the modified Hartmann number, Richardson number, nanoparticle concentration flux parameter, Prandtl number, Lewis number, thermophoresis parameter, Brownian motion parameter, and slip parameter, is demonstrated graphically. Numerical values of the reduced Nusselt and Sherwood numbers are discussed in detail.

  4. Modeling the Infrared Spectra of Earth-Analog Exoplanets

    NASA Astrophysics Data System (ADS)

    Nixon, C.

    2014-04-01

    As a preparation for future observations with the James Webb Space Telescope (JWST) and other facilities, we have undertaken to model the infrared spectra of Earth-like exoplanets. Two atmospheric models were used: the modern (low CO2) and archean (high CO2) predictive models of the Kasting group at Penn State. Several model parameters, such as distance to the star and stellar type (visible-UV spectrum), were adjusted, and the models reconverged. Subsequently, the final model atmospheres were input to a radiative transfer code (NEMESIS) and the results intercompared to search for the most significant spectral changes. Implications for exoplanet spectrum detectivity will be discussed.

  5. Characterizing Facial Skin Ageing in Humans: Disentangling Extrinsic from Intrinsic Biological Phenomena

    PubMed Central

    Trojahn, Carina; Dobos, Gabor; Lichterfeld, Andrea; Blume-Peytavi, Ulrike; Kottner, Jan

    2015-01-01

    Facial skin ageing is caused by intrinsic and extrinsic mechanisms. Intrinsic ageing is highly related to chronological age. Age-related skin changes can be measured using clinical and biophysical methods. The aim of this study was to evaluate whether and how clinical characteristics and biophysical parameters are associated with each other, with and without adjustment for chronological age. Twenty-four female subjects in three age groups were enrolled. Clinical assessments (global facial skin ageing, wrinkling, and sagging) and biophysical measurements (roughness, colour, skin elasticity, and barrier function) were conducted at both upper cheeks. Pearson's correlations and linear regression models adjusted for age were calculated. Most of the measured parameters were correlated with chronological age (e.g., association with wrinkle score, r = 0.901) and with each other (e.g., residual skin deformation and wrinkle score, r = 0.606). After statistical adjustment for age, only a few associations remained (e.g., mean roughness (Rz) and luminance (L*), β = −0.507, R² = 0.377). Chronological age, as a surrogate marker for intrinsic ageing, has the most important influence on most facial skin ageing signs. Changes in skin elasticity, wrinkling, sagging, and yellowness seem to be caused by additional extrinsic ageing. PMID:25767806

  6. Vertical Eddy Diffusivity as a Control Parameter in the Tropical Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Martinez Avellaneda, N.; Cornuelle, B.; Mazloff, M. R.; Stammer, D.

    2012-12-01

    Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions causing mixing and energy dissipation. Unrealistic small-scale features in models can have large-scale consequences, such as biases in upper-ocean temperature, a symptom of poorly simulated upwelling, currents, and air-sea interactions. This is of special importance in the tropical Pacific Ocean, which is home to energetic air-sea interactions that affect global climate. A number of studies have shown that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. Yet it is common practice to use spatially uniform mixing parameters in ocean simulations. This work is part of a NASA-funded project to estimate the spatially varying ocean mixing coefficients in an eddy-permitting model of the tropical Pacific. The usefulness of assimilation techniques in estimating mixing parameters has been explored previously (e.g., Stammer, 2005; Ferreira et al., 2005). Those authors also demonstrated that the spatial structure of the Equatorial Undercurrent (EUC) could be improved by adjusting wind stress and surface buoyancy flux within their error bounds. In our work, we address the important question of whether adjusting mixing parameterizations can bring about similar improvements. To that end, an eddy-permitting state estimate for the tropical Pacific is developed using the MIT general circulation model and its adjoint, with the vertical diffusivity set as a control parameter. Complementary adjoint-based sensitivity results show strong sensitivities of the tropical Pacific thermocline (thickness and location) and the EUC transport to the vertical diffusivity in the tropics. Argo, CTD, XBT, and mooring in-situ data, as well as TMI SST and altimetry observations, are assimilated in order to reduce the misfit between the model simulations and the ocean observations. (Figure caption: model domain topography at 1/3° spatial resolution, interpolated from ETOPO2; the first and last color levels represent regions shallower than 100 m and deeper than 5000 m, respectively.)

  7. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    PubMed

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-07

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume, and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the optimum growth temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins from differently adapted sources, thus explaining the great compactness of (hyper)thermophilic proteins and the high flexibility of psychrophilic ones. Indeed, heat-adapted proteins incline toward the usage of heavier and more hydrophobic residues with respect to mesophiles, whereas cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation of the different increases of bulky residues with growth temperature observed across the six model proteins suggests the relevance of the possibly different roles and/or structural organization of protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was carried out collectively on all model proteins.
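    The composition-based parameters involved are simple per-residue averages over a sequence. A sketch for two of them follows (average residue mass, and average hydropathy via the Kyte-Doolittle scale as one common choice; the example sequence is hypothetical):

    ```python
    # Kyte-Doolittle hydropathy values and average residue masses (Da).
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
          "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
          "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
          "Y": -1.3, "V": 4.2}
    MASS = {"A": 71.08, "R": 156.19, "N": 114.10, "D": 115.09, "C": 103.14,
            "Q": 128.13, "E": 129.12, "G": 57.05, "H": 137.14, "I": 113.16,
            "L": 113.16, "K": 128.17, "M": 131.19, "F": 147.18, "P": 97.12,
            "S": 87.08, "T": 101.10, "W": 186.21, "Y": 163.18, "V": 99.13}

    def per_residue_averages(seq):
        n = len(seq)
        return (sum(MASS[a] for a in seq) / n,  # average residue mass (Da)
                sum(KD[a] for a in seq) / n)    # average hydropathy

    print(per_residue_averages("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
    ```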

  8. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation model to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested in a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm whose initial damping parameter and iterative adjustment factor are tuned once and then held fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, automated jump-out and jump-back-in steps are implemented in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can thus be used to process various types of EM measurements without major changes to the scheme itself.
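
    A minimal sketch of this kind of bounded inversion, assuming scipy's trust-region least-squares solver as a stand-in for the paper's bounded Levenberg algorithm (the automated jump-out/jump-back-in safeguards are not reproduced); the Cole-Cole parameter values and noise level are invented.

    ```python
    # Bounded least-squares fit of the four Cole-Cole parameters to noisy
    # multi-frequency complex-permittivity data (illustrative values).
    import numpy as np
    from scipy.optimize import least_squares

    def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha):
        """Cole-Cole relaxation: eps_inf + d_eps / (1 + (i w tau)^(1-alpha))."""
        w = 2.0 * np.pi * freq_hz
        return eps_inf + d_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))

    def residuals(p, freq_hz, data):
        diff = cole_cole(freq_hz, *p) - data
        return np.concatenate([diff.real, diff.imag])  # stack real and imag parts

    rng = np.random.default_rng(1)
    freq = np.logspace(1, 7, 40)                       # 10 Hz .. 10 MHz
    true = (5.0, 70.0, 1.0e-5, 0.15)
    data = cole_cole(freq, *true) + 0.3 * (rng.standard_normal(40)
                                           + 1j * rng.standard_normal(40))

    # Random-style initialization far from the truth, as in the tests above.
    p0 = (2.0, 10.0, 1.0e-3, 0.4)
    fit = least_squares(residuals, p0, args=(freq, data),
                        bounds=([1.0, 0.0, 1.0e-9, 0.0], [100.0, 500.0, 1.0, 1.0]))
    print(fit.x)  # should land close to (5, 70, 1e-5, 0.15)
    ```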

  9. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early on, PBDHMs were assumed to derive their parameters directly from terrain properties, with no need for calibration; unfortunately, the uncertainties associated with such parameter derivation are very high, which has hampered their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for PBDHMs in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, and to test and improve its performance; the second is to explore the possibility of improving PBDHM capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model. The improvements include adoption of a linearly decreasing inertia-weight strategy and an arccosine-function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, thus demonstrating that parameter optimization is necessary to improve the flood forecasting capability of PBDHMs. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
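
    A compact sketch of PSO with the linearly decreasing inertia weight described above (the defaults of 20 particles and 30 iterations echo the values the paper found appropriate); the arccosine adjustment of the acceleration coefficients is omitted in favor of fixed c1 and c2, and a quadratic test function stands in for the Liuxihe model's calibration objective.

    ```python
    # PSO with a linearly decreasing inertia weight (illustrative sketch).
    import numpy as np

    def pso(objective, bounds, n_particles=20, n_iter=30,
            w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
        v = np.zeros_like(x)                               # velocities
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for it in range(n_iter):
            # Inertia weight decreases linearly from w_start to w_end.
            w = w_start - (w_start - w_end) * it / (n_iter - 1)
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    # Quadratic bowl standing in for a flood-forecasting calibration misfit.
    bounds = np.array([[-5.0, 5.0]] * 4)
    best, best_f = pso(lambda p: float(np.sum((p - 1.0) ** 2)), bounds)
    print(best, best_f)
    ```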

  10. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
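
    A minimal sketch of a parametric likelihood approximation inside a conventional Metropolis-Hastings sampler, in the spirit of the approach described above: at each proposal, replicate simulations are summarized and a normal density is fitted to them. The one-parameter stochastic "simulator" is an invented stand-in for a model like FORMIND.

    ```python
    # Synthetic-likelihood MCMC sketch: a normal approximation to the
    # distribution of a simulated summary statistic serves as the likelihood.
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_summary(theta, n_rep=50):
        """Stand-in stochastic model: n_rep replicate summary statistics."""
        return theta + 0.5 * rng.standard_normal(n_rep)

    def synthetic_loglik(theta, s_obs):
        s = simulate_summary(theta)
        mu, sd = s.mean(), s.std(ddof=1)     # parametric (normal) approximation
        return -0.5 * ((s_obs - mu) / sd) ** 2 - np.log(sd)

    s_obs = 1.3                              # "observed" summary statistic
    theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
    chain = []
    for _ in range(5000):
        prop = theta + 0.3 * rng.standard_normal()
        ll_prop = synthetic_loglik(prop, s_obs)
        if np.log(rng.random()) < ll_prop - ll:   # flat prior; accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    print(np.mean(chain[1000:]))             # posterior mean near 1.3
    ```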

  11. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  12. System identification for modeling for control of flexible structures

    NASA Technical Reports Server (NTRS)

    Mettler, Edward; Milman, Mark

    1986-01-01

    The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes decrease. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.

  13. Prognostic characteristics of the lowest-mode internal waves in the Sea of Okhotsk

    NASA Astrophysics Data System (ADS)

    Kurkin, Andrey; Kurkina, Oxana; Zaytsev, Andrey; Rybin, Artem; Talipova, Tatiana

    2017-04-01

    The nonlinear dynamics of short-period internal waves on ocean shelves is well described by generalized nonlinear evolutionary models of Korteweg-de Vries type. Parameters of these models, such as the long-wave propagation speed and the nonlinear and dispersive coefficients, can be calculated from hydrological data (sea water density stratification) and therefore have geographical and seasonal variations. The internal wave parameters for the basin of the Sea of Okhotsk are computed on the basis of a recent version of the hydrological data source GDEM V3.0. The geographical and seasonal variability of the internal wave characteristics is investigated. It is shown that annually or seasonally averaged data can be used for the linear parameters; the nonlinear parameters are more sensitive to temporal averaging of the hydrological data, and more detailed data are preferable. Zones where the nonlinear parameters change sign (so-called "turning points") are identified. Possible internal waveforms appearing in the process of internal tide transformation, including solitary waves changing polarity, are simulated for the hydrological conditions of the Sea of Okhotsk shelf to demonstrate different scenarios of internal wave adjustment, transformation, refraction and cylindrical divergence.
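
    For the simplest stratification, a two-layer fluid, the Korteweg-de Vries coefficients have familiar closed forms; the sketch below (with invented depths and reduced gravity, not the GDEM V3.0 profiles used in the study) shows the quadratic nonlinear coefficient changing sign where the layer depths coincide, which is a "turning point" in the sense used above.

    ```python
    # Two-layer (Boussinesq) KdV coefficients: long-wave speed c, quadratic
    # nonlinearity alpha, dispersion beta. Values are illustrative only.
    import numpy as np

    def kdv_coefficients(h1, h2, g_reduced=0.02):
        c = np.sqrt(g_reduced * h1 * h2 / (h1 + h2))   # long-wave speed (m/s)
        alpha = 1.5 * c * (h1 - h2) / (h1 * h2)        # changes sign at h1 = h2
        beta = c * h1 * h2 / 6.0                        # dispersive coefficient
        return c, alpha, beta

    for h1 in (20.0, 50.0, 80.0):                       # upper-layer depth (m)
        c, a, b = kdv_coefficients(h1, h2=100.0 - h1)
        print(f"h1={h1:4.0f} m: c={c:.2f} m/s, alpha={a:+.4f} 1/s, beta={b:.1f} m^3/s")
    ```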

  14. Computer modeling of electrical performance of detonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.

    1995-05-01

    An empirical model of detonator electrical performance is described, which represents the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI, or slapper) as a function of energy deposition. This model features many parameters that can be adjusted to obtain a close fit to experimental data, as has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper is a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.

  15. Lattice model calculation of elastic and thermodynamic properties at high pressure and temperature. [for alkali halides in NaCl lattice

    NASA Technical Reports Server (NTRS)

    Demarest, H. H., Jr.

    1972-01-01

    The elastic constants and the entire frequency spectrum were calculated up to high pressure for the alkali halides in the NaCl lattice, based on an assumed functional form of the inter-atomic potential. The quasiharmonic approximation is used to calculate the vibrational contribution to the pressure and the elastic constants at arbitrary temperature. By explicitly accounting for the effects of thermal and zero-point motion, the adjustable parameters in the potential are determined to a high degree of accuracy from the elastic constants and their pressure derivatives measured at zero pressure. The calculated Grüneisen parameter, elastic constants and pressure derivatives are in good agreement with experimental results up to about 600 K. The model predicts that for some alkali halides the Grüneisen parameter may decrease monotonically with pressure, while for others it may increase with pressure after an initial decrease.

  16. Mathematical models of radiation action on living cells: From the target theory to the modern approaches. A historical and critical review.

    PubMed

    Bodgi, Larry; Canet, Aurélien; Pujo-Menjouet, Laurent; Lesne, Annick; Victor, Jean-Marc; Foray, Nicolas

    2016-04-07

    Cell survival is conventionally defined as the capability of irradiated cells to produce colonies. It is quantified by clonogenic assays, which consist of determining the number of colonies resulting from a known number of irradiated cells. Several mathematical models have been proposed to describe the survival curves, notably those arising from the target theory. The Linear-Quadratic (LQ) model, which is to date the most frequently used model in radiobiology and radiotherapy, dominates all the other models by its robustness and simplicity. Its usefulness is particularly important because the ratio of its two adjustable parameters, α and β, predicts the occurrence of post-irradiation tissue reactions. However, the biological interpretation of these parameters is still unknown. Throughout this review, we revisit and discuss, historically, mathematically and biologically, the different models of radiation action, providing clues for resolving the enigma of the LQ model. Copyright © 2016 Elsevier Ltd. All rights reserved.
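
    A minimal sketch of fitting the LQ model's two adjustable parameters to clonogenic-survival data; the dose-survival points are invented for illustration.

    ```python
    # Fit the Linear-Quadratic model S(D) = exp(-(alpha*D + beta*D^2)) to
    # made-up clonogenic survival data and report the alpha/beta ratio.
    import numpy as np
    from scipy.optimize import curve_fit

    def lq_survival(dose, alpha, beta):
        return np.exp(-(alpha * dose + beta * dose ** 2))

    dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # Gy
    surv = np.array([1.0, 0.75, 0.52, 0.21, 0.06, 0.013])    # surviving fraction

    (alpha, beta), _ = curve_fit(lq_survival, dose, surv, p0=(0.2, 0.02))
    print(f"alpha={alpha:.3f} /Gy, beta={beta:.3f} /Gy^2, "
          f"alpha/beta={alpha/beta:.1f} Gy")
    ```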

  17. Method and apparatus for lead-unity-lag electric power generation system

    NASA Technical Reports Server (NTRS)

    Ganev, Evgeni (Inventor); Warr, William (Inventor); Salam, Mohamed (Arif) (Inventor)

    2013-01-01

    A method employing a lead-unity-lag adjustment on a power generation system is disclosed. The method may include calculating a unity power factor point and adjusting system parameters to shift the power factor angle to substantially match the operating power angle, creating a new unity power factor point. The method may then define operating parameters for a high-reactance permanent magnet machine based on the adjusted power level.

  18. Development of Standard Fuel Models in Boreal Forests of Northeast China through Calibration and Validation

    PubMed Central

    Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu

    2014-01-01

    Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performance of these fuel models has not been tested against historical occurrences of wildfires, so their applicability requires further investigation. This paper therefore aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. The two most sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined (uncalibrated) fuel models and was shown to yield unrealistic predictions of the historical fire. The calibrated fuel models, however, significantly improved the prediction of the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate actual fires with an accuracy exceeding 56% when the calibrated fuel models are used. These fuel models can therefore be used efficiently to calculate fire behaviors, which can be helpful in forest fire management. PMID:24714164
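
    A hand-rolled sketch of Morris screening: random one-at-a-time trajectories yield elementary effects, and the mean absolute effect mu* ranks parameter sensitivity. The surrogate "spread rate" function and the parameter names are illustrative stand-ins, not FARSITE inputs.

    ```python
    # Morris elementary-effects screening on a toy fire-behavior surrogate.
    import numpy as np

    def morris_mu_star(model, n_params, n_traj=20, delta=0.25, seed=3):
        rng = np.random.default_rng(seed)
        effects = np.zeros((n_traj, n_params))
        for t in range(n_traj):
            x = rng.uniform(0.0, 1.0 - delta, size=n_params)
            y = model(x)
            for j in rng.permutation(n_params):       # one-at-a-time steps
                x_new = x.copy()
                x_new[j] += delta
                y_new = model(x_new)
                effects[t, j] = (y_new - y) / delta   # elementary effect
                x, y = x_new, y_new
        return np.abs(effects).mean(axis=0)           # mu* per parameter

    names = ["1h_loading", "bed_depth", "dead_heat", "live_SAV"]
    model = lambda x: 5.0 * x[0] + 3.0 * x[1] ** 2 + 0.3 * x[2] + 0.1 * x[3]
    for name, mu in zip(names, morris_mu_star(model, len(names))):
        print(f"{name:12s} mu* = {mu:.2f}")
    ```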

  19. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second-moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  20. Ocean Turbulence. Paper 2; One-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Howard, A.; Cheng, Y.; Dubovikov, M. S.

    1999-01-01

    We develop and test a 1-point closure turbulence model with the following features: 1) we include the salinity field and derive the expression for the vertical turbulent diffusivities of momentum K(sub m), heat K(sub h) and salt K(sub s) as functions of two stability parameters: the Richardson number R(sub i) (stratification vs. shear) and the Turner number R(sub rho) (salinity gradient vs. temperature gradient); 2) to describe turbulent mixing below the mixed layer (ML), all previous models have adopted three adjustable "background diffusivities" for momentum, heat and salt. We propose a model that avoids such adjustable diffusivities. We assume that below the ML the three diffusivities have the same functional dependence on R(sub i) and R(sub rho) as derived from the turbulence model; however, in order to compute R(sub i) below the ML, we use data of vertical shear due to wave breaking measured by Gargett et al. The procedure frees the model from adjustable background diffusivities, and indeed we employ the same model throughout the entire vertical extent of the ocean; 3) in the local model, the turbulent diffusivities K(sub m,h,s) are given as analytical functions of R(sub i) and R(sub rho); 4) the model is used in an O-GCM and several results are presented to exhibit the effect of double-diffusion processes; 5) the code is available upon request.

  1. Application of the simplex method to the optimal adjustment of the parameters of a ventilation network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamba, G.M.; Jacques, E.; Patigny, J.

    1995-12-31

    Literature is rather abundant on the topic of steady-state network analysis programs. Many versions exist, some of them with really extended facilities such as full graphical manipulation, fire simulation in motion, etc. These programs are certainly of great help to ventilation planning and often assist the ventilation engineer in operational decision making. However, whatever the efficiency of the calculation algorithms might be, their weak point remains the overall validity of the model. This numerical model, apart from the perhaps questionable application of some physical laws, depends directly on the quality of the data used to identify its most influential parameters, such as the passive (resistance) or active (fan) characteristic of each of the branches in the network. Considering the non-linear character of the problem and the great number of variables involved, finding the closest numerical model of a real mine ventilation network is without any doubt a very difficult problem. This problem, often referred to as the parameter adjustment problem, is in almost every practical case solved on an experimental and "feeling" basis. Only a few papers put forward a mathematical solution based on a least squares approach as the best-fit criterion. The aim of this paper is to examine the possibility of applying the well-known simplex method to this problem. The performance of this method and its capability to reach the global optimum corresponding to the best fit are discussed and compared with those of other methods.
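
    A minimal sketch of the parameter-adjustment idea with a derivative-free simplex search: the resistances of an invented two-branch network are adjusted so that square-law pressure drops best fit surveyed airflow/pressure data. The use of scipy's Nelder-Mead implementation and the least-squares criterion follow the paper's framing; the numbers are made up.

    ```python
    # Nelder-Mead simplex adjustment of branch resistances against survey data.
    import numpy as np
    from scipy.optimize import minimize

    Q_meas = np.array([[40.0, 25.0], [55.0, 33.0], [30.0, 18.0]])       # m^3/s
    P_meas = np.array([[480.0, 190.0], [910.0, 330.0], [265.0, 95.0]])  # Pa

    def misfit(R):
        P_model = R * Q_meas ** 2        # Atkinson's square law per branch
        return np.sum((P_model - P_meas) ** 2)

    res = minimize(misfit, x0=np.array([0.1, 0.1]), method="Nelder-Mead")
    print("fitted resistances:", res.x)  # Atkinson resistances (N s^2/m^8)
    ```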

  2. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075

  3. AC impedance study of degradation of porous nickel battery electrodes

    NASA Technical Reports Server (NTRS)

    Lenhart, Stephen J.; Macdonald, D. D.; Pound, B. G.

    1987-01-01

    AC impedance spectra of porous nickel battery electrodes were recorded periodically during charge/discharge cycling in concentrated KOH solution at various temperatures. A transmission line model (TLM) was adopted to represent the impedance of the porous electrodes, and various model parameters were adjusted in a curve fitting routine to reproduce the experimental impedances. Degradation processes were deduced from changes in model parameters with electrode cycling time. In developing the TLM, impedance spectra of planar (nonporous) electrodes were used to represent the pore wall and backing plate interfacial impedances. These data were measured over a range of potentials and temperatures, and an equivalent circuit model was adopted to represent the planar electrode data. Cyclic voltammetry was used to study the characteristics of the oxygen evolution reaction on planar nickel electrodes during charging, since oxygen evolution can affect battery electrode charging efficiency and ultimately electrode cycle life if the overpotential for oxygen evolution is sufficiently low.

  4. A new item response theory model to adjust data allowing examinee choice

    PubMed Central

    Costa, Marcelo Azevedo; Braga Oliveira, Rivert Paulo

    2018-01-01

    In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of a technical issue in obtaining satisfactory statistical estimates of examinee ability and item difficulty. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data using network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, thus establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violating those assumptions. The results show substantial improvements over the standard model in item parameter recovery. Furthermore, the accuracy was closer to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios. PMID:29389996

  5. Modeling a Material's Instantaneous Velocity during Acceleration Driven by a Detonation's Gas-Push Process

    NASA Astrophysics Data System (ADS)

    Backofen, Joseph E.

    2005-07-01

    This paper describes both the scientific findings and the model developed in order to quantify a material's instantaneous velocity versus position, time, or the expansion ratio of an explosive's gaseous products while the gas pressure is accelerating the material. The formula derived to represent this gas-push process for the 2nd stage of the BRIGS Two-Step Detonation Propulsion Model was found to fit the published experimental data available for twenty explosives very well. When the formula's two key parameters (the ratio Vinitial/Vfinal and ExpansionRatioFinal) were adjusted slightly, from the average values that closely describe many explosives to values representing measured data for a particular explosive, the formula's representation of that explosive's gas-push process improved. The time derivative of the velocity formula, representing acceleration and/or pressure, compares favorably with Jones-Wilkins-Lee equation-of-state model calculations performed using published JWL parameters.

  6. A biomechanical triphasic approach to the transport of nondilute solutions in articular cartilage.

    PubMed

    Abazari, Alireza; Elliott, Janet A W; Law, Garson K; McGann, Locksley E; Jomha, Nadr M

    2009-12-16

    Biomechanical models for biological tissues such as articular cartilage generally contain an ideal, dilute solution assumption. In this article, a biomechanical triphasic model of cartilage is described that includes nondilute treatment of concentrated solutions such as those applied in vitrification of biological tissues. The chemical potential equations of the triphasic model are modified and the transport equations are adjusted for the volume fraction and frictional coefficients of the solutes that are not negligible in such solutions. Four transport parameters, i.e., water permeability, solute permeability, diffusion coefficient of solute in solvent within the cartilage, and the cartilage stiffness modulus, are defined as four degrees of freedom for the model. Water and solute transport in cartilage were simulated using the model and predictions of average concentration increase and cartilage weight were fit to experimental data to obtain the values of the four transport parameters. As far as we know, this is the first study to formulate the solvent and solute transport equations of nondilute solutions in the cartilage matrix. It is shown that the values obtained for the transport parameters are within the ranges reported in the available literature, which confirms the proposed model approach.

  7. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    PubMed

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography under linear conditions was developed. It aims at reducing the time and protein demand for model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation exchanger and for an Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintains the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high-molecular-weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations from the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
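
    As a sketch of the kinetic half of such an approach, the classical Van Deemter relation H = A + B/u + C·u can be fitted by ordinary least squares; the plate-height data below are invented, and the paper's gradient-adjusted variant is not reproduced.

    ```python
    # Ordinary least-squares fit of the classical Van Deemter equation.
    import numpy as np

    u = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])        # linear velocity (mm/s)
    H = np.array([3.1, 2.4, 2.2, 2.3, 2.4, 2.9])        # plate height (um)

    X = np.column_stack([np.ones_like(u), 1.0 / u, u])  # design matrix [1, 1/u, u]
    (A, B, C), *_ = np.linalg.lstsq(X, H, rcond=None)
    print(f"A={A:.2f}, B={B:.2f}, C={C:.2f}; "
          f"H is minimal near u = sqrt(B/C) = {np.sqrt(B / C):.2f} mm/s")
    ```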

  8. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement: signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both provide corrections for response bias. In recent years a strong case has been advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as an MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. The topic of optimal parameter estimation on an individual-observer basis is also explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based only on the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the individual estimate is discussed as a statistical effect analogous to the improvement over the individual MLE demonstrated by the James-Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
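
    A small sketch of the pooled-data adjustment discussed above, in the spirit of James-Stein shrinkage toward a grand mean; the per-observer estimates and their common sampling variance are invented.

    ```python
    # Positive-part James-Stein-style shrinkage of individual estimates
    # toward the pooled (grand) mean; all numbers illustrative.
    import numpy as np

    theta_hat = np.array([0.62, 0.55, 0.71, 0.48, 0.66])  # per-observer estimates
    se2 = 0.004                                           # sampling variance of each

    grand = theta_hat.mean()
    k = theta_hat.size
    s2 = np.sum((theta_hat - grand) ** 2)
    w = max(0.0, 1.0 - (k - 3) * se2 / s2)  # shrinkage weight (k-3: mean estimated)
    theta_adj = grand + w * (theta_hat - grand)
    print(theta_adj)
    ```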

  9. Phenomenology of wall-bounded Newtonian turbulence.

    PubMed

    L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S

    2006-01-01

    We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains half of the total energy, whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+ = 6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.

  10. Hierarchical models and the analysis of bird survey information

    USGS Publications Warehouse

    Sauer, J.R.; Link, W.A.

    2003-01-01

    Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
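
    A minimal sketch of this kind of hierarchical analysis, assuming a normal-normal model with known sampling variances and a small Gibbs sampler; the trend estimates are invented, and a derived statistic (the number of species with increasing trends) is computed from the posterior draws as described above.

    ```python
    # Gibbs sampler for y_i ~ N(theta_i, v_i), theta_i ~ N(mu, tau^2), plus a
    # derived statistic: the count of species with positive trends.
    import numpy as np

    rng = np.random.default_rng(4)
    y = np.array([-1.2, -0.4, 0.3, 0.8, -0.9, 1.5, 0.1, -0.2])  # trends (%/yr)
    v = np.full_like(y, 0.25)                                   # sampling variances
    n = y.size

    mu, tau2 = 0.0, 1.0
    n_pos = []
    for it in range(4000):
        # theta_i | rest: precision-weighted combination of data and prior
        prec = 1.0 / v + 1.0 / tau2
        theta = rng.normal((y / v + mu / tau2) / prec, np.sqrt(1.0 / prec))
        # mu | rest (flat prior on mu)
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / n))
        # tau^2 | rest: inverse-gamma update under a weak IG(1, 1) prior
        tau2 = 1.0 / rng.gamma(1.0 + n / 2.0,
                               1.0 / (1.0 + 0.5 * np.sum((theta - mu) ** 2)))
        if it >= 1000:                       # discard burn-in
            n_pos.append(int(np.sum(theta > 0.0)))
    print("posterior mean number of increasing species:", np.mean(n_pos))
    ```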

  11. The Influence of Welding Parameters on the Nugget Formation of Resistance Spot Welding of Inconel 625 Sheets

    NASA Astrophysics Data System (ADS)

    Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh

    2015-09-01

    A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets, using the ABAQUS commercial software package. A coupled electro-thermal analysis and an uncoupled thermal-mechanical analysis are used to model the process. To improve the accuracy of the simulation, the material properties, including physical, thermal, and mechanical properties, are treated as temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results, and good agreement is observed. The FE model developed in this paper thus provides suitable predictions of weld nugget quality and shape and of temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that expensive experimental trials can be avoided. The results show that increasing welding time and current intensity leads to an increase in nugget size and electrode indentation, whereas increasing electrode tip diameter decreases nugget size and electrode indentation.

  12. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  13. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest possible gravity model based on different approaches. In this study, we present an alternative approach to deriving the Earth's gravity field, with two main objectives. First, we seek the optimal method for estimating the accelerometer parameters; second, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of the estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to that of the gravity solutions computed by the GeoForschungsZentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  14. Class Enumeration and Parameter Recovery of Growth Mixture Modeling and Second-Order Growth Mixture Modeling in the Presence of Measurement Noninvariance between Latent Classes

    PubMed Central

    Kim, Eun Sook; Wang, Yan

    2017-01-01

    Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
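
    A stripped-down sketch of class enumeration by an information criterion, using a plain Gaussian mixture (via scikit-learn) rather than a growth mixture model; data are simulated from two latent classes and BIC selects the class count.

    ```python
    # Class enumeration by minimum BIC over candidate mixture sizes.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    # Two latent classes with separated means (stand-ins for growth factors).
    data = np.vstack([rng.normal(0.0, 1.0, size=(300, 2)),
                      rng.normal(3.0, 1.0, size=(200, 2))])

    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
            for k in range(1, 6)}
    print(bics)
    print("selected number of classes:", min(bics, key=bics.get))
    ```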

  15. Symplectic no-core shell-model approach to intermediate-mass nuclei

    NASA Astrophysics Data System (ADS)

    Tobin, G. K.; Ferriss, M. C.; Launey, K. D.; Dytrych, T.; Draayer, J. P.; Dreyfuss, A. C.; Bahri, C.

    2014-03-01

    We present a microscopic description of nuclei in the intermediate-mass region, including the proximity to the proton drip line, based on a no-core shell model with a schematic many-nucleon long-range interaction with no parameter adjustments. The outcome confirms the essential role played by the symplectic symmetry to inform the interaction and the winnowing of shell-model spaces. We show that it is imperative that model spaces be expanded well beyond the current limits up through 15 major shells to accommodate particle excitations, which appear critical to highly deformed spatial structures and the convergence of associated observables.

  16. A subthreshold aVLSI implementation of the Izhikevich simple neuron model.

    PubMed

    Rangan, Venkat; Ghosh, Abhishek; Aparin, Vladimir; Cauwenberghs, Gert

    2010-01-01

    We present a circuit architecture for compact analog VLSI implementation of the Izhikevich neuron model, which efficiently describes a wide variety of neuron spiking and bursting dynamics using two state variables and four adjustable parameters. Log-domain circuit design utilizing MOS transistors in subthreshold results in high energy efficiency, with less than 1pJ of energy consumed per spike. We also discuss the effects of parameter variations on the dynamics of the equations, and present simulation results that replicate several types of neural dynamics. The low power operation and compact analog VLSI realization make the architecture suitable for human-machine interface applications in neural prostheses and implantable bioelectronics, as well as large-scale neural emulation tools for computational neuroscience.
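
    For reference, the model itself is compact enough to state in a few lines; below is a forward-Euler sketch of the Izhikevich equations with the standard regular-spiking parameter set (the aVLSI circuit implementation is, of course, not reproduced here).

    ```python
    # Forward-Euler integration of the Izhikevich simple model:
    #   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    # with reset v <- c, u <- u + d whenever v >= 30 mV.
    import numpy as np

    def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=500.0, dt=0.5):
        n = int(T / dt)
        v, u = -65.0, b * -65.0
        v_trace, spike_times = np.empty(n), []
        for i in range(n):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                 # spike cutoff: record and reset
                v_trace[i] = 30.0
                v, u = c, u + d
                spike_times.append(i * dt)
            else:
                v_trace[i] = v
        return v_trace, spike_times

    _, spikes = izhikevich()              # regular-spiking parameters
    print(f"{len(spikes)} spikes in 500 ms")
    ```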

  17. i3Drive, a 3D interactive driving simulator.

    PubMed

    Ambroz, Miha; Prebil, Ivan

    2010-01-01

    i3Drive, a wheeled-vehicle simulator, can accurately simulate vehicles of various configurations with up to eight wheels in real time on a desktop PC. It presents the vehicle dynamics as an interactive animation in a virtual 3D environment. The application is fully GUI-controlled, giving users an easy overview of the simulation parameters and letting them adjust those parameters interactively. It models all relevant vehicle systems, including the mechanical models of the suspension, power train, and braking and steering systems. The simulation results generally correspond well with actual measurements, making the system useful for studying vehicle performance in various driving scenarios. i3Drive is thus a worthy complement to other, more complex tools for vehicle-dynamics simulation and analysis.

  18. Uncertainty quantification and propagation in nuclear density functional theory

    DOE PAGES

    Schunck, N.; McDonnell, J. D.; Higdon, D.; ...

    2015-12-23

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  19. A new theory of gravity.

    NASA Technical Reports Server (NTRS)

    Ni, W.-T.

    1973-01-01

    A new relativistic theory of gravity is presented. This theory agrees with all experiments to date. It is a metric theory; it is Lagrangian-based; and it possesses a preferred frame with conformally flat space slices. With an appropriate choice of certain adjustable functions and parameters and of the cosmological model, this theory possesses precisely the same post-Newtonian limit as general relativity.

  1. Finite element design for the HPHT synthesis of diamond

    NASA Astrophysics Data System (ADS)

    Li, Rui; Ding, Mingming; Shi, Tongfei

    2018-06-01

    The finite element method is used to simulate the steady-state temperature field in a diamond synthesis cell. 2D and 3D models of the China-type cubic press, with large deformation of the synthesis cell, were established successfully and have been verified by in situ measurements of the synthesis cell. The assembly design, component design and process design for the HPHT synthesis of diamond based on the finite element simulation are presented one by one. The temperature field in a high-pressure synthesis cavity for diamond production is optimized by adjusting the cavity assembly, and the influence of the pressure-medium parameters on the temperature field is examined through a series of analyses in which the model parameters are adjusted. Furthermore, the formation mechanism of the wasteland is studied in detail. The results indicate that a wasteland inevitably exists in the synthesis sample, that the growth region of hex-octahedral diamond moves from near the heater toward the center of the synthesis sample as the power increases, and that the conditions for growing high-quality diamond are located at the center of the synthesis sample. This work offers suggestions and advice for the development and optimization of a diamond production process.

  2. Heat transfer measurements for Stirling machine cylinders

    NASA Technical Reports Server (NTRS)

    Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.

    1994-01-01

    The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat-transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratio approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as functions of an overall nondimensional parameter. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse parameter-optimization procedure to find the heat transfer in each of the two spaces. This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was the assumption that turbulence levels are constant over the cycle. Sensor data in the varying-volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.

  3. Measurement of Muon Neutrino Quasielastic Scattering on Carbon

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.

    2008-01-01

    The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, MA(eff) = 1.23 ± 0.20 GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ = 1.019 ± 0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.

  4. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly streamflow measurements needed to derive a unit hydrograph are not available; hence, one needs methods for deriving unit hydrographs for ungaged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics, and are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
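
    A minimal sketch of a gamma-PDF synthetic unit hydrograph: a gamma density in time, translated and scaled, then convolved with an effective-rainfall series to give direct runoff. The shape, scale, and lag values are illustrative; the paper estimates such parameters with PSO against observations.

    ```python
    # Gamma-PDF synthetic unit hydrograph and rainfall convolution (illustrative).
    import numpy as np
    from scipy.stats import gamma

    dt = 1.0                                   # time step (h)
    t = np.arange(0.0, 48.0, dt)
    n_shape, k_scale, lag = 3.5, 2.0, 1.0      # gamma shape/scale and translation
    uh = gamma.pdf(t - lag, a=n_shape, scale=k_scale)
    uh /= uh.sum() * dt                        # normalize to unit volume

    rain = np.zeros_like(t)
    rain[2:6] = [4.0, 10.0, 6.0, 2.0]          # effective rainfall (mm/h)
    runoff = np.convolve(rain, uh)[: t.size] * dt
    print(f"peak at hour {runoff.argmax()}, peak value {runoff.max():.2f}")
    ```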

  5. Modelling the impact of new patient visits on risk adjusted access at 2 clinics.

    PubMed

    Kolber, Michael A; Rueda, Germán; Sory, John B

    2018-06-01

    To evaluate the effect that new outpatient clinic visits have on the availability of follow-up visits for established patients when patient visit frequency is risk adjusted. Diagnosis codes for patients from 2 Internal Medicine clinics were extracted through billing data. The HHS-HCC risk-adjusted scores for each clinic were determined from the average of all clinic practitioners' profiles. These scores were then used to project encounter frequencies for established patients, and for new patients entering the clinic, based on risk and time of entry into the clinics. A distinct mean risk frequency distribution for physicians in each clinic could be defined, providing the model parameters. Within the model, follow-up visit utilization at the highest risk-adjusted visit frequencies would require more follow-up slots than currently available when new-patient no-show rates and annual patient loss are included. Patients seen at an intermediate or lower risk-adjusted visit frequency could be accommodated when new-patient no-show rates and annual patient clinic loss are considered. Value-based care is driven by control of cost while maintaining quality of care. In order to control cost, there has been a drive to increase visit frequency in primary care for those patients at increased risk. Adding new patients to primary care clinics limits the availability of follow-up slots that accrue over time for those at highest risk, thereby limiting disease and, potentially, cost control. If the frequency of established-care visits can be reduced by improved disease control, closing the practice to new patients, hiring health care extenders, or providing non-face-to-face care models, then quality and cost of care may be improved. © 2018 John Wiley & Sons, Ltd.

  6. Duloxetine for the treatment of painful diabetic peripheral neuropathy in Venezuela: economic evaluation.

    PubMed

    Carlos, Fernando; Espejel, Luis; Novick, Diego; López, Rubén; Flores, Daniel

    2015-09-25

    Painful diabetic peripheral neuropathy affects 40-50% of patients with diabetic neuropathy, leading to impaired quality of life and substantial costs. Duloxetine and pregabalin have evidence-based support, and are formally approved for controlling painful diabetic peripheral neuropathy. We used a 12-week decision model for examining painful diabetic peripheral neuropathy first-line therapy with daily doses of duloxetine 60 mg or pregabalin 300 mg, under the perspective of the Instituto Venezolano de los Seguros Sociales. We gathered model parameters from published literature and expert opinion, focusing on the magnitude of pain relief, the presence of adverse events, the possibility of withdrawal owing to intolerable adverse events or lack of efficacy, and the quality-adjusted life years expected in each strategy. We analyzed direct medical costs (expressed in Bolívares Fuertes, BsF) comprising drug acquisition as well as additional care devoted to treatment of adverse events and poor pain relief. We conducted both deterministic and probabilistic sensitivity analyses. Total expected costs per 1000 patients were BsF 1 046 146 (26%) lower with duloxetine than with pregabalin. Most of these savings (91%) correspond to the difference in the acquisition cost of each medication. Duloxetine also provided 23 more patients achieving good pain relief and a gain of about two quality-adjusted life years per 1000 treated. The model was robust to plausible changes in main parameters. Duloxetine remained the preferred option in 93.9% of the second-order Monte Carlo simulations. This study suggests duloxetine dominates (i.e., it is more effective and leads to gains in quality-adjusted life years) while remaining less costly than pregabalin for treatment of painful diabetic peripheral neuropathy.

  7. The Sensitivity of Glacial Isostatic Adjustment in West Antarctica to Lateral Variations in Earth Structure

    NASA Astrophysics Data System (ADS)

    Nield, G.; Whitehouse, P. L.; Blank, B.; van der Wal, W.; O'Donnell, J. P.; Stuart, G. W.; Lloyd, A. J.; Wiens, D.

    2017-12-01

    Accurate models of Glacial Isostatic Adjustment (GIA) are required for correcting satellite measurements of ice-mass change and for interpretation of geodetic data at the location of present and former ice sheets. Global models of GIA tend to adopt a 1-D representation of Earth structure, varying in the radial direction only. In some regions rheological parameters may differ significantly from this global average leading to bias in model predictions of present-day deformation, geoid change rates and sea-level change. The advancement of 3-D GIA modelling techniques in recent years has led to improvements in the representation of the Earth via the incorporation of laterally varying structure. This study investigates the influence of 3-D Earth structure on deformation rates in West Antarctica using a finite element GIA model with power-law rheology. We utilise datasets of seismic velocity and temperature for the crust and upper mantle with the aim of determining a data-driven Earth model, and consider the differences when compared to deformation predicted from an equivalent 1-D Earth structure.

  8. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., the density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjusting foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in the application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.

  9. Bring It On, Complexity! Present and Future of Self-Organising Middle-Out Abstraction

    NASA Astrophysics Data System (ADS)

    Mammen, Sebastian Von; Steghöfer, Jan-Philipp

    The following sections are included: * The Great Complexity Challenge * Self-Organising Middle-Out Abstraction * Optimising Graphics, Physics and Artificial Intelligence * Emergence and Hierarchies in a Natural System * The Technical Concept of SOMO * Observation of interactions * Interaction pattern recognition and behavioural abstraction * Creating and adjusting hierarchies * Confidence measures * Execution model * Learning SOMO: parameters, knowledge propagation, and procreation * Current Implementations * Awareness Beyond Virtuality * Integration and emergence * Model inference * SOMO net * SOMO after me * The Future of SOMO

  10. Modeling challenges and approaches in simulating the Jovian synchrotron radiation belts from an in-situ perspective

    NASA Astrophysics Data System (ADS)

    Adumitroaie, V.; Oyafuso, F. A.; Levin, S.; Gulkis, S.; Janssen, M. A.; Santos-Costa, D.; Bolton, S. J.

    2017-12-01

    In order to obtain credible atmospheric composition retrieval values from Jupiter's observed radiative signature via Juno's MWR instrument, it is necessary to separate as robustly as possible the contributions from three emission sources: the CMB, the planet, and the synchrotron radiation belts. The numerical separation requires a refinement, based on the in-situ data, of a higher-fidelity model for the synchrotron emission, namely the multi-parameter, multi-zonal model of Levin et al. (2001). This model employs an empirical electron energy distribution which, prior to the Juno mission, had been adjusted exclusively from VLA observations. At minimum, eight sets of perijove observations (i.e., by PJ9) have to be delivered to an inverse model for retrieval of the electron distribution parameters, with the goal of matching the synchrotron emission observed along MWR's lines of sight. The challenges and approaches taken to perform this task are discussed here. The model will be continuously improved with the availability of additional information from both the MWR and magnetometer instruments.

  11. A Novel Adjustment Method for Shearer Traction Speed through Integration of T-S Cloud Inference Network and Improved PSO

    PubMed Central

    Si, Lei; Wang, Zhongbin; Yang, Yinwei

    2014-01-01

    In order to efficiently and accurately adjust the shearer traction speed, a novel approach based on a Takagi-Sugeno (T-S) cloud inference network (CIN) and improved particle swarm optimization (IPSO) is proposed. The T-S CIN is built through the combination of a cloud model and a T-S fuzzy neural network. Moreover, the IPSO algorithm employs a parameter automation adjustment strategy and velocity resetting to significantly improve the performance of the basic PSO algorithm in global search and fine-tuning of the solutions, and the flowchart of the proposed approach is designed. Furthermore, some simulation examples are carried out, and the comparison results indicate that the proposed method is feasible and efficient and outperforms the alternatives. Finally, an industrial application example at a coal mining face is presented to demonstrate the effectiveness of the proposed system. PMID:25506358

  12. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified test fuel of the same type, the manufacturer may ask to use the parameter-adjustment provisions of 40... separate engine family. See 40 CFR 1039.801 for the definition of “noncommercial fuels”. [59 FR 31335, June...

  13. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified test fuel of the same type, the manufacturer may ask to use the parameter-adjustment provisions of 40... separate engine family. See 40 CFR 1039.801 for the definition of “noncommercial fuels”. [59 FR 31335, June...

  14. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified test fuel of the same type, the manufacturer may ask to use the parameter-adjustment provisions of 40... separate engine family. See 40 CFR 1039.801 for the definition of “noncommercial fuels”. [59 FR 31335, June...

  15. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified test fuel of the same type, the manufacturer may ask to use the parameter-adjustment provisions of 40... separate engine family. See 40 CFR 1039.801 for the definition of “noncommercial fuels”. [59 FR 31335, June...

  16. 40 CFR 89.108 - Adjustable parameters, requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... this subpart. (d) For engines that use noncommercial fuels significantly different than the specified test fuel of the same type, the manufacturer may ask to use the parameter-adjustment provisions of 40... separate engine family. See 40 CFR 1039.801 for the definition of “noncommercial fuels”. [59 FR 31335, June...

  17. A strain-, cow-, and herd-specific bio-economic simulation model of intramammary infections in dairy cattle herds.

    PubMed

    Gussmann, Maya; Kirkeby, Carsten; Græsbøll, Kaare; Farre, Michael; Halasa, Tariq

    2018-07-14

    Intramammary infections (IMI) in dairy cattle lead to economic losses for farmers, both through reduced milk production and disease control measures. We present the first strain-, cow- and herd-specific bio-economic simulation model of intramammary infections in a dairy cattle herd. The model can be used to investigate the cost-effectiveness of different prevention and control strategies against IMI. The objective of this study was to describe a transmission framework, which simulates spread of IMI causing pathogens through different transmission modes. These include the traditional contagious and environmental spread and a new opportunistic transmission mode. In addition, the within-herd transmission dynamics of IMI causing pathogens were studied. Sensitivity analysis was conducted to investigate the influence of input parameters on model predictions. The results show that the model is able to represent various within-herd levels of IMI prevalence, depending on the simulated pathogens and their parameter settings. The parameters can be adjusted to include different combinations of IMI causing pathogens at different prevalence levels, representing herd-specific situations. The model is most sensitive to varying the transmission rate parameters and the strain-specific recovery rates from IMI. It can be used for investigating both short term operational and long term strategic decisions for the prevention and control of IMI in dairy cattle herds. Copyright © 2018 Elsevier Ltd. All rights reserved.
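
    As a rough illustration of the contagious-transmission core such models build on (rates and herd size invented, and far simpler than the strain-, cow- and herd-specific machinery above), a discrete-time stochastic SIS loop can reproduce an endemic within-herd prevalence:

      # Hedged sketch: stochastic SIS dynamics for within-herd IMI spread.
      import numpy as np

      rng = np.random.default_rng(4)
      N_COWS = 200
      BETA = 1e-4      # per-day contagious transmission rate (invented)
      RECOVERY = 0.01  # per-day recovery rate (invented)

      infected, history = 10, []
      for day in range(5 * 365):
          susceptible = N_COWS - infected
          p_inf = 1.0 - np.exp(-BETA * infected)    # daily per-cow infection prob
          new_inf = rng.binomial(susceptible, p_inf)
          new_rec = rng.binomial(infected, 1.0 - np.exp(-RECOVERY))
          infected += new_inf - new_rec
          history.append(infected)
      print("endemic prevalence ~", np.mean(history[4 * 365:]) / N_COWS)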

  18. Systolic blood pressure but not electrocardiogram QRS duration is associated with heart rate variability (HRV): a cross-sectional study in rural Australian non-diabetics.

    PubMed

    Lee, Yvonne Yin Leng; Jelinek, Herbert F; McLachlan, Craig S

    2017-01-01

    A positive correlation between ECG-derived QRS duration and heart rate variability (HRV) parameters has previously been reported in young healthy adults. We note this study used a narrow QRS duration range and did not adjust for systolic blood pressure. Our aims are to investigate associations between systolic blood pressure (SBP), QRS duration, and HRV in a rural aging population. A retrospective cross-sectional population was obtained from the CSU Diabetes Screening Research Initiative database, in which 200 participants had no diabetes or pre-diabetes. SBP data were matched with ECG-derived QRS duration and HRV parameters. HRV parameters were calculated from R-R intervals. Resting 12-lead electrocardiograms were obtained from each subject using a Welch Allyn PC-based ECG system. Pearson correlation analysis revealed no statistically significant associations between HRV parameters and QRS duration. No significant mean differences in HRV parameter subgroups across defined QRS cut-offs were found. SBP > 146 mmHg was associated with increasing QRS durations; however, this association disappeared once models were adjusted for age and gender. SBP was also significantly associated with a number of HRV parameters in the Pearson correlation analysis: high frequency (HF) (p < 0.05), HFln (p < 0.02), RMSSD (p < 0.02), and the non-linear parameter ApEn (p < 0.001) were negatively correlated with increasing SBP, while the low-frequency to high-frequency ratio (LF/HF) increased with increasing SBP (p < 0.03). Our results do not support associations between ECG R-R derived HRV parameters and QRS duration in aging populations. We suggest that ventricular conduction as determined by QRS duration is independent of variations in SA-node heart rate variability.

  19. Simulation of parameters of hydraulic drive with volumetric type controller

    NASA Astrophysics Data System (ADS)

    Mulyukin, V. L.; Boldyrev, A. V.; Karelin, D. L.; Belousov, A. M.

    2017-09-01

    The article presents a mathematical model of a volumetric-type hydraulic drive controller that allows calculation of the parameters of forward and reverse motion. From the simulation results, static characteristics of the rod's speed and the force on the hydraulic cylinder rod were constructed, and the influence of the controller's swash plate angle on the shape of these characteristics is shown. The analysis of the results showed that the proposed controller allows stepless adjustment of the speed of the hydraulic cylinder rod's motion and of the force developed on the rod without the use of flow throttling.

  20. A correlation to estimate the velocity of convective currents in boilover.

    PubMed

    Ferrero, Fabio; Kozanoglu, Bulent; Arnaldos, Josep

    2007-05-08

    The mathematical model proposed by Kozanoglu et al. [B. Kozanoglu, F. Ferrero, M. Muñoz, J. Arnaldos, J. Casal, Velocity of the convective currents in boilover, Chem. Eng. Sci. 61 (8) (2006) 2550-2556] for simulating heat transfer in hydrocarbon mixtures in the process that leads to boilover requires the initial value of the convective current's velocity through the fuel layer as an adjustable parameter. Here, a correlation for predicting this parameter based on the properties of the fuel (average ebullition temperature) and the initial thickness of the fuel layer is proposed.

  1. Calibrating the orientation between a microlens array and a sensor based on projective geometry

    NASA Astrophysics Data System (ADS)

    Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan

    2016-07-01

    We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.

  2. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
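
    A compact sketch of the LHS-PRCC workflow on a toy function (not the VIIP lumped-parameter or finite-element models): Latin hypercube samples drive the model, and each parameter's partial rank correlation with the output is computed while adjusting for the linear effects of the other inputs on the ranks.

      # Hedged sketch: Latin hypercube sampling + partial rank correlation.
      import numpy as np
      from scipy.stats import qmc, rankdata

      def toy_model(x):
          # Stand-in output: dominated by x0, weaker x1 effect, minor x2 effect.
          return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * np.sin(6.0 * x[:, 2])

      sampler = qmc.LatinHypercube(d=3, seed=42)
      X = qmc.scale(sampler.random(n=500), [0, 0, 0], [1, 1, 1])
      y = toy_model(X)

      def prcc(X, y, j):
          # Rank-transform, regress out the other inputs, correlate residuals.
          R = np.column_stack([rankdata(c) for c in X.T])
          ry = rankdata(y)
          A = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
          res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
          res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
          return np.corrcoef(res_x, res_y)[0, 1]

      print([round(prcc(X, y, j), 3) for j in range(3)])   # x0 dominates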

  3. Player Modeling for Intelligent Difficulty Adjustment

    NASA Astrophysics Data System (ADS)

    Missura, Olana; Gärtner, Thomas

    In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen and increase over time strongly depends on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead to players abandoning the game as they get bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically, governed by the player's performance. Modern video games utilise a game-testing process to investigate, among other factors, the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.

  4. An Analysis of the Effects of RFID Tags on Narrowband Navigation and Communication Receivers

    NASA Technical Reports Server (NTRS)

    LaBerge, E. F. Charles

    2007-01-01

    The simulated effects of Radio Frequency Identification (RFID) tag emissions on ILS Localizer and ILS Glide Slope functions match the analytical models developed in support of DO-294B, provided that the measured peak power levels are adjusted for 1) peak-to-average power ratio, 2) effective duty cycle, and 3) spectrum analyzer measurement bandwidth. When these adjustments are made, simulated and theoretical results are in extraordinarily good agreement. The relationships hold over a large range of potential interference-to-desired signal power ratios, provided that the adjusted interference power is significantly higher than the sum of the receiver noise floor and the noise-like contributions of all other interference sources. When the duty-factor-adjusted power spectral densities are applied in the evaluation process described in Section 6 of DO-294B, most performance parameters of narrowband guidance and communications radios are unaffected by moderate levels of RFID interference. Specific conclusions and recommendations are provided.

  5. The Art and Science of Climate Model Tuning

    DOE PAGES

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...

    2017-03-31

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning. Here, we discuss how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  6. The Art and Science of Climate Model Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning. Here, we discuss how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  7. Modelling the EDLC-based Power Supply Module for a Maneuvering System of a Nanosatellite

    NASA Astrophysics Data System (ADS)

    Kumarin, A. A.; Kudryavtsev, I. A.

    2018-01-01

    The development of a model of the power supply module of a maneuvering system of a nanosatellite is described. The module is based on an EDLC battery as an energy buffer, and the choice of EDLC is described. Experiments are conducted to provide data for the model. Simulation of the power supply module is performed for the battery charging and discharging processes. The difference between simulation and experiment does not exceed 0.5% for charging and 10% for discharging. The developed model can be used in early design and to adjust charger and load parameters, and it can be expanded to represent the entire power system.

  8. [Development of a virtual model of fibro-bronchoscopy].

    PubMed

    Solar, Mauricio; Ducoing, Eugenio

    2011-09-01

    A virtual model of fibro-bronchoscopy is reported. The virtual model represents the trachea and the bronchi in 3D, creating a virtual world of the bronchial tree. The bronchoscope is modeled to traverse the bronchial tree, imitating the displacement and rotation of the real bronchoscope. The parameters of the virtual model were gradually adjusted according to expert opinion and allowed the training of specialists with a highly realistic virtual bronchoscope. The virtual bronchial tree provides realistic cues regarding the movement of the bronchoscope, creating the illusion that the virtual instrument behaves as the real one, with all the benefits in costs that this implies.

  9. Spectroscopic properties of Cr3+ ions at the defect sites in cubic fluoroperovskite crystals

    NASA Astrophysics Data System (ADS)

    Wan-Lun, Yu; Xin-Min, Zhang; La-Xun, Yang; Bao-Qing, Zen

    1994-09-01

    The spin-Hamiltonian (SH) parameters for the 4A2(F) state of 3d3/3d7 ions in tetragonal and trigonal symmetries are studied as a function of the crystal-field (CF) parameters, based on simultaneous diagonalization of the electrostatic, CF, and spin-orbit-coupling Hamiltonians. The results obtained are compared with those of earlier works. The CF and SH parameters of Cr3+ ions at the A and M vacancies and at codoped Li+ sites in the cubic fluoroperovskites AMF3 are investigated by taking into account the contributions of the defects and the defect-induced lattice distortion. Suitable models are proposed for the lattice distortion, and the distortion parameters are obtained by adjusting them to fit the observed data for the SH parameters and the energy of the first excited state.

  10. Cost-effectiveness analysis of a patient-centered care model for management of psoriasis.

    PubMed

    Parsi, Kory; Chambers, Cindy J; Armstrong, April W

    2012-04-01

    Cost-effectiveness analyses help policymakers make informed decisions regarding funding allocation of health care resources. Cost-effectiveness analysis of technology-enabled models of health care delivery is necessary to assess sustainability of novel online, patient-centered health care models. We sought to compare cost-effectiveness of conventional in-office care with a patient-centered, online model for follow-up treatment of patients with psoriasis. Cost-effectiveness analysis was performed from a societal perspective on a randomized controlled trial comparing a patient-centered online model with in-office visits for treatment of patients with psoriasis during a 24-week period. Quality-adjusted life expectancy was calculated using the life table method. Costs were generated from the original study parameters and national averages for salaries and services. No significant difference existed in the mean change in Dermatology Life Quality Index scores between the two groups (online: 3.51 ± 4.48 and in-office: 3.88 ± 6.65, P value = .79). Mean improvement in quality-adjusted life expectancy was not significantly different between the groups (P value = .93), with a gain of 0.447 ± 0.48 quality-adjusted life years for the online group and a gain of 0.463 ± 0.815 quality-adjusted life years for the in-office group. The cost of follow-up psoriasis care with online visits was 1.7 times less than the cost of in-person visits ($315 vs $576). Variations in travel time existed among patients depending on their distance from the dermatologist's office. From a societal perspective, the patient-centered online care model appears to be cost saving, while maintaining similar effectiveness to standard in-office care. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  11. Using Remote Sensing and High-Resolution Digital Elevation Models to Identify Potential Erosional Hotspots Along River Channels During High Discharge Storm Events

    NASA Astrophysics Data System (ADS)

    Orland, E. D.; Amidon, W. H.

    2017-12-01

    As global warming intensifies, large precipitation events and associated floods are becoming increasingly common. Channel adjustments during floods can occur by both erosion and deposition of sediment, often damaging infrastructure in the process. There is thus a need for predictive models that can help managers identify river reaches that are most prone to adjustment during storms. Because rivers in post-glacial landscapes often flow over a mixture of bedrock and alluvial substrates, the identification of bedrock vs. alluvial channel reaches is an important first step in predicting vulnerability to channel adjustment during flood events, especially because bedrock channels are unlikely to adjust significantly, even during floods. This study develops a semi-automated approach to predicting channel substrate using a high-resolution LiDAR-derived digital elevation model (DEM). The study area is the Middlebury River in Middlebury, VT, a well-studied watershed with a wide variety of channel substrates, including reaches with documented channel adjustments during recent flooding events. Multiple metrics were considered for reference, such as channel width and drainage area, but the study utilized channel slope as the key parameter for identifying morphological variations within the Middlebury River. Using data extracted from the DEM, a power law was fit to selected slope and drainage-area values for each branch in order to model idealized slope-drainage area relationships, which were then compared with measured slope-drainage area relationships. The difference between measured and predicted slope (called delta-slope) is shown to help predict river channel substrate. Compared with field observations, higher delta-slope values correlate with more stable, boulder-rich channels or bedrock gorges; conversely, the lowest delta-slope values correlate with flat, sediment-rich alluvial channels. The delta-slope metric thus serves as a reliable first-order predictor of channel substrate in the Middlebury River, which in turn can be used to help identify local reaches that are most vulnerable to channel adjustment during large flood events.
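
    The core of the delta-slope computation reduces to a log-log regression; the sketch below uses invented slope and drainage-area values rather than the Middlebury LiDAR data.

      # Hedged sketch: delta-slope = measured slope - power-law predicted slope.
      import numpy as np

      drainage_area = np.array([2.1e6, 5.3e6, 1.2e7, 3.8e7, 9.0e7])    # m^2
      measured_slope = np.array([0.110, 0.072, 0.055, 0.021, 0.018])   # m/m

      # Fit log S = log k - theta * log A, the idealized slope-area relation.
      slope_coef, intercept = np.polyfit(np.log(drainage_area),
                                         np.log(measured_slope), 1)
      predicted = np.exp(intercept) * drainage_area ** slope_coef

      delta_slope = measured_slope - predicted
      # Strongly positive values flag steeper-than-ideal (bedrock/boulder)
      # reaches; strongly negative values flag flat, sediment-rich reaches.
      print(np.round(delta_slope, 4))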

  12. Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Billman, C.; Harding, A. K.

    2013-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
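
    A toy random-walk Metropolis sketch in the spirit of this exploration (the forward model mapping luminosity-law exponents to detected counts, and every number in it, is an invented stand-in):

      # Hedged sketch: Metropolis-Hastings over two luminosity-law exponents.
      import numpy as np

      rng = np.random.default_rng(7)
      target = np.array([60.0, 25.0])   # pretend radio / gamma-ray detections

      def predicted_counts(alpha, beta):
          # Invented forward model: exponents -> expected detections.
          return np.array([80.0 * np.exp(-0.5 * (alpha - 1.0) ** 2),
                           40.0 * np.exp(-0.5 * (beta - 0.7) ** 2)])

      def log_like(params):
          mu = predicted_counts(*params)
          return -0.5 * np.sum((target - mu) ** 2 / mu)   # Gaussian approximation

      chain = [np.array([0.5, 0.5])]
      ll = log_like(chain[-1])
      for _ in range(5000):
          proposal = chain[-1] + 0.05 * rng.standard_normal(2)
          ll_prop = log_like(proposal)
          if np.log(rng.random()) < ll_prop - ll:          # accept/reject step
              chain.append(proposal); ll = ll_prop
          else:
              chain.append(chain[-1])
      print("posterior mean exponents:", np.mean(chain[1000:], axis=0))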

  13. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  14. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior, involving occasional anomalously correlated rain events, present a challenge for the model.

  15. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs, interpolated by kriging, was calibrated on 2003 data and validated on 2004 data, simulating stream flow with a Nash-Sutcliffe Efficiency above 0.7 in both cases. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0) and the same calibrated parameters, the model performance for 2003 deteriorated, but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. By adjusting the CPC-RFE2.0 with a seasonal, monthly and 7-day moving average ratio, an improvement in model performance was achieved. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
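
    The monthly ratio adjustment reduces to scaling each month's satellite values by that month's gauge-to-satellite ratio; the sketch below uses invented daily series.

      # Hedged sketch: monthly-ratio bias adjustment of satellite rainfall.
      import numpy as np

      months = np.array([6, 6, 6, 7, 7, 7, 8, 8, 8])    # month index per value
      gauge = np.array([12.0, 0.0, 30.0, 5.0, 18.0, 22.0, 0.0, 9.0, 14.0])
      satellite = np.array([8.0, 1.0, 21.0, 9.0, 12.0, 30.0, 2.0, 6.0, 10.0])

      adjusted = satellite.copy()
      for m in np.unique(months):
          sel = months == m
          ratio = gauge[sel].sum() / satellite[sel].sum()   # monthly bias factor
          adjusted[sel] *= ratio
      print(np.round(adjusted, 1))   # monthly totals now match the gauge totals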

  16. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    USGS Publications Warehouse

    Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs, interpolated by kriging, was calibrated on 2003 data and validated on 2004 data, simulating stream flow with a Nash-Sutcliffe Efficiency above 0.7 in both cases. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC_RFE2.0) and the same calibrated parameters, the model performance for 2003 deteriorated, but improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. By adjusting the CPC_RFE2.0 with a seasonal, monthly and 7-day moving average ratio, an improvement in model performance was achieved. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in a significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction.

  17. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  18. A new car-following model for autonomous vehicles flow with mean expected velocity field

    NASA Astrophysics Data System (ADS)

    Wen-Xing, Zhu; Li-Dong, Zhang

    2018-02-01

    Due to the development of modern scientific technology, autonomous vehicles may be able to connect with each other and share the information collected by each vehicle. An improved forward-considering car-following model with a mean expected velocity field is proposed to describe autonomous vehicle flow behavior. The new model has three key parameters: adjustable sensitivity, strength factor, and mean expected velocity field size. Two lemmas and one theorem are proven as criteria for judging the stability of homogeneous autonomous vehicle flow. Theoretical results show that larger parameter values mean larger stability regions. A series of numerical simulations was carried out to check the stability and fundamental diagram of autonomous flow. From the numerical simulation results, the profiles, hysteresis loops, and density waves of the autonomous vehicle flow are exhibited. The results show that with increased sensitivity, strength factor, or field size, the traffic jam is suppressed effectively, which is well in accordance with the theoretical results. Moreover, the fundamental diagrams corresponding to the three parameters were obtained. They demonstrate that these parameters play almost the same role in traffic flux: below the critical density, the larger the parameter, the greater the flux; above the critical density, the tendency reverses. In general, the three parameters have a great influence on the stability and jam state of autonomous vehicle flow.

  19. One-Dimensional Transport with Inflow and Storage (OTIS): A Solute Transport Model for Streams and Rivers

    USGS Publications Warehouse

    Runkel, Robert L.

    1998-01-01

    OTIS is a mathematical simulation model used to characterize the fate and transport of water-borne solutes in streams and rivers. The governing equation underlying the model is the advection-dispersion equation with additional terms to account for transient storage, lateral inflow, first-order decay, and sorption. This equation and the associated equations describing transient storage and sorption are solved using a Crank-Nicolson finite-difference solution. OTIS may be used in conjunction with data from field-scale tracer experiments to quantify the hydrologic parameters affecting solute transport. This application typically involves a trial-and-error approach wherein parameter estimates are adjusted to obtain an acceptable match between simulated and observed tracer concentrations. Additional applications include analyses of nonconservative solutes that are subject to sorption processes or first-order decay. OTIS-P, a modified version of OTIS, couples the solution of the governing equation with a nonlinear regression package. OTIS-P determines an optimal set of parameter estimates that minimize the squared differences between the simulated and observed concentrations, thereby automating the parameter estimation process. This report details the development and application of OTIS and OTIS-P. Sections of the report describe model theory, input/output specifications, sample applications, and installation instructions.
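
    The OTIS-P coupling amounts to wrapping a transport simulator in nonlinear least squares. In the hedged sketch below, a closed-form 1-D advection-dispersion solution stands in for the full Crank-Nicolson model with transient storage; parameters and data are invented.

      # Hedged sketch: estimate transport parameters by nonlinear regression.
      import numpy as np
      from scipy.optimize import least_squares

      def adv_disp(t, mass, velocity, dispersion, x=100.0):
          # Instantaneous-injection solution observed x metres downstream.
          return (mass / np.sqrt(4.0 * np.pi * dispersion * t)
                  * np.exp(-(x - velocity * t) ** 2 / (4.0 * dispersion * t)))

      t = np.linspace(1.0, 400.0, 120)   # seconds after tracer injection
      rng = np.random.default_rng(3)
      observed = adv_disp(t, 50.0, 0.4, 1.5) + 5e-4 * rng.standard_normal(t.size)

      fit = least_squares(lambda p: adv_disp(t, *p) - observed,
                          x0=[30.0, 0.3, 1.0],
                          bounds=([1.0, 0.01, 0.01], [200.0, 2.0, 20.0]))
      print("mass, velocity, dispersion:", np.round(fit.x, 3))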

  20. Manipulating transmission and reflection properties of a photonic crystal doped with quantum dot nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solookinejad, G.; Panahi, M.; Sangachin, E. A.

    The transmission and reflection properties of incident light in a defect dielectric structure are studied theoretically. The defect structure consists of donor and acceptor quantum dot nanostructures embedded in a photonic crystal. It is shown that the transmission and reflection properties of incident light can be controlled by adjusting the corresponding parameters of the system. The role of dipole–dipole interaction is considered as a new parameter in our calculations. It is noted that the features of the transmission and reflection curves can be adjusted in the presence of dipole–dipole interaction. It is found that the absorption of a weak probe light can be converted to probe amplification in the presence of dipole–dipole interaction. Moreover, the group velocity of transmitted and reflected probe light is discussed in detail in the absence and presence of dipole–dipole interaction. Our proposed model can be used for new all-optical devices based on photonic materials doped with nanoparticles.

  1. Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.

    1996-12-31

    A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of the globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
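
    A stripped-down version of the spring-mass picture can be simulated directly. In the sketch below all parameters are invented (not the calibrated values), and the velocity-dependent detached mass and second criterion are collapsed into a fixed transfer fraction:

      # Hedged sketch: droplet growth on a spring with a detachment criterion.
      DT = 1e-5          # s, integration step
      MELT_RATE = 5e-4   # kg/s, electrode melting rate (invented)
      K_SPRING = 4.0     # N/m, effective surface-tension "spring" (invented)
      G = 9.81           # m/s^2
      X_CRIT = 1e-4      # m, displacement criterion for detachment (invented)

      m, x, v, t = 4e-6, 0.0, 0.0, 0.0
      detach_times = []
      while t < 1.0:
          m += MELT_RATE * DT              # mass added as the electrode melts
          a = G - (K_SPRING / m) * x       # gravity vs spring restoring force
          v += a * DT                      # semi-implicit Euler step
          x += v * DT
          if x > X_CRIT:                   # detachment criterion met
              detach_times.append(t)
              m, x, v = 0.1 * m, 0.0, 0.0  # 90% of the droplet transfers
          t += DT
      print(f"simulated droplet frequency ~ {len(detach_times)} Hz")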

  2. The comparison study among several data transformations in autoregressive modeling

    NASA Astrophysics Data System (ADS)

    Setiyowati, Susi; Waluyo, Ramdhani Try

    2015-12-01

    In finance, the adjusted close of stocks is used to observe the performance of a company. Extreme prices, which may increase or decrease drastically, are of particular concern since they can signal bankruptcy. As a preventive action, investors have to forecast future stock prices comprehensively. For that purpose, time series analysis is one of the statistical methods that can be applied, for both stationary and non-stationary processes. Since the variability of stock prices tends to be large and extreme values are common, data transformation is necessary so that time series models, i.e., autoregressive models, can be applied appropriately. One popular transformation in finance is the return model, in addition to the log ratio and other Tukey ladder transformations. In this paper these transformations are applied to stationary AR models and non-stationary ARCH and GARCH models through simulations with varying parameters. As a result, this work presents a suggestion table that shows the behavior of each transformation under various parameter conditions and models. It is confirmed that which transformation performs better depends on the type of data distribution; the parameter conditions also have a significant influence.
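
    As an illustration of the transformation step (with synthetic prices, and an OLS AR(1) fit standing in for the full AR/ARCH/GARCH comparison), consider:

      # Hedged sketch: return-type transformations followed by an AR(1) fit.
      import numpy as np

      rng = np.random.default_rng(5)
      prices = 100.0 * np.exp(np.cumsum(0.001 + 0.02 * rng.standard_normal(500)))

      simple_returns = prices[1:] / prices[:-1] - 1.0    # return transformation
      log_returns = np.diff(np.log(prices))              # log-ratio transformation

      def fit_ar1(z):
          # OLS fit of z_t = c + phi * z_{t-1} + e_t.
          X = np.column_stack([np.ones(z.size - 1), z[:-1]])
          c, phi = np.linalg.lstsq(X, z[1:], rcond=None)[0]
          return c, phi

      for name, series in [("simple returns", simple_returns),
                           ("log returns", log_returns)]:
          c, phi = fit_ar1(series)
          print(f"{name}: c = {c:.5f}, phi = {phi:.3f}")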

  3. Synchronous Control Method and Realization of Automated Pharmacy Elevator

    NASA Astrophysics Data System (ADS)

    Liu, Xiang-Quan

    First, the control method for the elevator's synchronous motion is provided, and a synchronous control structure for dual servo motors based on PMAC is established. Second, the synchronous control program of the elevator is implemented using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors were adjusted. Experiments prove that the control method has high stability and reliability.

  4. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. The operating parameters, such as power density and flow rates, and design parameters, such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers of cost reduction for the various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analysis further guides the suitability of the various chemistries for different applications.

  5. The management of patients with T1 adenocarcinoma of the low rectum: a decision analysis.

    PubMed

    Johnston, Calvin F; Tomlinson, George; Temple, Larissa K; Baxter, Nancy N

    2013-04-01

    Decision making for patients with T1 adenocarcinoma of the low rectum, when treatment options are limited to a transanal local excision or abdominoperineal resection, is challenging. The aim of this study was to develop a contemporary decision analysis to assist patients and clinicians in balancing the goals of maximizing life expectancy and quality of life in this situation. We constructed a Markov-type microsimulation in open-source software. Recurrence rates and quality-of-life parameters were elicited by systematic literature reviews. Sensitivity analyses were performed on key model parameters. Our base case for analysis was a 65-year-old man with low-lying T1N0 rectal cancer. We determined the sensitivity of our model for sex, age up to 80, and T stage. The main outcome measured was quality-adjusted life-years. In the base case, selecting transanal local excision over abdominoperineal resection resulted in a loss of 0.53 years of life expectancy but a gain of 0.97 quality-adjusted life-years. One-way sensitivity analysis demonstrated a health state utility value threshold for permanent colostomy of 0.93. This value ranged from 0.88 to 1.0 based on tumor recurrence risk. There were no other model sensitivities. Some model parameter estimates were based on weak data. In our model, transanal local excision was found to be the preferable approach for most patients. An abdominoperineal resection offers a 3.5% longer life expectancy, but this advantage is lost when the quality-of-life reduction reported by stoma patients is weighed in. The minority group in whom abdominoperineal resection is preferred consists of those who are unwilling to sacrifice 7% of their life expectancy to avoid a permanent stoma, estimated to be approximately 25% of all patients. The threshold increases to 12% of life expectancy for high-risk tumors. No other factors were found to be relevant to the decision.
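
    The colostomy-utility threshold can be illustrated with back-of-the-envelope arithmetic (life expectancies invented; the full microsimulation, which also weighs recurrence, yields the 0.93 reported above):

      # Hedged sketch: break-even stoma utility from the life-expectancy gap.
      le_tle = 20.0             # years, life expectancy after local excision
      le_apr = le_tle * 1.035   # abdominoperineal resection adds ~3.5%
      u_no_stoma = 1.0

      # APR wins only while le_apr * u_stoma > le_tle * u_no_stoma.
      u_threshold = le_tle * u_no_stoma / le_apr
      print(f"break-even stoma utility: {u_threshold:.3f}")   # ~0.966 here

      # Patients rating life with a permanent stoma below this utility gain
      # more quality-adjusted life-years from transanal local excision.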

  6. The evolution of root zone moisture storage capacities after deforestation: a step towards hydrological predictions under change?

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Hutton, Christopher; Pechlivanidis, Ilias; Capell, René; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; McGuire, Kevin; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

The moisture storage available to vegetation is a key parameter in the hydrological functioning of ecosystems. This parameter, the root zone storage capacity, determines the partitioning between runoff and transpiration, but is impossible to observe directly at the catchment scale. In this research, data from the experimental forests of HJ Andrews (Oregon, USA) and Hubbard Brook (New Hampshire, USA) were used to test the hypotheses that: (1) the root zone storage capacity changes significantly after deforestation; (2) changes in the root zone storage capacity can to a large extent explain post-treatment changes to the hydrological regimes; and (3) a time-dynamic formulation of the root zone storage can improve the performance of a hydrological model. First, root zone storage capacities were estimated with a simple water-balance method: the maximum difference between cumulative rainfall and estimated transpiration was determined, which can be considered a proxy for root zone storage capacity. These values were compared with root zone storage capacities obtained from four conceptual models (HYPE, HYMOD, FLEX, TUW) calibrated for consecutive 2-year windows. Both methods showed a sharp decline in root zone storage capacity after deforestation, followed by a gradual recovery signal. A trend analysis found that these recovery periods took between 5 and 13 years for the different catchments. Finally, one of the models was adjusted to allow a time-dynamic formulation of root zone storage capacity. The adjusted model showed improved performance as evaluated by 28 hydrological signatures, such as rising limb density or peak flows. This research thus clearly shows the time-dynamic character of a crucial parameter that is often assumed constant in time. Root zone storage capacities are strongly affected by deforestation, leading to changes in hydrological regimes, and time-dynamic formulations of root zone storage are therefore necessary in systems under change.
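
    A minimal sketch of the water-balance proxy described above, under the assumption that the deficit between cumulative transpiration and cumulative rainfall accumulates day by day and resets once rainfall has replenished the store (a simple mass-curve scheme); the daily series are synthetic.

    ```python
    import numpy as np

    def root_zone_storage_proxy(rain: np.ndarray, transp: np.ndarray) -> float:
        """Largest accumulated shortfall of rainfall relative to estimated
        transpiration: a proxy for root zone storage capacity (mm)."""
        deficit = 0.0
        max_deficit = 0.0
        for p, t in zip(rain, transp):
            deficit = max(0.0, deficit + t - p)   # reset when fully replenished
            max_deficit = max(max_deficit, deficit)
        return max_deficit

    rng = np.random.default_rng(42)
    rain = rng.exponential(2.0, 365)      # synthetic daily rainfall (mm/day)
    transp = np.full(365, 2.5)            # synthetic daily transpiration (mm/day)
    print(f"storage capacity proxy: {root_zone_storage_proxy(rain, transp):.1f} mm")
    ```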

  7. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    NASA Astrophysics Data System (ADS)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (the Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology but rarely adopted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain, from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud that represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters via their covariance matrix, and it allows uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the Metsähovi Fundamental Station, Finland, is used, where classical geodetic observations are combined with GNSS data.
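
    A minimal sketch of the second step: perturb the raw observations according to their GUM-style standard uncertainties, re-run the least-squares adjustment for each realization, and collect the estimates as a point cloud. The tiny levelling network, observation values, and uncertainties are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 1D levelling network: three height differences observed
    # between benchmarks P1, P2 and a fixed datum (unknowns: heights of P1, P2).
    A = np.array([[1.0, 0.0],     # datum -> P1
                  [0.0, 1.0],     # datum -> P2
                  [-1.0, 1.0]])   # P1 -> P2
    obs = np.array([10.02, 12.51, 2.48])    # observed height differences (m)
    sigma = np.array([0.01, 0.01, 0.015])   # GUM-style standard uncertainties (m)

    cloud = []
    for _ in range(10_000):
        y = obs + rng.normal(0.0, sigma)    # one realization of the raw data
        x_hat, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
        cloud.append(x_hat)                 # weighted LS adjustment per realization
    cloud = np.array(cloud)                 # point cloud of coordinate estimates

    print("mean heights:", cloud.mean(axis=0))
    print("empirical std:", cloud.std(axis=0))   # uncertainty without a covariance matrix
    ```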

  8. Glacial isostatic adjustment using GNSS permanent stations and GIA modelling tools

    NASA Astrophysics Data System (ADS)

    Kollo, Karin; Spada, Giorgio; Vermeer, Martin

    2013-04-01

Glacial Isostatic Adjustment (GIA) affects the Earth's mantle in areas that were once ice covered, and the process is still ongoing. In this contribution we focus on GIA processes in the Fennoscandian and North American uplift regions, using horizontal and vertical uplift rates from Global Navigation Satellite System (GNSS) permanent stations: the BIFROST dataset (Lidberg et al., 2010) for Fennoscandia and the dataset of Sella et al. (2007) for North America. We perform GIA modelling with the SELEN program (Spada and Stocchi, 2007) and vary ice model parameters in space in order to find the ice model that best fits the uplift values obtained from GNSS time series analysis. In the GIA modelling, the ice models ICE-5G (Peltier, 2004) and ANU05 (Fleming and Lambeck, 2004, and references therein) were used. As reference, the velocity field from GNSS permanent station time series was used for both target areas. First, the sensitivity to the maximum harmonic degree was tested in order to reduce computation time, using nominal viscosity values and pre-defined lithosphere thickness models. The main criterion for choosing a suitable harmonic degree was the chi-square fit: if the error measure differs by less than 10%, the lower harmonic degree may be used. From this test, a maximum harmonic degree of 72 was chosen, as larger values did not significantly modify the results while computation time was kept reasonable. Second, the GIA computations were performed to find the model that best fits the GNSS-based velocity field in the target areas. To find the best-fitting Earth viscosity parameters, different viscosity profiles were tested and their impact on the horizontal and vertical velocity rates from the GIA modelling was studied. For every tested model, the chi-square misfit of the horizontal, vertical and three-dimensional velocity rates from the reference model was computed (Milne et al., 2001). Finally, the best-fitting models from the GIA modelling were compared with the rates obtained from GNSS data. Keywords: Fennoscandia, North America, land uplift, glacial isostatic adjustment, visco-elastic modelling, BIFROST. References: Lidberg, M., Johansson, J., Scherneck, H.-G. and Milne, G. (2010). Recent results based on continuous GPS observations of the GIA process in Fennoscandia from BIFROST. Journal of Geodynamics, 50, pp. 8-18. Sella, G. F., Stein, S., Dixon, T. H., Craymer, M., James, T. S., Mazzotti, S. and Dokka, R. K. (2007). Observations of glacial isostatic adjustment in "stable" North America with GPS. Geophysical Research Letters, 34, L02306. Spada, G. and Stocchi, P. (2007). SELEN: A Fortran 90 program for solving the "sea-level equation". Computers & Geosciences, 33, pp. 538-562. Peltier, W. R. (2004). Global glacial isostasy and the surface of the ice-age Earth: The ICE-5G (VM2) model and GRACE. Annu. Rev. Earth Planet. Sci., 32, pp. 111-149. Fleming, K. and Lambeck, K. (2004). Constraints on the Greenland Ice Sheet since the Last Glacial Maximum from sea-level observations and glacial-rebound models. Quaternary Science Reviews, 23, pp. 1053-1077. Milne, G. A., Davis, J. L., Mitrovica, J. X., Scherneck, H.-G., Johansson, J. M., Vermeer, M. and Koivula, H. (2001). Space-geodetic constraints on glacial isostatic adjustment in Fennoscandia. Science, 291, pp. 2381-2385.
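
    A minimal sketch of the chi-square misfit criterion used above to rank candidate Earth models against GNSS velocities; station rates, uncertainties, and model names are hypothetical.

    ```python
    import numpy as np

    def chi_square(v_obs: np.ndarray, v_model: np.ndarray, sigma: np.ndarray) -> float:
        """Reduced chi-square misfit between GNSS-observed and GIA-modelled
        velocity rates, normalised by observational uncertainties."""
        return float(np.sum(((v_obs - v_model) / sigma) ** 2) / len(v_obs))

    v_gnss = np.array([8.1, 5.3, 9.7])        # hypothetical vertical rates (mm/yr)
    sig = np.array([0.4, 0.5, 0.3])           # hypothetical uncertainties (mm/yr)
    candidates = {"VM2-like": np.array([7.6, 5.9, 9.2]),
                  "stiff-LM": np.array([6.8, 4.5, 8.1])}
    best = min(candidates, key=lambda k: chi_square(v_gnss, candidates[k], sig))
    print("best-fitting Earth model:", best)
    ```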

  9. The influence of ligament modelling strategies on the predictive capability of finite element models of the human knee joint.

    PubMed

    Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico

    2017-12-08

In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models approximate the anatomy more closely and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on the literature or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint, and evaluated the effect of literature-based versus specimen-specific optimized material parameters. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either by springs or by continuum representations. In the spring representation, collateral ligaments were each modelled with three single-element bundles and cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. The continuum modelling approach yielded more accurate contact outcome variables than the spring representation with two (cruciate) and three (collateral) single-element bundles. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Predictors of femtosecond laser intrastromal astigmatic keratotomy efficacy for astigmatism management in cataract surgery.

    PubMed

    Day, Alexander C; Stevens, Julian D

    2016-02-01

To evaluate the factors associated with the efficacy of femtosecond laser intrastromal astigmatic keratotomy (AK). Moorfields Eye Hospital, London, United Kingdom. Prospective case series. Eyes having intrastromal AK for corneal cylinder correction were analyzed. Preoperative biometric parameters included axial length, anterior chamber depth, central corneal thickness, and Ocular Response Analyzer corneal hysteresis (CH) and corneal resistance factor (CRF). Preoperative and 1-month postoperative corneal keratometry was measured using the Topcon KR8100PA topographer-autorefractor. Astigmatic analyses were performed using the Alpins method. The study analyzed 319 eyes of 213 patients with a mean target induced astigmatism of 1.24 diopters (D) ± 0.44 (SD), mean surgically induced astigmatism (SIA) of 0.71 ± 0.43 D, and mean difference vector of 0.79 ± 0.41 D. Two multiple regression models were constructed for SIA prediction. Model 1, based on previous manual limbal relaxing incision parameters, confirmed age and astigmatism meridian (with/against the rule and oblique) to be associated with SIA, in addition to AK arc length, AK start depth, and preoperative corneal cylinder magnitude. Model 2, which additionally considered the other parameters, found that only a lower CH (-0.06 DC per unit CH), a higher CRF (0.04 D per unit CRF), and the astigmatism meridian were independent predictors of greater SIA (after adjusting for intrastromal AK arc length, start depth, and preoperative corneal cylinder). With-the-rule astigmatism was associated with a 0.13 D higher SIA than against-the-rule astigmatism, holding all other variables constant. Corneal biomechanical parameters and astigmatism meridian were independent predictors of femtosecond laser intrastromal AK efficacy even after adjusting for AK arc length, AK start depth, and preoperative corneal cylinder. Dr. Stevens is a previous consultant to Optimedica, Inc. which is now part of Abbott Medical Optics, Inc. Drs. Stevens and Day have no financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Present-Day Vegetation Helps Quantifying Past Land Cover in Selected Regions of the Czech Republic

    PubMed Central

    Abraham, Vojtěch; Oušková, Veronika; Kuneš, Petr

    2014-01-01

The REVEALS model is a tool for recalculating pollen data into vegetation abundances on a regional scale. We explored the general effect of selected parameters by performing simulations and ascertained the best model setting for the Czech Republic using the shallowest samples from 120 fossil sites and data on actual regional vegetation (60 km radius). Vegetation proportions of 17 taxa were obtained by combining the CORINE Land Cover map with forest inventories, agricultural statistics and habitat mapping data. Our simulation shows that changing the site radius for all taxa substantially affects REVEALS estimates of taxa with heavy or light pollen grains. Decreasing the site radius has a similar effect as increasing the wind speed parameter. However, adjusting the site radius to 1 m for local taxa only (even taxa with light pollen) yields lower, more correct estimates despite their high pollen signal. Increasing the background radius does not affect the estimates significantly. Our comparison of estimates with actual vegetation in seven regions shows that the most accurate relative pollen productivity estimates (PPEs) come from Central Europe and Southern Sweden. The initial simulation and pollen data yielded unrealistic estimates for Abies under the default setting of the wind speed parameter (3 m/s). We therefore propose a setting of 4 m/s, which corresponds to the spring average in most regions of the Czech Republic studied. Ad hoc adjustment of PPEs with this setting improves the match 3–4-fold. We consider these values (apart from four exceptions) to be appropriate because they lie within the ranges of the standard errors and thus remain tied to the original PPEs. Setting a 1 m radius for local taxa (Alnus, Salix, Poaceae) significantly improves the match between estimates and actual vegetation. However, further adjustments to PPEs exceed the ranges of the original values, so their relevance is uncertain. PMID:24936973

  12. Temperature-viscosity models reassessed.

    PubMed

    Peleg, Micha

    2017-05-04

The temperature effect on the viscosity of liquid and semi-liquid foods has traditionally been described by the Arrhenius equation, a few other mathematical models, and more recently by the WLF and VTF (or VFT) equations. The essence of the Arrhenius equation is that the logarithm of the viscosity is proportional to the reciprocal of the absolute temperature and governed by a single parameter, namely, the energy of activation. However, if the absolute temperature in K in the Arrhenius equation is replaced by T + b, where both T and the adjustable b are in °C, the result is a two-parameter model with superior fit to experimental viscosity-temperature data. This modified version of the Arrhenius equation is also mathematically equivalent to the WLF and VTF equations, which are known to be equal to each other. Thus, despite their dissimilar appearances, all three equations are essentially the same model, and when used to fit experimental temperature-viscosity data they render exactly the same very high regression coefficient. It is shown that three new hybrid two-parameter mathematical models, whose formulation bears little resemblance to any of the conventional models, can also achieve excellent fit with r² ≈ 1. This is demonstrated by comparing the various models' regression coefficients for published viscosity-temperature relationships of 40% sucrose solution, soybean oil, and 70°Bx pear juice concentrate at different temperature ranges. Also compared are reconstructed temperature-viscosity curves using parameters calculated directly from 2 or 3 data points and fitted curves obtained by nonlinear regression using a larger number of experimental viscosity measurements.
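
    A minimal sketch of fitting the modified Arrhenius form described above, ln η = ln A + B/(T + b) with T in °C, which is mathematically equivalent to the VTF/WLF forms. The viscosity data are synthetic, not from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def modified_arrhenius(T_celsius, ln_A, B, b):
        """ln(viscosity) with absolute T replaced by T + b (both in °C);
        ln_A is a reference level, B and b the two temperature parameters."""
        return ln_A + B / (T_celsius + b)

    T = np.array([10., 20., 30., 40., 50., 60.])       # °C (synthetic)
    eta = np.array([120., 60., 33., 20., 13., 9.])     # mPa·s (synthetic)

    popt, _ = curve_fit(modified_arrhenius, T, np.log(eta), p0=(0.0, 1000.0, 100.0))
    ln_A, B, b = popt
    resid = np.log(eta) - modified_arrhenius(T, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((np.log(eta) - np.log(eta).mean())**2)
    print(f"B = {B:.0f}, b = {b:.1f} °C, r² = {r2:.4f}")
    ```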

  13. Bayesian effect estimation accounting for adjustment uncertainty.

    PubMed

    Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca

    2012-09-01

Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC with ω > 1 estimates the exposure effect with smaller bias and improved coverage compared to traditional BMA. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
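
    A tiny illustration of the ω link defined above: converting the prior odds ω into the prior probability that a predictor enters the outcome model, given that it is in the exposure model.

    ```python
    def prior_inclusion_prob(omega: float) -> float:
        """Prior P(predictor in outcome model | predictor in exposure model)
        from the prior odds omega = p / (1 - p)."""
        return omega / (1.0 + omega)

    for omega in (1.0, 2.0, 10.0):
        print(f"omega = {omega:4.1f}: prior inclusion prob = {prior_inclusion_prob(omega):.3f}")
    # omega = 1 recovers traditional BMA (no dependence); as omega -> infinity,
    # every exposure-model predictor is forced into the outcome model.
    ```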

  14. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    NASA Astrophysics Data System (ADS)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

An adaptation method is proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The method has two steps. First, overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio are adapted by calibrating the compressor maps; second, local performance parameters such as the temperatures at component interfaces and the shaft speed are adjusted by additional adaptation factors. An optimization technique is used to find the correlation equation of the adaptation factors for the compressor performance maps; the multi-island genetic algorithm (MIGA) is employed in the present optimization. The correlations of the local adaptation factors are generated from the differences between the first-step adapted engine model and the performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated against performance test data in the sea-level static condition. In flight conditions at 20,000 ft and Mach 0.9, the adapted engine model improved the prediction of engine thrust (an overall performance parameter), reducing the error from 14.5% to 3.3%. There was a further improvement in the low-pressure turbine exit temperature (a local performance parameter), with the error reduced from 3.2% to 0.4%.
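
    A minimal sketch of the second-step idea under a strong simplification: local adaptation factors taken as constant test/model ratios and applied as multipliers (the paper instead derives correlation equations for the factors via MIGA). All values and names are hypothetical.

    ```python
    # Hypothetical test data vs. first-step (map-adapted) model predictions
    test_data = {"lpt_exit_temp_K": 850.0, "shaft_speed_rpm": 14200.0}
    first_step_model = {"lpt_exit_temp_K": 877.0, "shaft_speed_rpm": 14050.0}

    # Local adaptation factor per parameter: measured / predicted
    adaptation_factors = {k: test_data[k] / first_step_model[k] for k in test_data}

    def adapted(prediction: dict) -> dict:
        """Apply the local adaptation factors to raw model predictions."""
        return {k: v * adaptation_factors[k] for k, v in prediction.items()}

    print(adaptation_factors)
    print(adapted({"lpt_exit_temp_K": 880.0, "shaft_speed_rpm": 14100.0}))
    ```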

  15. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

We present preliminary results of a new population synthesis of millisecond pulsars (MSPs) from the Galactic disk, using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that depend on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys, as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries, such as the slot gap, outer gap, two-pole caustic and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics, and estimate the contribution of MSPs to the diffuse gamma-ray background, with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
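
    A minimal random-walk Metropolis sketch of exploring a four-parameter model space, as described above. The Gaussian log-likelihood is a stand-in only; the actual likelihood would compare simulated and detected pulsar populations, and the step size is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(theta: np.ndarray) -> float:
        """Stand-in likelihood (standard Gaussian); a placeholder for the
        population-synthesis comparison with survey detections."""
        return -0.5 * np.sum(theta**2)

    theta = np.zeros(4)                 # four free model parameters
    chain = [theta]
    for _ in range(20_000):
        proposal = theta + rng.normal(0.0, 0.3, size=4)   # random-walk step
        if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(theta):
            theta = proposal                               # Metropolis accept
        chain.append(theta)
    chain = np.array(chain)
    print("posterior means:", chain.mean(axis=0))
    ```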

  16. A development of logistics management models for the Space Transportation System

    NASA Technical Reports Server (NTRS)

    Carrillo, M. J.; Jacobsen, S. E.; Abell, J. B.; Lippiatt, T. F.

    1983-01-01

    A new analytic queueing approach was described which relates stockage levels, repair level decisions, and the project network schedule of prelaunch operations directly to the probability distribution of the space transportation system launch delay. Finite source population and limited repair capability were additional factors included in this logistics management model developed specifically for STS maintenance requirements. Data presently available to support logistics decisions were based on a comparability study of heavy aircraft components. A two-phase program is recommended by which NASA would implement an integrated data collection system, assemble logistics data from previous STS flights, revise extant logistics planning and resource requirement parameters using Bayes-Lin techniques, and adjust for uncertainty surrounding logistics systems performance parameters. The implementation of these recommendations can be expected to deliver more cost-effective logistics support.

  17. Method and system for monitoring and displaying engine performance parameters

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S. (Inventor); Person, Lee H., Jr. (Inventor)

    1988-01-01

The invention is believed to be a major improvement with broad application in governmental and commercial aviation. It provides a dynamic method and system for monitoring and simultaneously displaying, in easily scanned form, the available, predicted, and actual thrust of a jet aircraft engine under actual operating conditions. The available and predicted thrusts are based on the performance of a functional model of the aircraft engine under the same operating conditions. Other critical performance parameters of the aircraft engine and functional model are generated and compared, with the differences in value displayed simultaneously alongside the thrust values. The displayed information thus permits the pilot to make power adjustments directly while remaining aware of total performance at a glance at a single display panel.

  18. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line-of-sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). Many dynamic factors are taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least-squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required as input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  19. Towards flash-flood prediction in the dry Dead Sea region utilizing radar rainfall information

    NASA Astrophysics Data System (ADS)

    Morin, Efrat; Jacoby, Yael; Navon, Shilo; Bet-Halachmi, Erez

    2009-07-01

Flash-flood warning models can save lives and protect various kinds of infrastructure. In dry climate regions, rainfall is highly variable and can be of high intensity. Since rain gauge networks in such areas are sparse, rainfall information derived from weather radar systems can provide useful input for flash-flood models. This paper presents a flash-flood warning model which utilizes radar rainfall data and applies it to two catchments that drain into the dry Dead Sea region. Radar-based quantitative precipitation estimates (QPEs) were derived using a rain gauge adjustment approach, either on a daily basis (allowing the adjustment factor to change over time, assuming available real-time gauge data) or using a constant factor value (derived from rain gauge data) over the entire period of the analysis. The QPEs served as input for a continuous hydrological model that represents the main hydrological processes in the region, namely infiltration, flow routing and transmission losses. The infiltration function is applied in a distributed mode while the routing and transmission loss functions are applied in a lumped mode. Model parameters were found by calibration based on 5 years of data for one of the catchments. Validation was performed for a subsequent 5-year period for the same catchment and then for an entire 10-year record for the second catchment. The probability of detection and false alarm rates for the validation cases were reasonable. Probabilistic flash-flood prediction is presented applying Monte Carlo simulations with an uncertainty range for the QPEs and model parameters. With low probability thresholds, one can maintain more than 70% detection with no more than 30% false alarms. The study demonstrates that a flash-flood warning model is feasible for catchments in the area studied.
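
    The verification scores quoted above follow the standard contingency-table definitions; a minimal sketch with hypothetical counts (note the abstract's "false alarm rate" is computed here as the false alarm ratio).

    ```python
    def pod(hits: int, misses: int) -> float:
        """Probability of detection: fraction of observed floods that were warned for."""
        return hits / (hits + misses)

    def far(hits: int, false_alarms: int) -> float:
        """False alarm ratio: fraction of warnings with no observed flood."""
        return false_alarms / (hits + false_alarms)

    hits, misses, false_alarms = 15, 5, 6    # hypothetical validation counts
    print(f"POD = {pod(hits, misses):.2f}, FAR = {far(hits, false_alarms):.2f}")
    # e.g. POD 0.75 (>70% detection) with FAR 0.29 (<30% false alarms)
    ```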

  1. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. Before the description of the model, a detailed literature review is provided. The model is verified using a case study solving the ready-mix concrete network flow and routing problem in a metropolitan area in Croatia, with real-time input parameters taken into account. The simulation model is built in Enterprise Dynamics simulation software and Microsoft Excel, linked with Google Maps. The model is dynamic, easily managed and adjustable, and provides good estimates for minimizing costs and realization time in discrete routing and material network flow problems.

  2. Nonsequential modeling of laser diode stacks using Zemax: simulation, optimization, and experimental validation.

    PubMed

    Coluccelli, Nicola

    2010-08-01

The modeling of a real laser diode stack using Zemax ray-tracing software operating in nonsequential mode is reported. The implementation of the model is presented together with the geometric and optical parameters to be adjusted to calibrate the model and to match the simulated intensity irradiance profiles with the experimental profiles. The calibration of the model is based on a near-field and a far-field measurement. The validation of the model has been accomplished by comparing the simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.

  3. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    NASA Astrophysics Data System (ADS)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  4. Removal of sodium lauryl sulphate by coagulation/flocculation with Moringa oleifera seed extract.

    PubMed

    Beltrán-Heredia, J; Sánchez-Martín, J

    2009-05-30

Among other natural flocculant/coagulant agents, the ability of Moringa oleifera seed extract to remove an anionic surfactant has been evaluated and found to be considerable. Sodium lauryl sulphate was removed from aqueous solutions at up to 80% efficiency through a coagulation/flocculation process. pH and temperature were found not to be very important factors in removal efficiency. The Freundlich (F), Frumkin-Fowler-Guggenheim (FFG) and Gu-Zhu (GZ) models were used to fit the experimental data under a solid-liquid adsorption hypothesis; the last proved the most accurate. Several fit parameters were determined: the Freundlich order (1.66), the Flory-Huggins interaction parameter from the FFG model (4.87), and the limiting Moringa surfactant adsorption capacity from the GZ model (2.13 × 10⁻³ mol/g).
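
    A minimal sketch of fitting the Freundlich isotherm named above, q = K·C^(1/n), to equilibrium data; the concentration data are synthetic (chosen to be roughly consistent with an order near 1.66), not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def freundlich(C, K, n):
        """Freundlich isotherm: adsorbed amount q = K * C**(1/n)."""
        return K * C ** (1.0 / n)

    # Synthetic equilibrium data: C in mol/L, q in mol/g (illustration only)
    C = np.array([1e-4, 5e-4, 1e-3, 5e-3, 1e-2])
    q = np.array([2.1e-4, 5.5e-4, 8.4e-4, 2.2e-3, 3.4e-3])

    (K, n), _ = curve_fit(freundlich, C, q, p0=(0.05, 1.5))
    print(f"K = {K:.3g}, Freundlich order n = {n:.2f}")
    ```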

  5. Cracking on anisotropic neutron stars

    NASA Astrophysics Data System (ADS)

    Setiawan, A. M.; Sulaksono, A.

    2017-07-01

We study the effect of cracking of a locally anisotropic neutron star (NS) due to small density fluctuations. The neutron star core is assumed to consist of leptons, nucleons and hyperons. A relativistic mean field model is used to describe the core equation of state (EOS). For the crust, we use the EOS introduced by Miyatsu et al. [1]. Furthermore, two models are used to describe pressure anisotropy in neutron star matter: one proposed by Doneva-Yazadjiev (DY) [2] and the other by Herrera-Barreto (HB) [3]. The anisotropic parameters of the DY and HB models are adjusted so that the predicted maximum mass is compatible with the masses of PSR J1614-2230 [4] and PSR J0348+0432 [5]. We find that cracking can potentially occur in the region close to the neutron star surface. The instability due to cracking is quite sensitive to the NS mass and the anisotropic parameter used.

  6. An interval programming model for continuous improvement in micro-manufacturing

    NASA Astrophysics Data System (ADS)

    Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun

    2018-03-01

    Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.

  7. A kinetic model for the characteristic surface morphologies of thin films by directional vapor deposition

    NASA Astrophysics Data System (ADS)

    Li, Kun-Dar; Huang, Po-Yu

    2017-12-01

To simulate a process of directional vapor deposition, a numerical approach was applied in this study to model the growth and evolution of surface morphologies of the crystallographic structures of thin films. The critical factors affecting surface morphology in a deposition process, such as crystallographic symmetry, anisotropic interfacial energy, the shadowing effect, and deposition rate, were all included in the theoretical model. By altering the crystallographic symmetry parameters of the structures, faceted nano-columns with rectangular and hexagonal shapes were produced in the simulations. Furthermore, to reveal the influence of anisotropic strength and deposition rate on crystallographic structure formation, various parameter adjustments in the numerical calculations were also investigated. Both the morphologies and the surface roughnesses for different processing conditions were clearly demonstrated through quantitative analysis of the simulations.

  8. Calibration of AIS Data Using Ground-based Spectral Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1985-01-01

Present methods of correcting airborne imaging spectrometer (AIS) data for instrumental and atmospheric effects include the flat- or curved-field correction and a deviation-from-the-average adjustment performed on a line-by-line basis throughout the image. Both methods eliminate the atmospheric absorptions, but remove the possibility of studying the atmosphere for its own sake, or of using the atmospheric information present as a possible basis for theoretical modeling. The method discussed here relies on ground-based measurements of surface spectral reflectance, compared with scanner data to fix, in a least-squares sense, the parameters of a simplified atmospheric model on a wavelength-by-wavelength basis. The model parameters (for optically thin conditions) are interpretable in terms of optical depth and scattering phase function and thus, in principle, provide an approximate description of the atmosphere as a homogeneous body intervening between the sensor and the ground.

  9. Research on the Diesel Engine with Sliding Mode Variable Structure Theory

    NASA Astrophysics Data System (ADS)

    Ma, Zhexuan; Mao, Xiaobing; Cai, Le

    2018-05-01

This study constructed a nonlinear mathematical model of the diesel engine high-pressure common rail (HPCR) system through polynomial fitting, treating it as an affine nonlinear system. Based on sliding-mode variable structure control (SMVSC) theory, a sliding-mode controller for affine nonlinear systems was designed to control the common rail pressure and the diesel engine's rotational speed. Finally, the designed nonlinear HPCR system was simulated on the MATLAB platform. The simulation results demonstrated that the sliding-mode variable structure control algorithm shows favourable control performance, overcoming the shortcomings of traditional PID control in overshoot, parameter adjustment, system precision, settling time and rise time.
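
    A generic first-order sliding-mode control sketch, not the paper's specific controller: sliding surface s = ė + λ·e with a saturated switching term (boundary layer φ) to limit chattering. Gains and the rail-pressure values are hypothetical.

    ```python
    import numpy as np

    def smc_control(error: float, d_error: float,
                    lam: float = 5.0, k: float = 80.0, phi: float = 0.05) -> float:
        """Generic sliding-mode law: s = d_error + lam*error; the control
        pushes the state toward s = 0, saturated inside boundary layer phi."""
        s = d_error + lam * error
        return -k * float(np.clip(s / phi, -1.0, 1.0))

    # One step of a rail-pressure loop (hypothetical values, bar)
    p_ref, p, dp = 1600.0, 1550.0, 12.0
    print(f"control command: {smc_control(p - p_ref, dp):.1f}")
    ```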

  10. Acid volatile sulfides oxidation and metals (Mn, Zn) release upon sediment resuspension: laboratory experiment and model development.

    PubMed

    Hong, Yong Seok; Kinney, Kerry A; Reible, Danny D

    2011-03-01

Sediment from the Anacostia River (Washington, DC, USA) was suspended in aerobic artificial river water for 14 d to investigate the dynamics of dissolved metals release and related parameters, including pH, acid volatile sulfides (AVS), and dissolved/solid phase Fe²⁺. To better understand and predict the underlying processes, a mathematical model was developed considering oxidation of reduced species, dissolution of minerals, pH changes, and pH-dependent metal sorption to sediment. The oxidation rate constants of elemental sulfur and zinc sulfide, and a dissolution rate constant of carbonate minerals, were adjusted to fit the observations. The proposed model and parameters were then applied, without further calibration, to literature-reported experimental observations of resuspension in an acid sulfate soil collected in a coastal flood plain. The model provided a good description of the dynamics of AVS, Fe²⁺, S⁰(s), pH, and dissolved carbonate concentrations, and of the release of Ca(aq), Mg(aq), and Zn(aq) in both sediments. Accurate prediction of Mn(aq) release required adjustment of the sorption partitioning coefficient, presumably due to Mn scavenging by phases not accounted for in the model. The oxidation of AVS (and the resulting release of sulfide-bound metals) was consistent with a two-step process: a relatively rapid AVS oxidation to elemental sulfur (S⁰(s)) and a slow oxidation of S⁰(s) to SO₄²⁻(aq), with an associated decrease in pH from neutral to acidic conditions. This acidification was the dominant factor in the release of metals into the aqueous phase. Copyright © 2010 SETAC.

  11. A new symmetry model for hohlraum-driven capsule implosion experiments on the NIF

    NASA Astrophysics Data System (ADS)

    Jones, O.; Rygg, R.; Tomasini, R.; Eder, D.; Kritcher, A.; Milovich, J.; Peterson, L.; Thomas, C.; Barrios, M.; Benedetti, R.; Doeppner, T.; Ma, T.; Nagel, S.; Pak, A.; Field, J.; Izumi, N.; Glenn, S.; Town, R.; Bradley, D.

    2016-03-01

We have developed a new model for predicting the time-dependent radiation drive asymmetry in laser-heated hohlraums. The model consists of integrated Hydra capsule-hohlraum calculations coupled to a separate model for calculating the crossbeam energy transfer between the inner and outer cones of the National Ignition Facility (NIF) indirect drive configuration. The time-dependent crossbeam transfer model parameters were adjusted in order to best match the P2 component of the shape of the inflight shell inferred from backlit radiographs of the capsule taken when the shell was at a radius of 150-250 μm. The adjusted model correctly predicts the observed inflight P2 and P4 components of the shape of the inflight shell, and also the P2 component of the shape of the hotspot inferred from x-ray self-emission images at the time of peak emission. It also correctly captures the scaling of the inflight P4 as the hohlraum length is varied. We then applied the newly benchmarked model to quantify the improved symmetry of the N130331 layered deuterium-tritium (DT) experiment in a re-optimized longer hohlraum.

  12. ANTHROPOMETRIC CHARACTERISTICS OF FLIGHT PERSONNEL FOR DESIGNING DAMPERS FOR SHOCKPROOF SEATS OF HELICOPTER CREWS.

    PubMed

    Moiseev, Yu B; Ignatovich, S N; Strakhov, A Yu

The article discusses the anthropometric design of shockproof pilot seats for state-of-the-art helicopters. The object of the investigation was the anthropometric parameters of helicopter aviation personnel of the Russian interior troops. It was found that the body parameters essential for designing helicopter seat dampers are the mass of the body part that presses against the seat in the seated position and the eye level above the seat surface. An uncontrolled seat damper ensuring shockproof safety for 95% of helicopter crews must be designed for a body mass contacting the seat of 99.7 kg and an eye level above the seat of 78.6 cm. To absorb shock effectively, future dampers should be adjustable to the pilot's body parameters. The optimal approach to anthropometric design of a helicopter seat is the development of typical pilot body models that take into account the pilot's flight outfit and the seat geometry. The principal criteria of the typical models are body mass and eye level. The authors propose a system of typical body models facilitating the specification of anthropometric data for helicopter seat developers.

  13. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    PubMed

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon based solid-state optical imagers such as CIS or Charged Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.

  14. Astrophysical 3He(α ,γ )7Be and 3H(α ,γ )7Li direct capture reactions in a potential-model approach

    NASA Astrophysics Data System (ADS)

    Tursunov, E. M.; Turakulov, S. A.; Kadyrov, A. S.

    2018-03-01

The astrophysical ³He(α,γ)⁷Be and ³H(α,γ)⁷Li direct capture processes are studied in the framework of the two-body model with potentials of a simple Gaussian form, which describe correctly the phase shifts in the s, p, d, and f waves, as well as the binding energy and the asymptotic normalization constant of the ground p₃/₂ and the first excited p₁/₂ bound states. It is shown that the E1 transition from the initial s wave to the final p waves is strongly dominant in both capture reactions. On this basis the s-wave potential parameters are adjusted to reproduce the new data of the LUNA Collaboration around 100 keV and the newest data at the Gamow peak estimated with the help of the observed neutrino fluxes from the Sun, S₃₄(23⁺⁶₋₅ keV) = 0.548 ± 0.054 keV b, for the astrophysical S factor of the capture process ³He(α,γ)⁷Be. The resulting model describes well the astrophysical S factor in the low-energy big-bang nucleosynthesis region of 180-400 keV; however, it has a tendency to underestimate the data above 0.5 MeV. The energy dependence of the S factor is mostly consistent with the data and the results of the no-core shell model with continuum, but substantially different from the fermionic molecular dynamics model predictions. Two-body potentials adjusted for the properties of the ⁷Be nucleus, ³He+α elastic scattering data, and the astrophysical S factor of the ³He(α,γ)⁷Be direct capture reaction are able to reproduce the properties of the ⁷Li nucleus, the binding energies of the ground 3/2⁻ and first excited 1/2⁻ states, and the phase shifts of ³H+α elastic scattering in partial waves. Most importantly, these potential models can successfully describe both the absolute value and the energy dependence of the existing experimental data for the mirror astrophysical ³H(α,γ)⁷Li capture reaction without any additional adjustment of the parameters.

  15. Controlling for varying effort in count surveys --an analysis of Christmas Bird Count Data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1999-01-01

The Christmas Bird Count (CBC) is a valuable source of information about midwinter populations of birds in the continental U.S. and Canada. Analysis of CBC data is complicated by substantial variation among sites and years in the effort expended in counting; this feature of the CBC is common to many other wildlife surveys. Specification of a method for adjusting counts for effort is a matter of some controversy. Here, we present models for longitudinal count surveys with varying effort; these describe the effect of effort as proportional to exp(B × effortᵖ), where B and p are parameters. For any fixed p, our models are loglinear in the transformed explanatory variable (effort)ᵖ and other covariables. Hence we fit a collection of loglinear models corresponding to a range of values of p, and select the best effort adjustment from among these on the basis of fit statistics. We apply this procedure to data for six bird species in five regions, for the period 1959-1988.
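
    A minimal sketch of the selection procedure described above: fit a Poisson loglinear model with the transformed effort term (effort)ᵖ for a grid of p values and pick the p with the best fit statistic (deviance here). The count data are synthetic and the covariate set is reduced to a single year term.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    effort = rng.uniform(1.0, 40.0, 200)                 # party-hours (synthetic)
    year = rng.integers(0, 10, 200)
    mu = np.exp(1.0 + 0.03 * year + 0.8 * effort**0.4)   # true p = 0.4
    counts = rng.poisson(mu)

    best = None
    for p in np.arange(0.1, 1.01, 0.1):                  # grid of candidate p values
        X = sm.add_constant(np.column_stack([year, effort**p]))
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        if best is None or fit.deviance < best[1]:
            best = (p, fit.deviance)
    print(f"selected p = {best[0]:.1f} (deviance {best[1]:.1f})")
    ```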

  16. Adaptation of SUBSTOR for controlled-environment potato production with elevated carbon dioxide

    NASA Technical Reports Server (NTRS)

    Fleisher, D. H.; Cavazzoni, J.; Giacomelli, G. A.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2003-01-01

The SUBSTOR crop growth model was adapted for controlled-environment hydroponic production of potato (Solanum tuberosum L. cv. Norland) under elevated atmospheric carbon dioxide concentration. Adaptations included adjustment of input files to account for cultural differences between field and controlled environments, calibration of genetic coefficients, and adjustment of crop parameters including radiation use efficiency. Source code modifications were also performed to account for the absorption of light reflected from the surface below the crop canopy and an increased leaf senescence rate, to add a carbon (mass) balance to the model, and to modify the response of crop growth rate to elevated atmospheric carbon dioxide concentration. Adaptations were primarily based on growth and phenological data obtained from growth chamber experiments at Rutgers University (New Brunswick, N.J.) and from the modeling literature. Modified-SUBSTOR predictions were compared with data from Kennedy Space Center's Biomass Production Chamber for verification. Results show that, with further development, modified-SUBSTOR will be a useful tool for analysis and optimization of potato growth in controlled environments.

  17. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

We present a physics-based approach to generate 3D biped character animation that can react to dynamic environments in real time. Our approach utilizes an inverted pendulum model to adjust, online, the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers, whose parameters usually cannot be easily tuned, our motion tracking adopts a velocity-driven method that computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
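
    A minimal sketch of the velocity-driven tracking idea: compute a joint torque from the desired angular velocity rather than from a PD law on position. The gain, time step, and angle values are hypothetical.

    ```python
    def velocity_driven_torque(omega_desired: float, omega: float,
                               gain: float = 50.0) -> float:
        """Joint torque driving the current angular velocity toward the
        desired one (single-gain alternative to PD position tracking)."""
        return gain * (omega_desired - omega)

    # Desired velocity as a finite difference of the adjusted trajectory (rad/s)
    omega_des = (0.35 - 0.30) / 0.016    # two successive desired angles at ~60 Hz
    print(f"torque: {velocity_driven_torque(omega_des, 2.9):.1f} N·m")
    ```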

  18. Load controller and method to enhance effective capacity of a photovoltaic power supply using a dynamically determined expected peak loading

    DOEpatents

    Perez, Richard

    2005-05-03

A load controller and method are provided for maximizing the effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load control parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load control parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply.
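
    A minimal sketch of the claimed control logic as read from the abstract: shed controllable load in proportion to how far the net load exceeds the adjustment threshold. The function name, units, and the predefined range are hypothetical, not from the patent.

    ```python
    def load_control(current_load_kw: float, pv_output_kw: float,
                     adjust_threshold_kw: float,
                     max_shed_kw: float = 20.0) -> float:
        """Load control parameter = current load - renewable output - threshold;
        shed that amount of controllable load while it lies in a predefined range."""
        control_param = current_load_kw - pv_output_kw - adjust_threshold_kw
        if 0.0 < control_param <= max_shed_kw:
            return control_param      # kW of controllable load to shed
        return 0.0

    print(load_control(current_load_kw=95.0, pv_output_kw=12.0,
                       adjust_threshold_kw=75.0))   # -> 8.0 kW shed
    ```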

  19. Quantitation of fixative-induced morphologic and antigenic variation in mouse and human breast cancers

    PubMed Central

    Cardiff, Robert D; Hubbard, Neil E; Engelberg, Jesse A; Munn, Robert J; Miller, Claramae H; Walls, Judith E; Chen, Jane Q; Velásquez-García, Héctor A; Galvez, Jose J; Bell, Katie J; Beckett, Laurel A; Li, Yue-Ju; Borowsky, Alexander D

    2013-01-01

Quantitative Image Analysis (QIA) of digitized whole slide images for morphometric parameters and immunohistochemistry of breast cancer antigens was used to evaluate the technical reproducibility, biological variability, and intratumoral heterogeneity in three transplantable mouse mammary tumor models of human breast cancer. The relative preservation of structure and immunogenicity of the three mouse models and three human breast cancers was also compared when fixed with representatives of four distinct classes of fixatives. The three mouse mammary tumor cell models were an ER+/PR+ model (SSM2), a Her2+ model (NDL), and a triple negative model (MET1). The four breast cancer antigens were ER, PR, Her2, and Ki67. The fixatives included examples of (1) strong cross-linkers, (2) weak cross-linkers, (3) coagulants, and (4) combination fixatives. Each parameter was quantitatively analyzed using modified Aperio Technologies ImageScope algorithms. Careful pre-analytical adjustments to the algorithms were required to provide accurate results. The QIA permitted rigorous statistical analysis of results and grading by rank order. The analyses suggested excellent technical reproducibility and confirmed biological heterogeneity within each tumor. The strong cross-linker fixatives, such as formalin, consistently ranked higher than weak cross-linker, coagulant and combination fixatives in both the morphometric and immunohistochemical parameters. PMID:23399853

  20. Engineering model for ultrafast laser microprocessing

    NASA Astrophysics Data System (ADS)

    Audouard, E.; Mottay, E.

    2016-03-01

Ultrafast laser micro-machining relies on complex laser-matter interaction processes leading to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied in various real-life applications: percussion drilling, line engraving, and non-normal-incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and simple-pass engraving. Simple assumptions allow the effect of non-normally incident beams to be predicted, yielding key parameters for trepanning drilling. The model is compared to experimental data on stainless steel over a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
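
    The "two global parameters" plausibly correspond to a threshold fluence and an effective penetration depth, as in the common logarithmic ablation law sketched below; this is a standard model, not necessarily the authors' exact formulation, and the parameter values are illustrative.

    ```python
    import numpy as np

    def ablation_depth_per_pulse(F, F_th=0.1, delta=0.02):
        """Logarithmic ablation law, a common two-parameter model:
        depth (µm) = delta * ln(F / F_th) above threshold fluence F_th (J/cm²).
        F_th and delta here are illustrative placeholders."""
        F = np.asarray(F, dtype=float)
        return np.where(F > F_th, delta * np.log(F / F_th), 0.0)

    fluences = np.array([0.05, 0.2, 1.0, 5.0])     # J/cm²
    print(ablation_depth_per_pulse(fluences))      # µm removed per pulse
    ```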

  1. Operators manual for the magnetograph program (section 2)

    NASA Technical Reports Server (NTRS)

    November, L.; Title, A. M.

    1974-01-01

This manual for the magnetograph program describes: (1) black-box use of the programs; (2) the magtape data formats used; (3) the adjustable control parameters in the program; and (4) the algorithms. With no adjustments to the control parameters, the program may be used purely as a black box; for optimal use, however, the control parameters may be varied. The magtape data formats are of use in adapting other programs to look at raw data or final magnetograph data.

  2. Isotherm-Based Thermodynamic Model for Solute Activities of Asymmetric Electrolyte Aqueous Solutions.

    PubMed

    Nandy, Lucy; Dutcher, Cari S

    2017-09-21

    Adsorption isotherm-based statistical thermodynamic models can be used to determine solute concentration and solute and solvent activities in aqueous solutions. Recently, the number of adjustable parameters in the isotherm model of Dutcher et al. (J. Phys. Chem. A/C, 2011-2013) was reduced for neutral solutes as well as symmetric 1:1 electrolytes by using a Coulombic model to describe the solute-solvent energy interactions (Ohm et al., J. Phys. Chem. A, 2015; Nandy et al., J. Phys. Chem. A, 2016). Here, the Coulombic treatment for symmetric electrolytes is extended to establish improved isotherm model equations for asymmetric 1-2 and 1-3 electrolyte systems. The Coulombic model developed here predicts activities and other thermodynamic properties in multicomponent systems containing ions of arbitrary charge. The model is found to accurately calculate the osmotic coefficient over the entire solute concentration range with two model parameters, related to intermolecular solute-solute and solute-solvent spacing. The inorganic salts and acids treated here are generally considered to be fully dissociated; however, certain weak acids, such as the bisulfate ion, do not dissociate completely. In this work, the partial dissociation of the bisulfate ion from sulfuric acid is treated as a mixture, with an additional model parameter that accounts for the ratio of dissociated to nondissociated ions.
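
    The isotherm equations themselves are not reproduced in the abstract, so the sketch below only illustrates the workflow it describes: fitting two adjustable parameters to osmotic-coefficient data over the full concentration range. The functional form phi_model and all data values are hypothetical placeholders, not the paper's model:

        import numpy as np
        from scipy.optimize import curve_fit

        # Placeholder two-parameter form (NOT the paper's isotherm equations):
        # osmotic coefficient as a function of molality.
        def phi_model(m, p_solute_solute, p_solute_solvent):
            return 1.0 + p_solute_solute * np.sqrt(m) + p_solute_solvent * m

        molality = np.array([0.1, 0.5, 1.0, 2.0, 4.0])      # mol/kg (toy data)
        phi_obs = np.array([0.93, 0.92, 0.94, 1.00, 1.15])  # toy values

        params, _ = curve_fit(phi_model, molality, phi_obs)
        print(params)   # the two fitted spacing-related parameters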

  3. Determination of ionospheric electron density profiles from satellite UV (Ultraviolet) emission measurements, fiscal year 1984

    NASA Astrophysics Data System (ADS)

    Daniell, R. E.; Strickland, D. J.; Decker, D. T.; Jasperse, J. R.; Carlson, H. C., Jr.

    1985-04-01

    The possible use of satellite ultraviolet measurements to deduce the ionospheric electron density profile (EDP) on a global basis is discussed. During 1984 comparisons were continued between the hybrid daytime ionospheric model and the experimental observations. These comparison studies indicate that: (1) the essential features of the EDP and certain UV emissions can be modelled; (2) the models are sufficiently sensitive to input parameters to yield poor agreement with observations when typical input values are used; (3) reasonable adjustments of the parameters can produce excellent agreement between theory and data for either EDP or airglow but not both; and (4) the qualitative understanding of the relationship between two input parameters (solar flux and neutral densities) and the model EDP and airglow features has been verified. The development of a hybrid dynamic model for the nighttime midlatitude ionosphere has been initiated. This model is similar to the daytime hybrid model, but uses the sunset EDP as an initial value and calculates the EDP as a function of time through the night. In addition, a semiempirical model has been developed, based on the assumption that the nighttime EDP is always well described by a modified Chapman function. This model has great simplicity and allows the EDP to be inferred in a straightforward manner from optical observations. Comparisons with data are difficult, however, because of the low intensity of the nightglow.
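
    The exact form of the "modified Chapman function" is not given in the abstract; a minimal sketch using the classic alpha-Chapman layer (peak density nm at peak height hm, scale height H) shows the kind of closed-form EDP such a semiempirical model assumes:

        import numpy as np

        def chapman_layer(h, nm, hm, H):
            # Alpha-Chapman electron density profile: Ne(h) in the same units as nm.
            z = (h - hm) / H
            return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

        # Illustrative nighttime F-region values: peak 2.5e5 cm^-3 at 320 km, H = 45 km
        heights = np.linspace(200.0, 500.0, 7)
        print(chapman_layer(heights, 2.5e5, 320.0, 45.0))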

  4. Application of a parameter-estimation technique to modeling the regional aquifer underlying the eastern Snake River plain, Idaho

    USGS Publications Warehouse

    Garabedian, Stephen P.

    1986-01-01

    A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
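
    As a schematic of the regression technique described (not the USGS code itself), the sketch below calibrates log-transmissivities by nonlinear least squares against observed heads; the one-line forward model is a toy stand-in for the two-dimensional steady-state flow simulation:

        import numpy as np
        from scipy.optimize import least_squares

        # Toy forward model: heads as a function of log-transmissivity (placeholder
        # physics; the real forward model is a 2-D steady-state ground-water flow code).
        def simulate_heads(log_T, recharge):
            return recharge / np.exp(log_T)

        obs_heads = np.array([120.0, 80.0, 45.0])   # measured heads (toy values)
        recharge = np.array([3.0, 2.0, 1.5])        # known recharge (toy values)

        def residuals(log_T):
            return simulate_heads(log_T, recharge) - obs_heads

        fit = least_squares(residuals, x0=np.zeros(3))
        print(np.exp(fit.x))                        # calibrated transmissivities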

  5. Simulation modeling of an automatic control system for steam pressure in the main steam collector acting on the main servomotor of a steam turbine

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.

    2017-11-01

    The results of developing and tuning an automatic control system (ACS) for the steam pressure in the main steam collector (upstream, or "before-itself," pressure control) with fast feedback from the steam pressure in the turbine regulating stage are presented. The ACS is tuned on a simulation model of the controlled object developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine's main servomotor. A method for tuning the nonlinear ACS with a numerical multiparametric optimization algorithm, together with a procedure for the separate dynamic adjustment of the control devices in a two-loop ACS, is proposed and implemented. It is shown that the nonlinear ACS tuned by the proposed method, with constant controller parameters, ensures reliable, high-quality operation without oscillations in transient processes over the operating range of turbine loads.

  6. Generation of Adaptive Gait Patterns for Quadruped Robot with CPG Network including Motor Dynamic Model

    NASA Astrophysics Data System (ADS)

    Son, Yurak; Kamano, Takuya; Yasuno, Takashi; Suzuki, Takayuki; Harada, Hironobu

    This paper describes the generation of adaptive gait patterns using new Central Pattern Generators (CPGs) that include motor dynamic models for a quadruped robot under various environments. The CPGs act as flexible oscillators for the joints and produce the desired joint angles. The CPGs are mutually interconnected, and the sets of coupling parameters are adjusted by a genetic algorithm so that the quadruped robot can realize stable and adequate gait patterns. As a result, suitable CPG networks are obtained not only for a straight walking gait pattern but also for rotation gait patterns. Experimental results demonstrate that the proposed CPG networks are effective in automatically adjusting the adaptive gait patterns for the tested quadruped robot under various environments. Furthermore, target tracking control based on image processing is achieved by combining the generated gait patterns.
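
    The paper's CPG equations (which include motor dynamics) are not given in the abstract. A minimal sketch of a coupled phase-oscillator network, with coupling gains K and phase biases phi standing in for the parameters a genetic algorithm would tune, shows how mutual coupling locks the joints into a gait; the trot-like offsets are illustrative:

        import numpy as np

        def cpg_step(phases, dt, omega, K, phi):
            # One Euler step of a coupled phase-oscillator network: each joint's
            # phase drifts at omega and is pulled toward the offsets encoded in phi.
            n = len(phases)
            dphase = omega.copy()
            for i in range(n):
                for j in range(n):
                    if i != j:
                        dphase[i] += K[i, j] * np.sin(phases[j] - phases[i] - phi[i, j])
            return phases + dt * dphase

        # Trot-like target offsets for 4 legs (illustrative; a GA would tune K and phi)
        target = np.array([0.0, np.pi, np.pi, 0.0])
        phi = target[None, :] - target[:, None]   # desired pairwise phase differences
        omega = np.full(4, 2 * np.pi)             # 1 Hz intrinsic frequency
        K = np.full((4, 4), 0.5)

        phases = np.random.default_rng(1).uniform(0, 2 * np.pi, 4)
        for _ in range(2000):
            phases = cpg_step(phases, 0.005, omega, K, phi)
        angles = 0.3 * np.sin(phases)             # joint angle commands (rad)
        print(angles)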

  7. The nuclear Thomas-Fermi model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, W.D.; Swiatecki, W.J.

    1994-08-01

    The statistical Thomas-Fermi model is applied to a comprehensive survey of macroscopic nuclear properties. The model uses a Seyler-Blanchard effective nucleon-nucleon interaction, generalized by the addition of one momentum-dependent and one density-dependent term. The adjustable parameters of the interaction were fitted to shell-corrected masses of 1654 nuclei, to the diffuseness of the nuclear surface and to the measured depths of the optical model potential. With these parameters nuclear sizes are well reproduced, and only relatively minor deviations between measured and calculated fission barriers of 36 nuclei are found. The model determines the principal bulk and surface properties of nuclear matter and provides estimates for the more subtle, Droplet Model, properties. The predicted energy vs density relation for neutron matter is in striking correspondence with the 1981 theoretical estimate of Friedman and Pandharipande. Other extreme situations to which the model is applied are a study of Sn isotopes from ⁸²Sn to ¹⁷⁰Sn, and the rupture into a bubble configuration of a nucleus (constrained to spherical symmetry) which takes place when Z²/A exceeds about 100.

  8. Thermodynamic models for vapor-liquid equilibria of nitrogen + oxygen + carbon dioxide at low temperatures

    NASA Astrophysics Data System (ADS)

    Vrabec, Jadran; Kedia, Gaurav Kumar; Buchhauser, Ulrich; Meyer-Pittroff, Roland; Hasse, Hans

    2009-02-01

    For the design and optimization of CO₂ recovery from alcoholic fermentation processes by distillation, models for vapor-liquid equilibria (VLE) are needed. Two such thermodynamic models, the Peng-Robinson equation of state (EOS) and a model based on Henry's law constants, are proposed for the ternary mixture N₂ + O₂ + CO₂. Pure substance parameters of the Peng-Robinson EOS are taken from the literature, whereas the binary parameters of the van der Waals one-fluid mixing rule are adjusted to experimental binary VLE data. The Peng-Robinson EOS describes both binary and ternary experimental data well, except at high pressures approaching the critical region. A molecular model is validated by simulation using binary and ternary experimental VLE data. On the basis of this model, the Henry's law constants of N₂ and O₂ in CO₂ are predicted by molecular simulation. An easy-to-use thermodynamic model, based on those Henry's law constants, is developed to reliably describe the VLE in the CO₂-rich region.
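
    The Peng-Robinson pure-component parameters and the van der Waals one-fluid mixing rule are standard, so a minimal sketch can be given. The critical constants below are textbook values, while the binary interaction parameter k_ij shown is a hypothetical placeholder for the values the paper adjusts to binary VLE data:

        import numpy as np

        R = 8.314  # J/(mol K)

        def pr_pure(tc, pc, omega, T):
            # Pure-component Peng-Robinson a(T) and b.
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / tc)))**2
            a = 0.45724 * R**2 * tc**2 / pc * alpha
            b = 0.07780 * R * tc / pc
            return a, b

        def vdw_mix(x, a_i, b_i, k_ij):
            # van der Waals one-fluid mixing rule; k_ij are the adjustable binaries.
            n = len(x)
            a = sum(x[i] * x[j] * np.sqrt(a_i[i] * a_i[j]) * (1.0 - k_ij[i, j])
                    for i in range(n) for j in range(n))
            return a, float(np.dot(x, b_i))

        # N2, O2, CO2 critical constants (literature); the k_ij value is illustrative.
        tc = np.array([126.2, 154.6, 304.1])     # K
        pc = np.array([3.39e6, 5.04e6, 7.38e6])  # Pa
        om = np.array([0.037, 0.022, 0.224])
        a_i, b_i = pr_pure(tc, pc, om, T=250.0)
        k_ij = np.zeros((3, 3)); k_ij[0, 2] = k_ij[2, 0] = -0.02  # hypothetical
        print(vdw_mix(np.array([0.02, 0.01, 0.97]), a_i, b_i, k_ij))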

  9. The Nuclear Thomas-Fermi Model

    DOE R&D Accomplishments Database

    Myers, W. D.; Swiatecki, W. J.

    1994-08-01

    The statistical Thomas-Fermi model is applied to a comprehensive survey of macroscopic nuclear properties. The model uses a Seyler-Blanchard effective nucleon-nucleon interaction, generalized by the addition of one momentum-dependent and one density-dependent term. The adjustable parameters of the interaction were fitted to shell-corrected masses of 1654 nuclei, to the diffuseness of the nuclear surface and to the measured depths of the optical model potential. With these parameters nuclear sizes are well reproduced, and only relatively minor deviations between measured and calculated fission barriers of 36 nuclei are found. The model determines the principal bulk and surface properties of nuclear matter and provides estimates for the more subtle, Droplet Model, properties. The predicted energy vs density relation for neutron matter is in striking correspondence with the 1981 theoretical estimate of Friedman and Pandharipande. Other extreme situations to which the model is applied are a study of Sn isotopes from ⁸²Sn to ¹⁷⁰Sn, and the rupture into a bubble configuration of a nucleus (constrained to spherical symmetry) which takes place when Z²/A exceeds about 100.

  10. The human placental perfusion model: a systematic review and development of a model to predict in vivo transfer of therapeutic drugs.

    PubMed

    Hutson, J R; Garcia-Bournissen, F; Davis, A; Koren, G

    2011-07-01

    Dual perfusion of a single placental lobule is the only experimental model for studying human placental transfer of substances in organized placental tissue. To date, there has been no attempt at a systematic evaluation of this model. The aim of this study was to systematically evaluate the perfusion model in predicting placental drug transfer and to develop a pharmacokinetic model to account for nonplacental pharmacokinetic parameters in the perfusion results. In general, the fetal-to-maternal drug concentration ratios matched well between placental perfusion experiments and in vivo samples taken at the time of delivery of the infant. After modeling for differences in maternal and fetal/neonatal protein binding and blood pH, the perfusion results were able to accurately predict in vivo transfer at steady state (R² = 0.85, P < 0.0001). Placental perfusion experiments can thus be used to predict placental drug transfer once these nonplacental parameters are adjusted for, and can be useful for assessing the risks and benefits of drug therapy in pregnancy.
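
    The abstract does not spell out the correction. A minimal sketch of one plausible steady-state adjustment, scaling the perfusion fetal-to-maternal (F:M) ratio by the maternal/fetal unbound fractions and, optionally, by Henderson-Hasselbalch ion trapping, illustrates the idea; the function and its defaults are assumptions, not the paper's fitted model:

        def predicted_fm_ratio(fm_perfusion, fu_maternal, fu_fetal,
                               ph_maternal=7.4, ph_fetal=7.3, pka=None, base=True):
            # Steady-state scaling of the perfusion F:M ratio: unbound drug
            # equilibrates, so total concentrations scale with binding differences.
            ratio = fm_perfusion * (fu_maternal / fu_fetal)
            if pka is not None:
                if base:   # weak base: ion trapping on the more acidic fetal side
                    trap = (1 + 10**(pka - ph_fetal)) / (1 + 10**(pka - ph_maternal))
                else:      # weak acid
                    trap = (1 + 10**(ph_fetal - pka)) / (1 + 10**(ph_maternal - pka))
                ratio *= trap
            return ratio

        # Hypothetical weak base: perfusion F:M of 0.7, modest binding difference
        print(predicted_fm_ratio(0.7, fu_maternal=0.10, fu_fetal=0.12, pka=9.4))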

  11. Drug delivery optimization through Bayesian networks.

    PubMed Central

    Bellazzi, R.

    1992-01-01

    This paper describes how Bayesian networks can be used in combination with compartmental models to plan recombinant human erythropoietin (r-HuEPO) delivery in the treatment of anemia in chronic uremic patients. Past measurements of hematocrit or hemoglobin concentration in a patient during therapy can be exploited to adjust the parameters of a compartmental model of erythropoiesis. This adaptive process allows more accurate patient-specific predictions, and hence more rational dosage planning. We describe a drug delivery optimization protocol based on our approach. Some results obtained on real data are presented. PMID:1482938
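
    As a schematic of the adaptive step described (a linear stand-in, not the paper's compartmental erythropoiesis model or its Bayesian network), the sketch below sharpens a Gaussian prior on a patient's response slope k with each new hematocrit measurement; all numbers are toy values:

        # Conjugate (Gaussian) update of one parameter k in hct = baseline + k * dose.
        def update_k(mean_k, var_k, baseline, dose, hct_obs, noise_var):
            resid = hct_obs - baseline
            post_var = 1.0 / (1.0 / var_k + dose**2 / noise_var)
            post_mean = post_var * (mean_k / var_k + dose * resid / noise_var)
            return post_mean, post_var

        mean_k, var_k = 0.3, 0.1**2        # population prior (toy values)
        baseline = 30.0                    # pre-treatment hematocrit, %
        for dose, hct in [(4.0, 31.0), (8.0, 33.5), (12.0, 35.0)]:  # dose in kIU
            mean_k, var_k = update_k(mean_k, var_k, baseline, dose, hct, 1.0**2)
        print(mean_k, var_k)               # patient-specific response slope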

  12. Bayes-Turchin analysis of x-ray absorption data above the Fe L₂,₃-edges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossner, H. H.; Schmitz, D.; Imperia, P.

    2006-10-01

    Extended x-ray absorption fine structure (EXAFS) data and magnetic EXAFS (MEXAFS) data were measured at two temperatures (180 and 296 K) in the energy region of the overlapping L-edges of bcc Fe grown on a V(110) crystal surface. In combination with a Bayes-Turchin data analysis procedure, these measurements enable the exploration of local crystallographic and magnetic structures. The analysis determined the atomic-like background together with the EXAFS parameters, which consisted of ten shell radii, the Debye-Waller parameters, separated into structural and vibrational components, and the third cumulant of the first scattering path. The vibrational components for 97 different scattering paths were determined by a two-parameter force-field model using a priori values adjusted to Born-von Karman parameters from inelastic neutron scattering data. The investigations of the Fe/V(110) system demonstrate that the simultaneous fitting of atomic background parameters and EXAFS parameters can be performed reliably. Using the L₂ and L₃ components extracted from the EXAFS analysis and the rigid-band model, the MEXAFS oscillations can only be described when the sign of the exchange energy is changed relative to the predictions of the Hedin-Lundqvist exchange and correlation functional.

  13. Global electrical heterogeneity as a predictor of cardiovascular mortality in men and women.

    PubMed

    Lipponen, Jukka A; Kurl, Sudhir; Laukkanen, Jari A

    2018-06-02

    The aim of this study was to investigate the contribution of depolarization and repolarization abnormalities, especially abnormalities in the global electrical heterogeneity of the heart, to cardiovascular disease (CVD) and all-cause mortality. Eight hundred and forty men and 911 women (average age 63 years) participated in this study, with an average follow-up of 14 years. Six electrocardiogram/vector electrocardiogram (ECG/VECG) markers (QRS duration, QTc interval, QRST angle, sum of absolute QRST integral (SAI QRST), T-wave roundness, and TV1 amplitude) were estimated from VECG measurements. Hazard ratios (HRs) for CVD events (164 deaths) and all-cause mortality (383 deaths) were calculated for the ECG parameters. ECG/VECG parameter models adjusted for clinical risk factors showed that the strongest predictors of CVD mortality among men were the QRST angle (HR 3.44, 95% confidence interval 2.12-5.36), QTc interval (2.72, 1.73-4.29), and T-wave roundness (2.09, 1.26-3.46). The strongest ECG/VECG parameters for CVD death among female participants were the QRST angle (2.47, 1.37-4.45), SAI QRST (2.37, 1.23-4.6), and QTc interval (2.15, 1.16-4.01). Multivariable adjusted models revealed that the strongest independent ECG predictors of CVD death were the QRST angle, QTc interval, resting heart rate, and T-wave roundness for men, and the QRST angle and SAI QRST for women. The QRST angle, QTc interval, resting heart rate, and T-wave roundness were associated with all-cause mortality in the male population, although none of the ECG/VECG parameters predicted all-cause mortality among women. Characteristics of global electrical heterogeneity, the QRST angle and QTc interval in men and the QRST angle and SAI QRST among women, were strong and independent risk markers for cardiovascular mortality. These parameters provide additional ECG tools for cardiovascular risk stratification.

  14. Parameters of Glucose and Lipid Metabolism Affect the Occurrence of Colorectal Adenomas Detected by Surveillance Colonoscopies

    PubMed Central

    Kim, Nam Hee; Suh, Jung Yul; Park, Jung Ho; Park, Dong Il; Cho, Yong Kyun; Sohn, Chong Il; Choi, Kyuyong

    2017-01-01

    Purpose: Limited data are available regarding the associations between parameters of glucose and lipid metabolism and the occurrence of metachronous adenomas. We investigated whether these parameters affect the occurrence of adenomas detected on surveillance colonoscopy. Materials and Methods: This longitudinal study was performed on 5289 subjects who underwent follow-up colonoscopy between 2012 and 2013 among 62171 asymptomatic subjects who underwent an initial colonoscopy for a health check-up between 2010 and 2011. The risk of adenoma occurrence was assessed using Cox proportional hazards modeling. Results: The mean interval between the initial and follow-up colonoscopy was 2.2±0.6 years. The occurrence of adenomas detected by the follow-up colonoscopy increased linearly with increasing quartiles of fasting glucose, hemoglobin A1c (HbA1c), insulin, homeostasis model assessment of insulin resistance (HOMA-IR), and triglycerides measured at the initial colonoscopy. These associations persisted after adjusting for confounding factors. The adjusted hazard ratios for adenoma occurrence comparing the fourth with the first quartiles of fasting glucose, HbA1c, insulin, HOMA-IR, and triglycerides were 1.50 [95% confidence interval (CI), 1.26–1.77; p-trend < 0.001], 1.22 (95% CI, 1.04–1.43; p-trend = 0.024), 1.22 (95% CI, 1.02–1.46; p-trend = 0.046), 1.36 (95% CI, 1.14–1.63; p-trend = 0.004), and 1.19 (95% CI, 0.99–1.42; p-trend = 0.041), respectively. In addition, increasing quartiles of low-density lipoprotein cholesterol and apolipoprotein B were associated with an increasing occurrence of adenomas. Conclusion: The levels of parameters of glucose and lipid metabolism were significantly associated with the occurrence of adenomas detected on surveillance colonoscopy. Improving the parameters of glucose and lipid metabolism through lifestyle changes or medications may be helpful in preventing metachronous adenomas. PMID:28120565
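
    As a schematic of the analysis described (Cox proportional hazards for adenoma occurrence by exposure quartile, adjusted for a covariate), the sketch below uses the lifelines package on synthetic data; the variable names and all values are illustrative, not the study's data:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "glucose_q": rng.integers(1, 5, n),   # fasting-glucose quartile (toy)
            "age": rng.normal(55, 8, n),          # adjustment covariate (toy)
        })
        # Toy follow-up times shortened by higher quartile, for illustration only
        df["time"] = rng.exponential(3.0 / df["glucose_q"])
        df["event"] = 1                           # adenoma detected at follow-up

        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        print(cph.hazard_ratios_)                 # HR per quartile, adjusted for age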

  15. Differential cross sections in a thick brane world scenario

    NASA Astrophysics Data System (ADS)

    Pedraza, Omar; Arceo, R.; López, L. A.; Cerón, V. E.

    2018-04-01

    The elastic differential cross section is calculated at low energies for the elements He and Ne using an effective 4D electromagnetic potential coming from the contribution of the massive Kaluza-Klein modes of the 5D vector field in a thick-brane scenario. The length scale in the potential is adjusted to compare with known experimental data and to set bounds on the model parameter.
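
    The abstract does not give the potential explicitly. Assuming the usual situation in which a massive Kaluza-Klein mode adds a Yukawa-type term V(r) = alpha * exp(-mu r)/r to the 4D potential, a minimal first-Born sketch shows how the differential cross section depends on the length scale 1/mu that is being bounded:

        import numpy as np

        def born_dcs_yukawa(theta, k, alpha, mu, m):
            # First-Born differential cross section for V(r) = alpha*exp(-mu*r)/r
            # (hbar = c = 1 units; theta in radians).
            q = 2.0 * k * np.sin(theta / 2.0)      # momentum transfer
            f = 2.0 * m * alpha / (q**2 + mu**2)   # Born scattering amplitude
            return f**2                            # dsigma/dOmega

        # Illustrative numbers only: vary mu (inverse KK length scale) and compare.
        angles = np.linspace(0.1, np.pi, 5)
        print(born_dcs_yukawa(angles, k=1.0, alpha=0.01, mu=10.0, m=1.0))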

  16. Analysis of earth rotation solution from Starlette

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Cheng, M. K.; Shum, C. K.; Eanes, R. J.; Tapley, B. D.

    1989-01-01

    Earth rotation parameter (ERP) solutions were derived from Starlette orbit analysis during the Main MERIT Campaign, using a consider-covariance analysis technique to assess the effects of errors on the polar motion solutions. The polar motion solution was then improved through the simultaneous adjustment of dynamical parameters representing identified dominant perturbing sources (such as the geopotential and ocean-tide coefficients). Finally, an improved ERP solution was derived using the gravity field model PTCF1, described by Tapley et al. (1986). The accuracy of the Starlette ERP solution was assessed by comparison with the LAGEOS-derived ERP solutions.

  17. Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System

    NASA Astrophysics Data System (ADS)

    Hu, Qing; Hu, Yuwei

    The mathematical model of the linear elevator maglev guiding system is analyzed in this paper. Because the linear elevator needs strong stability and robustness to run, the integer-order PID controller is extended to fractional order. To improve the steady-state precision, speed of response, and robustness of the system, and to improve the accuracy of the parameters in the fractional-order PIλDμ controller, fuzzy control is combined with fractional-order PIλDμ control, with fuzzy logic providing online parameter adjustment. The simulations reveal that the system has a faster response, higher tracking precision, and stronger robustness to disturbances.
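
    Neither the fuzzy rule base nor the discretization is given in the abstract. A minimal sketch of the PIλDμ part, using the Grünwald-Letnikov approximation for the fractional integral and derivative (the fuzzy layer that would retune the gains and orders online is omitted), shows one way such a controller can be implemented; all gains are placeholders:

        import numpy as np

        def gl_weights(order, n):
            # Grunwald-Letnikov binomial weights w_k = (-1)^k * C(order, k).
            w = np.ones(n)
            for k in range(1, n):
                w[k] = w[k - 1] * (1.0 - (order + 1.0) / k)
            return w

        class FractionalPID:
            # PI^lambda D^mu controller; lam and mu are the fractional orders
            # a fuzzy logic layer would adjust online (fuzzy rules omitted here).
            def __init__(self, kp, ki, kd, lam, mu, dt, horizon=512):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.lam, self.mu, self.dt = lam, mu, dt
                self.e = np.zeros(horizon)            # error history, newest first
                self.wi = gl_weights(-lam, horizon)   # order -lambda: integral
                self.wd = gl_weights(mu, horizon)     # order +mu: derivative

            def update(self, error):
                self.e = np.roll(self.e, 1)
                self.e[0] = error
                integ = self.dt**self.lam * np.dot(self.wi, self.e)
                deriv = self.dt**(-self.mu) * np.dot(self.wd, self.e)
                return self.kp * error + self.ki * integ + self.kd * deriv

        pid = FractionalPID(kp=2.0, ki=1.0, kd=0.1, lam=0.9, mu=0.8, dt=0.01)
        print(pid.update(1.0), pid.update(0.8))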

  18. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
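
    While the OIL generalization has no simple closed form, the Voigt special case the program includes can be computed accurately from the Faddeeva function; a minimal sketch of that standard identity (not the paper's algorithm):

        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            # Voigt profile with Gaussian std dev sigma and Lorentzian HWHM gamma,
            # via the Faddeeva function: V(x) = Re[w(z)] / (sigma * sqrt(2*pi)).
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

        print(voigt(np.linspace(-3.0, 3.0, 7), sigma=1.0, gamma=0.5))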

  19. Empirical scaling laws for coronal heating

    NASA Technical Reports Server (NTRS)

    Golub, L.

    1983-01-01

    The origins and uses of scaling laws in studies of stellar outer atmospheres are reviewed with particular emphasis on the properties of coronal loops. Some evidence is presented for a fundamental structuring of the solar corona and the thermodynamics of scaling laws are discussed. It is found that magnetic field-related scaling laws can be obtained by relating coronal pressure, temperature, and magnetic field strength. Available data validate this method. Some parameters of the theory, however, must be treated as adjustable, and it is considered necessary to examine data from other stars in order to determine the validity of the parameters. Using detailed observational data, the applicability of single loop models is examined.
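
    The abstract does not state which scaling law is meant; the canonical loop relation of this kind is the Rosner-Tucker-Vaiana (RTV) law linking peak temperature to pressure and loop length, sketched below in cgs units (the magnetic-field-related laws discussed in the text would add B-dependent relations of similar form):

        def rtv_peak_temperature(pressure, loop_half_length):
            # Rosner-Tucker-Vaiana scaling law: T_max ~ 1.4e3 * (p * L)**(1/3) kelvin,
            # with pressure p in dyn cm^-2 and loop half-length L in cm.
            return 1.4e3 * (pressure * loop_half_length) ** (1.0 / 3.0)

        print(rtv_peak_temperature(pressure=1.0, loop_half_length=5.0e9))  # ~2.4 MK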

  20. Improved distorted wave theory with the localized virial conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Y. K.; Zerrad, E.

    2009-12-01

    The distorted wave theory is operationally improved to treat the full collision amplitude, such that the corrections to the distorted wave Born amplitude can be systematically calculated. The localized virial conditions provide the tools necessary to test the quality of successive approximations at each stage and to optimize the solution. The details of the theoretical procedure are explained in concrete terms using a collisional ionization model and variational trial functions. For the first time, adjustable parameters associated with an approximate scattering solution can be fully determined by the theory. A small number of linear parameters are introduced to examine the convergence property and the effectiveness of the new approach.
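
    For the linear parameters mentioned at the end, the standard machinery is Rayleigh-Ritz: expanding the trial function in a fixed basis reduces the optimization to a generalized eigenproblem. The sketch below shows that generic step with toy matrix elements, not the paper's collisional-ionization model or its localized virial conditions:

        import numpy as np
        from scipy.linalg import eigh

        # psi = sum_i c_i * phi_i; stationarity of <psi|H|psi>/<psi|psi> gives
        # the generalized eigenproblem H c = E S c for the linear coefficients.
        H = np.array([[1.0, 0.2], [0.2, 2.0]])   # Hamiltonian matrix elements (toy)
        S = np.array([[1.0, 0.1], [0.1, 1.0]])   # overlap matrix elements (toy)
        E, C = eigh(H, S)
        print(E[0], C[:, 0])                     # optimal energy and coefficients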
