Ustaoglu, Eda; Lavalle, Carlo
2017-01-01
In most empirical applications, forecasting models for the analysis of industrial land focus on the relationship between current values of economic parameters and industrial land use. This paper tests this assumption by focusing on the dynamic relationship between current and lagged values of the ‘economic fundamentals’ and industrial land development. Little effort has yet been devoted to developing forecasting models of the demand for industrial land, beyond those applying static regressions or other statistical measures. In this research, we estimated a dynamic panel data model across 40 regions of the Netherlands from 2000 to 2008 to uncover the relationship between current and lagged values of economic parameters and industrial land development. Land-use regulations such as zoning policies, and other land-use restrictions such as nature protection areas and geographical limitations in the form of water bodies or sludge areas, are expected to affect the supply of land, which will in turn be reflected in industrial land market outcomes. Our results suggest that gross domestic product (GDP), industrial employment, gross value added (GVA), property price, and other parameters representing demand and supply conditions in the industrial market explain industrial land developments with high significance levels. It is also shown that, in contrast to current values, lagged values of the economic parameters have stronger relationships with industrial developments in the Netherlands. The findings suggest the use of lags between selected economic parameters and industrial land use in land forecasting applications. PMID:28877204
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating the unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance that leads to filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed carbon dioxide fluxes and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
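The joint state-parameter idea above can be sketched in a few lines. The one-pool toy carbon model, noise levels, and respiration coefficient below are invented for illustration and are not the study's partitioned eddy flux model; the parameter-evolution step follows the common Liu-West kernel shrinkage form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": carbon pool x losing carbon at unknown rate k, with
# time-varying input u so that the flux observations keep k identifiable.
k_true, x0, n_steps, n_ens = 0.05, 100.0, 200, 100
a = 0.98                                  # kernel smoothing factor
us = [1.0 + 0.8 * np.sin(t / 5.0) for t in range(n_steps)]

def step(x, k, u):                        # simple pool model: gain u, lose k*x
    return x + u - k * x

x, ys = x0, []
for u in us:                              # truth run + noisy flux obs y = k*x
    x = step(x, k_true, u)
    ys.append(k_true * x + rng.normal(0, 0.1))

ens_x = rng.normal(x0, 5.0, n_ens)        # state ensemble
ens_k = rng.uniform(0.01, 0.2, n_ens)     # parameter ensemble
R = 0.1 ** 2                              # observation error variance

for y, u in zip(ys, us):
    # Kernel smoothing: shrink toward the mean, add noise that restores the
    # variance, preventing both sudden jumps and variance collapse.
    m, v = ens_k.mean(), ens_k.var()
    ens_k = a * ens_k + (1 - a) * m + rng.normal(0, np.sqrt((1 - a**2) * v), n_ens)
    # Forecast step with model error
    ens_x = step(ens_x, ens_k, u) + rng.normal(0, 0.5, n_ens)
    # EnKF analysis on the predicted flux h = k*x, updating the joint vector
    h = ens_k * ens_x
    joint = np.vstack([ens_x, ens_k])
    cov_jh = (joint - joint.mean(axis=1, keepdims=True)) @ (h - h.mean()) / (n_ens - 1)
    gain = cov_jh / (h.var(ddof=1) + R)
    innov = y + rng.normal(0, 0.1, n_ens) - h     # perturbed observations
    joint = joint + gain[:, None] * innov
    ens_x, ens_k = joint[0], joint[1]

k_est = float(ens_k.mean())               # should settle near k_true
```

The same augmented-state pattern scales to the multi-parameter, multi-flux setting described in the abstract.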
Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu
2012-05-01
Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
Lothe, Anjali G; Sinha, Alok
2017-05-01
Leachate pollution index (LPI) is an environmental index which quantifies the pollution potential of leachate generated at a landfill site. Calculation of LPI is based on the concentrations of 18 parameters present in leachate. However, when not all 18 parameters are available, evaluation of the actual LPI value becomes difficult. In this study, a model has been developed to predict the actual value of LPI when only some of the parameters are available. This model generates eleven equations that help in determining the upper and lower limits of LPI; the geometric mean of these two values gives the LPI value. Application of this model to three landfill sites yields LPI values with an error of ±20% for ∑ᵢ wᵢ ≥ 0.6. Copyright © 2016 Elsevier Ltd. All rights reserved.
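The final combination step described above is a geometric mean of the two limits. A minimal sketch, with hypothetical bound values rather than figures from the three landfill sites:

```python
import math

def lpi_estimate(lpi_lower, lpi_upper):
    """Combine the model's lower and upper LPI limits via their geometric mean."""
    if lpi_lower <= 0 or lpi_upper < lpi_lower:
        raise ValueError("bounds must satisfy 0 < lower <= upper")
    return math.sqrt(lpi_lower * lpi_upper)

# Hypothetical upper/lower limits produced by the model's equations
est = lpi_estimate(16.0, 25.0)   # geometric mean of 16 and 25 is 20
```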
Noszczyk-Nowak, Agnieszka; Cepiel, Alicja; Janiszewski, Adrian; Pasławski, Robert; Gajek, Jacek; Pasławska, Urszula; Nicpoń, Józef
2016-01-01
Swine are a well-recognized animal model for human cardiovascular diseases. Despite the widespread use of the porcine model in experimental electrophysiology, no reference values for intracardiac electrical activity and conduction parameters determined during an invasive electrophysiology study (EPS) have yet been developed in this species. The aim of this study was to develop a set of normal values for intracardiac electrical activity and conduction parameters determined during an invasive EPS of swine. The study included 36 healthy domestic swine (24-40 kg body weight). EPS was performed under general anesthesia with midazolam, propofol and isoflurane. The reference values for intracardiac electrical activity and conduction parameters were calculated as arithmetic means ± 2 standard deviations. Reference values were determined for AH, HV and PA intervals, interatrial conduction time at native and paced rhythm, sinus node recovery time (SNRT), corrected sinus node recovery time (CSNRT), anterograde and retrograde Wenckebach points, and atrial, atrioventricular node and ventricular refractory periods. No significant correlations were found between the body weight or heart rate of the examined pigs and their electrophysiological parameters. The reference values presented here can be helpful in comparing the results of various studies, as well as in more accurately estimating the values of electrophysiological parameters that can be expected in a given experiment.
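The reference ranges described above are plain mean ± 2 SD intervals. A minimal sketch with hypothetical AH-interval measurements (the values below are invented, not the study's data):

```python
import statistics

def reference_interval(values):
    """Reference range computed as arithmetic mean +/- 2 sample standard deviations."""
    m = statistics.mean(values)
    s = statistics.stdev(values)      # sample SD (n - 1 denominator)
    return m - 2 * s, m + 2 * s

# Hypothetical AH-interval measurements (ms) from a group of pigs
ah = [62, 70, 65, 68, 72, 66, 64, 69, 71, 63]
low, high = reference_interval(ah)    # the normal range for this toy sample
```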
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. In rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Méndez-Cid, Francisco J; Lorenzo, José M; Martínez, Sidonia; Carballo, Javier
2017-02-15
The agreement among the results determined for the main parameters used in the evaluation of fat auto-oxidation was investigated in animal fats (butter fat, subcutaneous pig back-fat and subcutaneous ham fat). In addition, graduated colour scales representing the colour change during storage/ripening were developed for the three types of fat, and the values read on these scales were correlated with the values observed for the different parameters indicating fat oxidation. In general, good correlation among the values of the different parameters was observed (e.g. TBA value correlated with peroxide value: r=0.466 for butter and r=0.898 for back-fat). A reasonable correlation was also observed between the values read on the developed colour scales and the values of the other parameters determined (e.g. r=0.320 and r=0.793 with peroxide value for butter and back-fat, respectively, and r=0.767 and r=0.498 with TBA value for back-fat and ham fat, respectively). Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
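The final step above, choosing a parameter combination from its ROC point, can be sketched as a nearest-to-ideal selection. The parameter names and ROC values below are hypothetical, and distance to the ideal point (FPR=0, TPR=1) is one common optimality criterion, offered here as an illustration rather than the exact criterion used in the paper:

```python
import math

# Hypothetical (false-positive rate, true-positive rate) ROC points, one per
# feature-detector parameter combination, scored against estimated ground truth
roc_points = {
    ("thr=0.2", "win=3"): (0.30, 0.85),
    ("thr=0.4", "win=3"): (0.10, 0.80),
    ("thr=0.4", "win=5"): (0.12, 0.92),
    ("thr=0.6", "win=5"): (0.05, 0.60),
}

def best_parameters(points):
    """Pick the combination whose ROC point lies nearest the ideal (FPR=0, TPR=1)."""
    return min(points, key=lambda k: math.hypot(points[k][0], 1.0 - points[k][1]))

best = best_parameters(roc_points)    # ("thr=0.4", "win=5") for the data above
```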
Scheduling on the basis of the research of dependences among the construction process parameters
NASA Astrophysics Data System (ADS)
Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga
2017-10-01
The dependences among construction process parameters are investigated in this article: the average integrated qualification value of the shift, the number of workers per shift, and the average daily amount of completed work are considered on the basis of correlation coefficients. Basic data for the research of dependences among the above-stated parameters were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). The Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function describing the relationship among the considered construction process parameters is derived. The development function makes it possible to select the optimal quantitative and qualitative (qualification) composition of the brigade link for work during the next period of time, according to a preset amount of work. A function of the optimized amounts of work, which reflects the interrelation of the key parameters of the construction process, is also developed. Values of this function should be used as the average standard for scheduling the storming periods of construction.
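Fitting a Cobb-Douglas relation is usually done by log-linear least squares: taking logs turns y = A·qᵃ·nᵇ into a linear model. The sketch below recovers assumed exponents from synthetic shift data; the variable names, values, and exponents are illustrative only, not the article's measurements:

```python
import numpy as np

# Hypothetical shift records: qualification level q, workers per shift n,
# and completed daily amount of work y (assumed form y = A * q^alpha * n^beta)
q = np.array([3.0, 3.5, 4.0, 4.2, 4.5, 5.0])
n = np.array([8, 10, 9, 12, 11, 14], dtype=float)
y = 2.0 * q**0.4 * n**0.7           # synthetic, noise-free data

# Logs linearize the Cobb-Douglas form: ln y = ln A + alpha*ln q + beta*ln n
X = np.column_stack([np.ones_like(q), np.log(q), np.log(n)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
A_hat, alpha_hat, beta_hat = np.exp(coef[0]), coef[1], coef[2]
```

With noise-free synthetic data the regression recovers A=2.0, alpha=0.4 and beta=0.7 exactly, up to floating-point error; with real shift records the fit quality is what the correlation coefficients in the abstract measure.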
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or to computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact through the replacement of multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design, and different product configuration states in operation, to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc
2011-08-01
To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, rather than simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis enabled us to distinguish 13 of the 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher values were observed for at least one of the corresponding quantiles. Analysis of modifications in morphologic parameters via distribution analysis proved promising for screening for bronchopulmonary cancer from sputum.
Quinn, Terrance; Sinkala, Zachariah
2014-01-01
We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
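One standard way to obtain Gumbel parameters from empirical alignment scores is the method of moments; the sketch below is a generic illustration on synthetic scores, not the authors' mixture-distribution and BLOSUM-based derivation. The significance of an observed score then follows from the Gumbel survival function:

```python
import math
import random

def gumbel_fit_moments(scores):
    """Method-of-moments estimates for Gumbel location mu and scale beta.

    Gumbel mean = mu + gamma*beta, variance = (pi*beta)^2 / 6.
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - 0.5772156649 * beta          # Euler-Mascheroni constant
    return mu, beta

def gumbel_pvalue(x, mu, beta):
    """P(score >= x) under the fitted Gumbel null model."""
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

# Check on synthetic Gumbel "alignment scores" drawn via the inverse CDF
random.seed(1)
true_mu, true_beta = 5.0, 2.0
sample = [true_mu - true_beta * math.log(-math.log(random.random()))
          for _ in range(20000)]
mu_hat, beta_hat = gumbel_fit_moments(sample)
```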
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess the hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R² and Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect the hydrological and hydrogeological situation in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax and filter parameters of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, an optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using the optimized BFImax and filter parameter resulted in an R² value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33%, and estimated NPS pollutant loadings increased by more than 20%. This indicates that L-THIA direct runoff estimates can be off by 33%, and NPS pollutant loading estimates by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
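The Eckhardt filter referenced above has a well-known recursive form, b_t = ((1 − BFImax)·a·b_{t−1} + (1 − a)·BFImax·Q_t) / (1 − a·BFImax), with baseflow constrained not to exceed total streamflow. A sketch (the streamflow series is made up, and initialising baseflow to the first flow value is a simplifying assumption):

```python
def eckhardt_baseflow(streamflow, bfi_max=0.80, a=0.98):
    """Eckhardt recursive digital filter separating baseflow from total flow."""
    baseflow = [streamflow[0]]            # simple initialisation assumption
    denom = 1.0 - a * bfi_max
    for q in streamflow[1:]:
        b = ((1 - bfi_max) * a * baseflow[-1] + (1 - a) * bfi_max * q) / denom
        baseflow.append(min(b, q))        # baseflow cannot exceed total flow
    return baseflow

# Direct runoff = total flow minus filtered baseflow, here using the
# watershed-optimised values from the abstract (BFImax=0.491, a=0.987)
flow = [5.0, 20.0, 15.0, 10.0, 8.0, 6.0, 5.5, 5.2]
base = eckhardt_baseflow(flow, bfi_max=0.491, a=0.987)
direct = [q - b for q, b in zip(flow, base)]
```

Swapping the default parameters (0.80, 0.98) for the optimised pair changes the baseflow/direct-runoff split, which is exactly the sensitivity the study quantifies.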
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships have been developed th...
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most strongly influences the structural performance of forged components is the average grain size. In the present paper, a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of average grain size with respect to the input parameters has been found.
Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.
2008-01-01
This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movements of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movements of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing of the fault parameters, with extensive input from USGS and SCEC scientists.
NASA Astrophysics Data System (ADS)
Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.
2018-03-01
The article proposes a forecasting method that allows, based on given values of entropy and of the error levels of the first and second kind, determination of the allowable time horizon for forecasting the development of the characteristic parameters of a complex information system. The main feature of the method is that changes in the characteristic parameters of the information system's development are expressed as increments in its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system's entropy, is reached, the characteristic parameters of the system and the depth of prediction in time are estimated. The resulting values of the characteristics are optimal, since at that moment the system possesses the best entropy ratio as a measure of the degree of organization and orderliness of its structure. To construct a method for estimating the depth of prediction, it is expedient to use the maximum entropy principle.
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the research report period are included. Among all the research results reported, note should be made of the specific investigation of the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics will determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface, corresponding to the smallest value for the error, are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
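The projection idea above can be sketched numerically: evaluate the error along the instrument-determined curve in (SNR, parameter) space and take its minimum. The error model and the SNR-parameter trade-off below are invented for illustration, not the report's actual instrument characteristics:

```python
import numpy as np

# Toy error model for deconvolution (assumption): noise amplification falls
# with SNR and aperture w, while resolution loss grows with w.
def error(snr, w):
    return 1.0 / (snr * w) + 0.5 * w**2

# Instrument characteristics constrain SNR and w to a curve in parameter
# space; here we assume a hypothetical trade-off w = SNR / 25.
snr = np.linspace(5, 50, 2000)
w_curve = snr / 25.0
err_on_curve = error(snr, w_curve)

# Optimum operating point = smallest error along the instrument's curve,
# which need not coincide with the absolute minimum of the full surface.
i = err_on_curve.argmin()
best_snr, best_w = float(snr[i]), float(w_curve[i])
```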
Recovering Parameters of Johnson's SB Distribution
Bernard R. Parresol
2003-01-01
A new parameter recovery model for Johnson's SB distribution is developed. This latest alternative approach permits recovery of the range and both shape parameters. Previous models recovered only the two shape parameters. Also, a simple procedure for estimating the distribution minimum from sample values is presented. The new methodology...
Evaluation of stream water quality in Atlanta, Georgia, and the surrounding region (USA)
Peters, N.E.; Kandell, S.J.
1999-01-01
A water-quality index (WQI) was developed from historical data (1986-1995) for streams in the Atlanta Region and augmented with 'new' and generally more comprehensive biweekly data on four small urban streams, representing an industrial area, a developed medium-density residential area and developing and developed low-density residential areas. Parameter WQIs were derived from percentile ranks of individual water-quality parameter values for each site by normalizing the constituent ranks for values from all sites in the area for a base period, i.e. 1990-1995. WQIs were developed primarily for nutrient-related parameters due to data availability. Site WQIs, which were computed by averaging the parameter WQIs, range from 0.2 (good quality) to 0.8 (poor quality), and increased downstream of known nutrient sources. Also, annual site WQI decreases from 1986 to 1995 at most long-term monitoring sites. Annual site WQI for individual parameters correlated with annual hydrological characteristics, particularly runoff, precipitation quantity, and water yield, reflecting the effect of dilution on parameter values. The WQIs of the four small urban streams were evaluated for the core-nutrient-related parameters, parameters for specific dissolved trace metal concentrations and sediment characteristics, and a species diversity index for the macro-invertebrate taxa. The site WQI for the core-nutrient-related parameters used in the retrospective analysis was, as expected, the worst for the industrial area and the best for the low-density residential areas. However, macro-invertebrate data indicate that although the species at the medium-density residential site were diverse, the taxa at the site were for species tolerant of degraded water quality. 
Furthermore, although a species-diversity index indicates no substantial difference between the two low-density residential areas, the number for macro-invertebrates for the developing area was much less than that for the developed area, consistent with observations of recent sediment problems probably associated with construction in the basin. However, sediment parameters were similar for the two sites suggesting that the routine biweekly measurements may not capture the short-term increases in sediment transport associated with rainstorms. The WQI technique is limited by the number and types of parameters included in it, the general conditions of those parameters for the range of conditions in area streams, and by the effects of external factors, such as hydrology, and therefore, should be used with caution.
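The percentile-rank construction described above can be sketched in a few lines; the constituent names and values below are hypothetical, and a real WQI would also need direction-of-quality handling for constituents where higher values mean better quality.

```python
def percentile_rank(value, all_values):
    """Fraction of base-period values (pooled across all sites) at or
    below `value` -- the parameter WQI for one constituent at one site."""
    below = sum(1 for v in all_values if v <= value)
    return below / len(all_values)

def site_wqi(site_params, base_period):
    """Site WQI = average of the parameter WQIs, so 0 ~ good quality and
    1 ~ poor quality for constituents where higher concentration is worse."""
    ranks = [percentile_rank(site_params[p], base_period[p]) for p in site_params]
    return sum(ranks) / len(ranks)

# Hypothetical base-period (1990-1995 analogue) values across all sites,
# and one urban site's current values.
base = {"nitrate": [0.2, 0.5, 0.9, 1.4, 2.0],
        "phosphorus": [0.01, 0.05, 0.1, 0.3, 0.6]}
urban = {"nitrate": 1.4, "phosphorus": 0.3}
wqi = site_wqi(urban, base)
```

Because every parameter WQI is a rank against the same pooled base period, site WQIs computed this way are directly comparable across sites and years, which is what permits the downstream and 1986-1995 trend comparisons in the abstract.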
Estimating parameter values of a socio-hydrological flood model
NASA Astrophysics Data System (ADS)
Holkje Barendrecht, Marlies; Viglione, Alberto; Kreibich, Heidi; Vorogushyn, Sergiy; Merz, Bruno; Blöschl, Günter
2018-06-01
Socio-hydrological modelling studies published so far show that dynamic coupled human-flood models are a promising tool to represent the phenomena and the feedbacks in human-flood systems. So far these models are mostly generic and have not been developed and calibrated to represent specific case studies. We believe that applying and calibrating these types of models to real-world case studies can help us to further develop our understanding of the phenomena that occur in these systems. In this paper we propose a method to estimate the parameter values of a socio-hydrological model and we test it by applying it to an artificial case study. We postulate a model that describes the feedbacks between floods, awareness and preparedness. After simulating hypothetical time series with a given combination of parameters, we sample a few data points for our variables and estimate the parameters given these data points using Bayesian inference. The results show that, if we are able to collect data for our case study, we would, in theory, be able to estimate the parameter values for our socio-hydrological flood model.
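The estimation strategy (simulate with known parameters, sample a few points, recover the parameters by Bayesian inference) can be sketched with a deliberately toy stand-in model: a single decay rate for loss of flood awareness between events, estimated with a Metropolis sampler. The model, noise level, and prior below are all illustrative assumptions, not the authors' flood-awareness-preparedness model.

```python
import math
import random

random.seed(0)

# Toy stand-in: awareness decays exponentially with rate `a` after a flood.
def model(a, t):
    return math.exp(-a * t)

true_a, sigma = 0.3, 0.05
times = [1, 2, 4, 7, 10]                 # "few data points", as in the paper
data = [model(true_a, t) + random.gauss(0, sigma) for t in times]

def log_post(a):
    """Log-posterior: Gaussian likelihood, uniform prior on (0, 2]."""
    if a <= 0 or a > 2:
        return -math.inf
    return -sum((d - model(a, t)) ** 2 for d, t in zip(data, times)) / (2 * sigma ** 2)

# Metropolis sampler for the posterior of `a`.
a, lp, chain = 0.5, log_post(0.5), []
for _ in range(20000):
    cand = a + random.gauss(0, 0.05)
    lp_cand = log_post(cand)
    if math.log(random.random()) < lp_cand - lp:   # accept/reject step
        a, lp = cand, lp_cand
    chain.append(a)
post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

The point the abstract makes carries over: with only a handful of observations, the posterior still concentrates near the generating value, so sparse real-world data can in principle constrain such models.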
Path loss variation of on-body UWB channel in the frequency bands of IEEE 802.15.6 standard.
Goswami, Dayananda; Sarma, Kanak C; Mahanta, Anil
2016-06-01
The wireless body area network (WBAN) has gained tremendous attention among researchers and academicians for its envisioned applications in healthcare services. Ultra-wideband (UWB) radio technology is considered an excellent air interface for communication among body area network devices. Characterisation and modelling of channel parameters are essential prerequisites for the development of a reliable communication system. The path loss of the on-body UWB channel for each frequency band defined in the IEEE 802.15.6 standard is experimentally determined. The parameters of the path loss model are statistically determined by analysing measurement data. Both line-of-sight and non-line-of-sight channel conditions are considered in the measurements. Variations of parameter values with the size of the human body are analysed, along with the variation of parameter values with the surrounding environment. It is observed that the parameters of the path loss model vary with the frequency band as well as with body size and surrounding environment. The derived parameter values are specific to the particular frequency bands of the IEEE 802.15.6 standard and will be useful for the development of efficient UWB WBAN systems.
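Path loss models of this kind are typically the log-distance form PL(d) = PL0 + 10·n·log10(d/d0), whose parameters (intercept PL0 and exponent n) are fitted to measurements by least squares. The sketch below fits that generic form to synthetic data; the actual on-body models and parameter values in the paper are band-specific and may include shadowing terms not shown here.

```python
import math

def fit_path_loss(distances_cm, pl_db, d0=10.0):
    """Least-squares fit of PL(d) = PL0 + 10*n*log10(d/d0).
    Returns (PL0 at reference distance d0, path loss exponent n)."""
    x = [10 * math.log10(d / d0) for d in distances_cm]
    y = pl_db
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Synthetic on-body "measurements" generated with PL0 = 40 dB, n = 3
# (hypothetical values; on-body exponents are often larger than free space).
d = [10, 20, 40, 80]
pl = [40 + 10 * 3 * math.log10(di / 10) for di in d]
pl0, exp_n = fit_path_loss(d, pl)
```

Repeating such a fit per frequency band, body size, and environment is what yields the band-specific parameter tables the abstract describes.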
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
Aspen succession in the Intermountain West: A deterministic model
Dale L. Bartos; Frederick R. Ward; George S. Innis
1983-01-01
A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...
Modeling polyvinyl chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
Neural network models were constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input-layer parameters. The measured values of contact angle were used as the output-layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
Application of the precipitation-runoff model in the Warrior coal field, Alabama
Kidd, Robert E.; Bossong, C.R.
1987-01-01
A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficient for analysis of waste package and drip shield damage due to vibratory ground motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations.
The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]) which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
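The core of the PVI computation, scanning each uncertain risk-model parameter over its credible range and recording where risk is maximized, can be sketched as follows. The risk model below is hypothetical (a leak-rate parameter whose risk contribution rises sharply near the top of its credible range), chosen only to show the mechanics.

```python
def parameter_vulnerability(risk_fn, lo, hi, steps=200):
    """Scan one uncertain risk-model parameter over its credible range
    [lo, hi] and return (worst_value, worst_risk). A high worst_risk in
    some sub-range flags a potential anomaly vulnerability there, to be
    prioritized for further investigation."""
    worst_v, worst_r = lo, float("-inf")
    for i in range(steps + 1):
        v = lo + (hi - lo) * i / steps
        r = risk_fn(v)
        if r > worst_r:
            worst_v, worst_r = v, r
    return worst_v, worst_r

# Hypothetical anomaly risk model: risk spikes when the parameter
# approaches the upper end of its credible range [0, 1].
risk = lambda p: 1e-6 + 1e-3 * (p ** 8)
v, r = parameter_vulnerability(risk, 0.0, 1.0)
```

Ranking parameters by their worst-case risk over the credible range, rather than by risk at a point estimate, is what lets the method surface vulnerabilities that exist only "if indeed the true parameter value lies in that range."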
The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio
NASA Astrophysics Data System (ADS)
Roquier, Gerard
2017-06-01
The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures constituted by bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), spherical particles numerically simulated (20 values), round natural particles (125 values) and crushed particles (335 values) with correlation coefficients equal to respectively 99.0%, 98.7%, 97.8%, 96.4% and mean deviations equal to respectively 0.007, 0.006, 0.007, 0.010.
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
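The combination of Latin hypercube sampling with likelihood-based reweighting can be sketched with a toy prevalence model. Everything model-specific below is an illustrative assumption: the SIS-style equilibrium prevalence stands in for the stochastic herd simulation, and the Gaussian kernel bandwidth is a tuning choice, but the sampling and reweighting steps mirror the scheme the abstract describes.

```python
import math
import random

random.seed(2)

def latin_hypercube(n, ranges):
    """Latin hypercube sample: each parameter's range is split into n
    equal strata, each used exactly once (stratum order shuffled
    independently per parameter)."""
    cols = []
    for lo, hi in ranges:
        strata = list(range(n))
        random.shuffle(strata)
        cols.append([lo + (hi - lo) * (s + random.random()) / n for s in strata])
    return list(zip(*cols))

# Toy stand-in for the herd model: equilibrium prevalence from a
# transmission rate beta and a removal rate gamma (SIS-like).
def predicted_prevalence(beta, gamma):
    return max(0.0, 1.0 - gamma / beta)

observed = 0.25
samples = latin_hypercube(500, [(0.1, 1.0), (0.05, 0.5)])

# Reweight each sampled parameter combination by how well the model
# reproduces the prevalence datum (assumed Gaussian kernel, bandwidth 0.05).
weights = [math.exp(-(predicted_prevalence(b, g) - observed) ** 2 / (2 * 0.05 ** 2))
           for b, g in samples]
wsum = sum(weights)
post_prev = sum(w * predicted_prevalence(b, g)
                for w, (b, g) in zip(weights, samples)) / wsum
```

Once the weights exist, any other model output (e.g. the effect of a test-and-cull strategy) can be summarized as a weighted average over the same sample, which is how the reweighted outputs are reused for control scenarios.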
Meier, Kimberly; Sum, Brian; Giaschi, Deborah
2016-10-01
Global motion sensitivity in typically developing children depends on the spatial (Δx) and temporal (Δt) displacement parameters of the motion stimulus. Specifically, sensitivity for small Δx values matures at a later age, suggesting it may be the most vulnerable to damage by amblyopia. To explore this possibility, we compared motion coherence thresholds of children with amblyopia (7-14 years old) to age-matched controls. Three Δx values were used with two Δt values, yielding six conditions covering a range of speeds (0.3-30 deg/s). We predicted children with amblyopia would show normal coherence thresholds for the same parameters on which 5-year-olds previously demonstrated mature performance, and elevated coherence thresholds for parameters on which 5-year-olds demonstrated immaturities. Consistent with this, we found that children with amblyopia showed deficits with amblyopic-eye viewing compared to controls for small and medium Δx values, regardless of Δt value. The fellow eye showed similar results at the smaller Δt. These results confirm that global motion perception in children with amblyopia is particularly deficient at the finer spatial scales that typically mature later in development. An additional implication is that carefully designed stimuli that are adequately sensitive must be used to assess global motion function in developmental disorders. Stimulus parameters for which performance matures early in life may not reveal global motion perception deficits.
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can include yield stresses and r-values obtained from in-plane tension for different angles with the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests, also performed for different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention to improve the numerical fit.
Accuracy of Time Phasing Aircraft Development using the Continuous Distribution Function
2015-03-26
Statistical checks on the regression assumptions are reported: Breusch-Pagan tests for constant variance fail to reject the null hypothesis (p-values of 0.5264 and 0.6911, and 0.5176 for the Weibull scale parameter β), a Shapiro-Wilk W test gives Prob. < W = 0.9849, and the beta shape parameter α is examined for influential data.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
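For the one-compartment IV-bolus case, the conversion from non-compartmental variables to compartmental parameters has a closed form, which the sketch below uses in place of the paper's Solver-based optimisation (the route of administration and the variable set here are assumed for illustration).

```python
import math

def back_analysis_1cpt(dose, auc, t_half):
    """Convert non-compartmental variables (AUC, terminal half-life) to
    one-compartment IV-bolus parameters via the standard relations:
    CL = Dose/AUC, k = ln(2)/t_half, V = CL/k."""
    cl = dose / auc              # clearance
    k = math.log(2) / t_half     # elimination rate constant
    v = cl / k                   # volume of distribution
    return {"CL": cl, "V": v, "k": k}

# Hypothetical values: 100 mg dose, AUC = 50 mg*h/L, half-life = 4 h.
p = back_analysis_1cpt(dose=100.0, auc=50.0, t_half=4.0)
```

The two-compartment case handled by the BA spreadsheet has no equally simple closed form, which is why the authors resort to Solver-based fitting there.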
2007-03-01
Column experiments were used to obtain model parameters. Cost data used in the model were based on conventional GAC installations, as modified to...
NASA Astrophysics Data System (ADS)
Norton, P. A., II; Haj, A. E., Jr.
2014-12-01
The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
Nayak, Chitresh; Singh, Amit; Chaudhary, Himanshu; Unune, Deepak Rajendra
2017-10-23
Technological advances in prosthetics have drawn researchers' attention to the design and development of sockets that can sustain maximum pressure without soft tissue damage, skin breakdown, or painful sores. Numerous studies have been reported in the area of pressure measurement at the limb/socket interface; however, the relationship between an amputee's physiological parameters and the pressure developed at the limb/socket interface has not yet been studied. Therefore, the purpose of this work is to investigate the effects of patient-specific physiological parameters, viz. height, weight, and stump length, on pressure development at the transtibial prosthetic limb/socket interface. Initially, the pressure values at the limb/socket interface were clinically measured during stance and walking conditions for different patients using strain gauges placed at critical locations of the stump. The measured maximum pressure data, related to the patients' physiological parameters, were used to develop an artificial neural network (ANN) model. The effects of the physiological parameters on pressure development at the limb/socket interface were examined using the ANN model. The results indicated that weight and stump length significantly affect the maximum pressure values. The outcomes of this work could be an important platform for the design and development of patient-specific prosthetic sockets that can endure the maximum pressure conditions during stance and ambulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.
A technique for estimating time of concentration and storage coefficient values for Illinois streams
Graf, Julia B.; Garklavs, George; Oberg, Kevin A.
1982-01-01
Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)e to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
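The two-step technique reduces to a few lines of arithmetic; the basin values below are hypothetical, and the units of L and S (miles and the slope units used in the report) are assumed.

```python
def estimate_tc_r(length_mi, slope, r_ratio):
    """Two-step estimate of time of concentration (TCe) and storage
    coefficient (Re): first the sum (TC + R)e = 35.2 * L**0.39 * S**-0.78,
    then split it using the regional ratio r_ratio = R / (TC + R)."""
    tc_plus_r = 35.2 * length_mi ** 0.39 * slope ** -0.78
    re = r_ratio * tc_plus_r
    tce = tc_plus_r - re
    return tce, re

# Hypothetical basin: L = 10 mi, S = 5, regional R/(TC + R) = 0.6.
tce, re = estimate_tc_r(10.0, 5.0, 0.6)
```

Because the split depends only on the regional ratio, the same (TC + R)e can yield quite different TCe and Re in different physiographic regions, which is the point of the second step.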
NASA Technical Reports Server (NTRS)
Grams, G. W.
1981-01-01
A laser nephelometer developed for airborne measurements of polar scattering diagrams of atmospheric aerosols was flown on the NCAR Sabreliner aircraft to obtain data on light-scattering parameters for stratospheric aerosol particles over Alaska during July 1979. Observed values of the angular variation of scattered-light intensity were compared with those calculated for different values of the asymmetry parameter g in the Henyey-Greenstein phase function. The observations indicate that, for the time and location of the experiments, the Henyey-Greenstein phase function could be used to calculate polar scattering diagrams to within experimental errors for an asymmetry parameter value of 0.49 plus or minus 0.07.
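The Henyey-Greenstein phase function referred to above has the closed form p(θ) = (1 − g²) / [4π (1 + g² − 2g cos θ)^(3/2)]. The sketch below evaluates it for the reported g = 0.49 and numerically checks the two defining properties: unit integral over the sphere and asymmetry parameter ⟨cos θ⟩ = g.

```python
import math

def henyey_greenstein(theta, g):
    """Henyey-Greenstein phase function, normalised over the sphere."""
    return (1 - g ** 2) / (4 * math.pi * (1 + g ** 2 - 2 * g * math.cos(theta)) ** 1.5)

# Midpoint-rule integration over the sphere for g = 0.49, the value
# reported for the Alaskan stratospheric aerosol flights.
g = 0.49
n = 100000
norm = asym = 0.0
for i in range(n):
    th = math.pi * (i + 0.5) / n                      # midpoint in theta
    w = 2 * math.pi * math.sin(th) * (math.pi / n)    # solid-angle weight
    p = henyey_greenstein(th, g)
    norm += p * w
    asym += p * math.cos(th) * w
```

Fitting measured polar scattering diagrams, as in the abstract, amounts to choosing the g for which this one-parameter curve best matches the observed angular variation of scattered intensity.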
Distributed activation energy model parameters of some Turkish coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunes, M.; Gunes, S.K.
2008-07-01
A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the mean of the activation energy distribution varies between 218 and 248 kJ/mol, and the standard deviation of the activation energy distribution varies between 32 and 70 kJ/mol. The correlations between the kinetic parameters of the distributed activation energy model and certain properties of the coals have been investigated.
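In a distributed activation energy model with Gaussian f(E), the unreacted volatile fraction at temperature T under a constant heating rate β is 1 − V/V* = ∫ f(E) exp[−(A/β) ∫ exp(−E/RT′) dT′] dE. The sketch below evaluates that double integral numerically; the frequency factor A and heating rate are assumed illustrative values, while the mean and standard deviation of f(E) sit in the ranges reported for these coals.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def daem_unreacted(T, E_mean=230.0, E_sd=50.0, A=1e13, beta=10.0 / 60.0):
    """Unreleased volatile fraction at temperature T (K) for a Gaussian
    distributed-activation-energy model. E in kJ/mol; A (1/s) and the
    heating rate beta (K/s, here 10 K/min) are assumed values. The inner
    temperature integral runs from 300 K, evaluated by the midpoint rule."""
    nE, nT = 200, 200
    dE = 8 * E_sd / nE                      # integrate f(E) over +/- 4 sd
    total = 0.0
    for i in range(nE):
        E = E_mean - 4 * E_sd + (i + 0.5) * dE
        f = (math.exp(-((E - E_mean) ** 2) / (2 * E_sd ** 2))
             / (E_sd * math.sqrt(2 * math.pi)))
        h = (T - 300.0) / nT
        inner = sum(math.exp(-E / (R * (300.0 + (j + 0.5) * h)))
                    for j in range(nT)) * h
        total += f * math.exp(-(A / beta) * inner) * dE
    return total

r_cold, r_mid, r_hot = (daem_unreacted(350.0),
                        daem_unreacted(700.0),
                        daem_unreacted(1100.0))
```

The broad f(E) (large E_sd) is what spreads devolatilisation over a wide temperature window: low-E reactions finish early while high-E reactions persist, so the unreacted fraction falls gradually rather than in a single step.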
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
Engine monitoring display study
NASA Technical Reports Server (NTRS)
Hornsby, Mary E.
1992-01-01
The current study is part of a larger NASA effort to develop displays for an engine-monitoring system to enable the crew to monitor engine parameter trends more effectively. The objective was to evaluate the operational utility of adding three types of information to the basic Boeing Engine Indicating and Crew Alerting System (EICAS) display formats: alphanumeric alerting messages for engine parameters whose values exceed caution or warning limits; alphanumeric messages to monitor engine parameters that deviate from expected values; and a graphic depiction of the range of expected values for current conditions. Ten training and line pilots each flew 15 simulated flight scenarios with five variants of the basic EICAS format; these variants included different combinations of the added information. The pilots detected engine problems more quickly when engine alerting messages were included in the display; adding a graphic depiction of the range of expected values did not affect detection speed. The pilots rated both types of alphanumeric messages (alert and monitor parameter) as more useful and easier to interpret than the graphic depiction. Integrating engine parameter messages into the EICAS alerting system appears to be both useful and preferred.
Micromechanical Modeling of Storage Particles in Lithium Ion Batteries
NASA Astrophysics Data System (ADS)
Purkayastha, Rajlakshmi Tarun
The effect of stress on storage particles within a lithium ion battery, while acknowledged, is not understood very well. In this work three non-dimensional parameters were identified which govern the stress response within a spherical storage particle. These parameters are developed using material properties such as the diffusion coefficient, particle radius, partial molar volume and Young's modulus. Stress maps are then generated for various values of these parameters for fixed rates of insertion, applying boundary conditions similar to those found in a battery. Stress and concentration profiles for various values of these parameters show that the coupling between stress and concentration is magnified depending on the values of the parameters. These maps can be used for different materials, depending on the value of the dimensionless parameters. The value of maximum stress generated is calculated for extraction as well as insertion of lithium into the particle. The model was then used to study ellipsoidal particles in order to ascertain the effect of geometry on the maximum stress within the particle. By performing a parameter study, we can identify those materials for which particular aspect ratios of ellipsoids are more beneficial in terms of reducing stress. We find that the stress peaks at certain aspect ratios, mostly at 2 and 1/2. A parameter study was also performed on cubic particles. The values of maximum stresses for both insertion and extraction of lithium were plotted as contour plots. It was seen that the material parameters influenced the location of the maximum stress, with the maximum stress occurring either at the center of the edge between two faces or at the center of a face. Newer materials such as silicon are being touted as lithium storage materials for batteries due to their higher capacity.
Their tendency to rapidly lose capacity in a short period of time has led to a variety of designs, such as the use of carbon nanotubes or of coatings, to mitigate the large expansion and stresses that lead to spalling of the material. We therefore extended the results for spherical storage particles to include the presence of an additional layer of material surrounding the storage particle. We performed a parameter study to identify which material properties are most beneficial in reducing stresses within the particle, and the results were tabulated. It was seen that thicker layers can lead to mitigation of the maximum stresses. A simple fracture analysis was carried out and the material parameters most likely to cause crack growth were identified. Finally, an integrated 2-D model of a lithium ion battery was developed to study the mechanical stress in storage particles as a function of material properties. The effect of morphology on the stress and lithium concentration is studied for the case of extraction of lithium in terms of the previously developed non-dimensional parameters. Particles were studied both functioning in isolation and in closely packed systems. The results show that the particle's distance from the separator, in combination with the material properties of the particle, is critical in predicting the stress generated within the particle.
Lätt, Evelin; Jürimäe, Jaak; Haljaste, Kaja; Cicchella, Antonio; Purge, Priit; Jürimäe, Toivo
2009-02-01
The aim of the study was to examine the development of specific physical, physiological, and biomechanical parameters in 29 young male swimmers, for whom measurements were made three times over two consecutive years. During the 400-m front-crawl swimming, the energy cost of swimming and stroking parameters were assessed. Peak oxygen consumption (VO2 peak) was assessed by means of the backward-extrapolation technique, recording VO2 during the first 20 sec. of the recovery period after a maximal trial over the 400-m distance. Swimming performance at different points of physical maturity was mainly related to increases in body height and arm-span values among the physical parameters, improvement in sport-specific VO2 peak value among the physiological characteristics, and improvement in stroke indices among the biomechanical parameters. In addition, biomechanical factors best characterised the 400-m swimming performance, followed by physical and physiological factors, during the 2-yr. study period for the young male swimmers.
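The backward-extrapolation step can be shown with a tiny numerical example; the sampling times and VO2 readings below are invented, and a linear fit stands in for whatever functional form the original technique specifies.

```python
import numpy as np

# Hypothetical early-recovery VO2 readings, extrapolated back to t = 0
# (the end of exercise) to estimate VO2 peak.
t = np.array([5.0, 10.0, 15.0, 20.0])    # s into recovery (assumed sampling)
vo2 = np.array([3.6, 3.3, 3.05, 2.8])    # L/min, invented readings
slope, intercept = np.polyfit(t, vo2, 1) # linear model of early recovery
vo2_peak = intercept                      # extrapolated value at t = 0
```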
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters even with missing data, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.
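The wavelet-network idea (decompose the input, then learn on the coefficients) can be sketched with a one-level Haar transform and a linear model standing in for the ANN. The data, window width, and one-step-ahead target below are all assumptions for illustration.

```python
import numpy as np

# Synthetic series: one-step-ahead prediction from Haar coefficients of a window.
rng = np.random.default_rng(1)
n, w = 500, 8                                   # series length, window width
series = np.sin(np.linspace(0, 20, n)) + 0.05 * rng.standard_normal(n)

def haar(window):
    a = (window[0::2] + window[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (window[0::2] - window[1::2]) / np.sqrt(2)   # detail coefficients
    return np.concatenate([a, d])

X = np.array([haar(series[i:i + w]) for i in range(n - w)])
y = series[w:]                                  # one-step-ahead target
coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # linear stand-in for the ANN
pred = X @ coef
r = np.corrcoef(pred, y)[0, 1]
```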
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) present an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable to estimate hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
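The core of such an estimation model (a linear regression from pump-intrinsic signals to a hemodynamic quantity) can be sketched as follows. The motor current, piston position, and filling values are simulated, not measured, and the coefficients are invented.

```python
import numpy as np

# Hypothetical calibration data: filling as a linear function of pump signals.
rng = np.random.default_rng(2)
n = 200
current = rng.uniform(0.5, 2.0, n)              # motor current (a.u., simulated)
position = rng.uniform(0.0, 1.0, n)             # piston position (a.u., simulated)
filling = 10 + 4 * current + 6 * position + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), current, position])
beta, *_ = np.linalg.lstsq(X, filling, rcond=None)   # fit the estimation model
pred = X @ beta
rel_err = np.abs(pred - filling) / filling           # relative prediction error
```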
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly; Albert Malkhasyan
2010-06-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
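A minimal version of time-dependent Bayesian inference for a failure rate can be sketched with a log-linear trend, lambda(t) = exp(a + b*t), sampled by a bare-bones Metropolis chain. The cited work uses open-source MCMC software rather than this toy sampler, and all data below are synthetic.

```python
import numpy as np

# Synthetic yearly failure counts with an upward rate trend.
rng = np.random.default_rng(3)
years = np.arange(15)
counts = rng.poisson(np.exp(np.log(2.0) + 0.15 * years))

def log_post(a, b):
    lam = np.exp(a + b * years)
    # Poisson log-likelihood plus weak normal priors on a and b
    return np.sum(counts * np.log(lam) - lam) - (a**2 + b**2) / 50.0

a, b = 0.0, 0.0
lp = log_post(a, b)
b_draws = []
for _ in range(20000):
    a_p = a + 0.1 * rng.standard_normal()
    b_p = b + 0.02 * rng.standard_normal()
    lp_p = log_post(a_p, b_p)
    if np.log(rng.uniform()) < lp_p - lp:   # Metropolis accept/reject
        a, b, lp = a_p, b_p, lp_p
    b_draws.append(b)
b_mean = float(np.mean(b_draws[5000:]))     # posterior mean of the trend slope
```

A posterior for b concentrated away from zero is exactly the kind of signal that the piecewise-constant assumption would mask.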
Critical state of sand matrix soils.
Marto, Aminaton; Tan, Choy Soon; Makhtar, Ahmad Mahir; Kung Leong, Tiong
2014-01-01
Critical State Soil Mechanics (CSSM) is a globally recognised framework, and the critical states for sand and clay are both well established. Nevertheless, the development of the critical state of sand matrix soils is lacking. This paper discusses the development of critical state lines and the corresponding critical state parameters for the investigated material, sand matrix soils, using sand-kaolin mixtures. The output of this paper can be used as an interpretation framework for future research on the liquefaction susceptibility of sand matrix soils. A strain-controlled triaxial test apparatus was used to apply monotonic loading to the reconstituted soil specimens. All tested soils were subjected to isotropic consolidation and sheared under undrained conditions until the critical state was ascertained. Based on the results of 32 test specimens, the critical state lines for eight different sand matrix soils were developed together with the corresponding values of the critical state parameters M, λ, and Γ. The ranges of the values of M, λ, and Γ are 0.803-0.998, 0.144-0.248, and 1.727-2.279, respectively. These values are comparable to the critical state parameters of river sand and kaolin clay. However, the relationship between fines percentages and these critical state parameters is too scattered to be correlated.
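The three parameters have standard definitions that can be recovered by simple fits to end-of-test (critical state) points: M is the slope of the critical state line in q-p' space, and λ and Γ come from e = Γ - λ ln p'. The triaxial values below are synthetic, chosen inside the ranges reported for the sand-kaolin mixtures.

```python
import numpy as np

# Synthetic critical-state points (invented, consistent with the reported ranges).
p = np.array([50.0, 100.0, 200.0, 400.0])   # mean effective stress p', kPa
q = 0.9 * p                                  # deviator stress at critical state
e = 2.0 - 0.2 * np.log(p)                    # void ratio at critical state

M = np.polyfit(p, q, 1)[0]                   # slope of the CSL in q-p' space
lam, Gamma = np.polyfit(np.log(p), e, 1)     # fit e = Gamma - lambda * ln p'
lam = -lam                                   # sign convention: e decreases with p'
```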
NASA Astrophysics Data System (ADS)
Alipour, M. H.; Kibler, Kelly M.
2018-02-01
A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
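The runoff-efficiency criterion used here, the Nash-Sutcliffe efficiency (NSE), has a simple closed form; the flow values below are invented for illustration, and the soft-data check is indicated only by the residual definition quoted in the entry (centroid of the soft value minus the calibrated value).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # invented observed flows
sim = np.array([1.2, 2.8, 2.1, 4.6, 4.3])   # invented simulated flows
score = nse(obs, sim)
# Parameter residual as defined in the entry (hypothetical numbers):
residual = 250.0 - 230.0   # soft-data centroid minus calibrated value
```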
Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.
Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang
2016-01-01
Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
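Where kij enters is the classical van der Waals one-fluid mixing rule for the attraction parameter, a_mix = ΣΣ x_i x_j √(a_i a_j)(1 − k_ij). A sketch of the pure-component Peng-Robinson parameters and this mixing rule follows; the critical properties are approximate literature values for methane and ethane, and the kij value is purely illustrative.

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_a_b(Tc, Pc, omega, T):
    """Peng-Robinson pure-component a(T) and b from critical properties."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

T = 250.0
a1, b1 = pr_a_b(190.6, 4.599e6, 0.011, T)    # methane (approx. properties)
a2, b2 = pr_a_b(305.3, 4.872e6, 0.099, T)    # ethane (approx. properties)
x = np.array([0.4, 0.6])
a = np.array([a1, a2])
b = np.array([b1, b2])
kij = np.array([[0.0, 0.003], [0.003, 0.0]]) # illustrative kij, not fitted
a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1 - kij[i, j])
            for i in range(2) for j in range(2))
b_mix = float(x @ b)
```

With kij = 0 the rule collapses to a_mix = (Σ x_i √a_i)²; a small positive kij reduces the cross term slightly, which is the knob the correlation tunes.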
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-05-01
AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
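The reference-axis idea, physiological age evolving through the ordered states of a finite automaton with stochastic sojourn times, can be caricatured in a few lines. This is a toy, not AmapSim's actual automaton: the number of states, the geometric sojourn distributions, and the advance probabilities are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_states = 5                         # discretized physiological ages (assumed)
p_advance = [0.5, 0.4, 0.3, 0.2]     # per-cycle chance of moving to the next state

def simulate_bud(max_cycles=100):
    """One bud's physiological-age trajectory along the reference axis."""
    state, path = 0, [0]
    for _ in range(max_cycles):
        # Geometric sojourn in each state; metamorphosis only moves forward.
        if state < n_states - 1 and rng.uniform() < p_advance[state]:
            state += 1
        path.append(state)
    return path

path = simulate_bud()
```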
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
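The POD step itself is an SVD of a snapshot matrix. The sketch below collects snapshots of a simple parameter-dependent function, extracts a basis, and projects a held-out parameter's snapshot onto it; the projection stands in for the paper's regression-based emulation of new-parameter snapshots, and the function family is invented.

```python
import numpy as np

# Snapshots of a parameter-dependent "solution" u(x; mu) = exp(-mu*x), column-wise.
x = np.linspace(0, 1, 200)
params = [0.5, 1.0, 1.5, 2.0, 2.5]
snapshots = np.column_stack([np.exp(-mu * x) for mu in params])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                         # retain three POD modes

new = np.exp(-1.75 * x)                  # solution at an unseen parameter value
approx = basis @ (basis.T @ new)         # projection onto the POD basis
err = np.linalg.norm(new - approx) / np.linalg.norm(new)
```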
Xing, W. W.; Triantafyllidis, V.
2017-01-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327
Lyapunov dimension formula for the global attractor of the Lorenz system
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.; Korzhemanova, N. A.; Kusakin, D. V.
2016-12-01
The exact Lyapunov dimension formula for the Lorenz system for a positive measure set of parameters, including the classical values, was first obtained analytically by G.A. Leonov in 2002. Leonov used a construction technique based on special Lyapunov-type functions, which he had developed in 1991. Later it was shown that considering a larger class of Lyapunov-type functions permits proving the validity of this formula for all parameters of the system such that all the equilibria of the system are hyperbolically unstable. In the present work, the validity of the Lyapunov dimension formula is proved for a wider variety of parameter values, including all parameters that satisfy the classical physical limitations.
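For reference, the closed form commonly attributed to Leonov for the Lyapunov dimension of the Lorenz global attractor is dim_L = 3 − 2(σ + b + 1)/(σ + 1 + √((σ − 1)² + 4σr)); this statement of the formula should be checked against the original paper. At the classical parameter values it evaluates to roughly 2.401:

```python
import math

def lyapunov_dimension(sigma, r, b):
    """Lyapunov dimension formula for the Lorenz global attractor
    (as reported in the related literature; verify against the source)."""
    return 3 - 2 * (sigma + b + 1) / (
        sigma + 1 + math.sqrt((sigma - 1) ** 2 + 4 * sigma * r))

d = lyapunov_dimension(10.0, 28.0, 8.0 / 3.0)   # classical parameter values
```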
Superduck Marine Meteorological Experiment Data Summary: Mean Values and Turbulence Parameters.
1988-08-01
This report summarizes the mean values and turbulence parameters of meteorological measurements made during an experiment at Duck, NC, during Sept-Oct 1986. The measurements were made to calculate wind stress in the nearshore area. Wind stress is a primary forcing function for nearshore waves...measure. Only in recent years has technology made it possible to accurately measure its fluctuations. The krypton hygrometer is a recent development
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.
1982-04-01
The variable frame rate (VFR) transmission methodology developed, implemented, and tested during 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters, extracted from the input speech at a fixed frame rate, is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames that lie in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
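The first VFR approach can be sketched directly: transmit a frame only when its parameters differ sufficiently from the last transmitted frame, and reconstruct the gaps by holding the last value. The parameter tracks and threshold below are synthetic stand-ins for LPC parameters.

```python
import numpy as np

# Synthetic slowly drifting parameter tracks (100 frames, 4 parameters).
rng = np.random.default_rng(5)
frames = np.cumsum(0.1 * rng.standard_normal((100, 4)), axis=0)

threshold = 0.3                        # illustrative change threshold
sent, last = [0], frames[0]
recon = [frames[0]]
for i in range(1, len(frames)):
    if np.max(np.abs(frames[i] - last)) > threshold:
        last = frames[i]               # transmit this frame
        sent.append(i)
    recon.append(last)                 # receiver holds last transmitted values
recon = np.array(recon)
rate = len(sent) / len(frames)         # fraction of frames actually transmitted
max_err = np.max(np.abs(recon - frames))
```

By construction the hold error never exceeds the threshold, which is the bit-rate versus fidelity trade the method exploits.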
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.
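The role of the normalized anomaly can be shown on a simplified model, g(x) = A·z/(x² + z²)^q, where q = 1.5, 1.0, and 0.5 correspond to the sphere, horizontal cylinder, and semi-infinite vertical cylinder respectively. This is a hypothetical stand-in for the paper's exact formulation, and z is assumed known here so that each normalized value yields q in closed form rather than by root finding.

```python
import numpy as np

# Synthetic sphere-like anomaly (q = 1.5) on a profile; z assumed known.
A, z, q_true = 100.0, 5.0, 1.5
x = np.array([2.0, 4.0, 6.0, 8.0])
g = A * z / (x**2 + z**2) ** q_true
g0 = A * z / (z**2) ** q_true              # anomaly value at the origin

ratio = g / g0                             # normalized residual anomaly
# g(x)/g(0) = (z^2 / (x^2 + z^2))^q, so q follows from each point; average them.
q_est = float(np.mean(np.log(ratio) / np.log(z**2 / (x**2 + z**2))))
```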
Monaural room acoustic parameters from music and speech.
Kendrick, Paul; Cox, Trevor J; Li, Francis F; Zhang, Yonggang; Chambers, Jonathon A
2008-07-01
This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.
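The classical baseline both methods are measured against, reverberation time from a decay curve, can be sketched via Schroeder backward integration and a line fit. The synthetic noise-free exponential decay below is an idealization; the paper's estimators work on decay phases of real speech and music.

```python
import numpy as np

# Idealized exponential impulse response with decay constant tau.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
tau = 0.05
h = np.exp(-t / tau)

edc = np.cumsum((h**2)[::-1])[::-1]           # Schroeder energy decay curve
edc_db = 10 * np.log10(edc / edc[0])
mask = (edc_db <= -5) & (edc_db >= -25)       # fit over the -5 to -25 dB range
slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # decay rate in dB/s
rt60 = -60.0 / slope                          # time to decay by 60 dB
```

For a pure exponential, rt60 is about 6.91·tau, so here roughly 0.345 s.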
Trybula, Elizabeth M.; Cibin, Raj; Burks, Jennifer L.; ...
2014-06-13
The Soil and Water Assessment Tool (SWAT) is increasingly used to quantify hydrologic and water quality impacts of bioenergy production, but crop-growth parameters for candidate perennial rhizomatous grasses (PRG) Miscanthus × giganteus and upland ecotypes of Panicum virgatum (switchgrass) are limited by the availability of field data. Crop-growth parameter ranges and suggested values were developed in this study using agronomic and weather data collected at the Purdue University Water Quality Field Station in northwestern Indiana. During the process of parameterization, the comparison of measured data with the conceptual representation of PRG growth in the model led to three changes in the SWAT 2009 code: the harvest algorithm was modified to maintain belowground biomass over winter, plant respiration was extended via modified-DLAI to better reflect maturity and leaf senescence, and nutrient uptake algorithms were revised to respond to temperature, water, and nutrient stress. Parameter values and changes to the model resulted in simulated biomass yield and leaf area index consistent with reported values for the region. Code changes in the SWAT model improved nutrient storage during the dormancy period and nitrogen and phosphorus uptake by both switchgrass and Miscanthus.
Analysis of the methods for assessing socio-economic development level of urban areas
NASA Astrophysics Data System (ADS)
Popova, Olga; Bogacheva, Elena
2017-01-01
The present paper provides a targeted analysis of current approaches (ratings) to assessing the socio-economic development of urban areas. The survey focuses on identifying standardized methodologies for constructing area assessment techniques, with the goal of developing a system for intelligent monitoring, dispatching, building management, scheduling, and effective management of an administrative-territorial unit. Such a system is characterized by a complex hierarchical structure, including tangible and intangible properties (parameters, attributes). Investigating the abovementioned methods should increase an administrative-territorial unit's attractiveness for investors and residents. The research aims at studying methods for evaluating the socio-economic development level of territories of the Russian Federation. Experimental and theoretical territory-estimation methods were reviewed. A complex analysis of the characteristics of the areas was carried out and evaluation parameters were determined. Integral indicators (resulting rating criteria values) as well as the overall rankings (parameters, characteristics) were analyzed. An inventory of the most widely used partial indicators (parameters, characteristics) of urban areas was compiled. The homogeneity of the resulting rating criteria values was verified and confirmed by determining the root mean square deviation, i.e. the divergence of indices. The principal shortcomings of the assessment methodologies were revealed, and assessment methods with enhanced effectiveness and homogeneity were proposed.
NASA Astrophysics Data System (ADS)
Zhou, H.; Liu, W.; Ning, T.
2017-12-01
Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert in 2015 to estimate actual evapotranspiration. Subsequently, this approach has been verified, applied and developed by many scholars. The estimation, impact factors and correlation analysis of the parameter alpha (αe) of this approach have become important aspects of the research. According to the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using meteorological data from 123 sites on the Loess Plateau and its surrounding areas. The mean spatial values of precipitation (P), ETpm and ETpo for 13 catchments were then obtained by a CoKriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated using the catchment water balance equation at the hydrological-year scale (May to April of the following year), ignoring the change in catchment water storage. Thus, the parameter was estimated, and its relationships with P, ETpm and the aridity index (ETpm/P) were further analyzed. The results showed that the general range of the annual parameter value was 0.385-1.085, with an average value of 0.751 and a standard deviation of 0.113. The mean annual parameter αe showed distinct spatial characteristics, with lower values in the north and higher values in the south. The annual-scale parameter was linearly related to annual P (R2=0.89) and ETpm (R2=0.49), while it exhibited a power-function relationship with the aridity index (R2=0.83).
Since ETpm is a variable within the nonlinear advection-aridity approach, its effect is already incorporated, so a relationship between precipitation and the parameter (αe = 1.0×10^-3·P + 0.301) was developed. The value of αe in this study is lower than those in the published literature; the reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach on the Loess Plateau has shown promising results.
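As a minimal sketch of the two computations named above: the water-balance estimate of actual evapotranspiration, and the fitted precipitation relation for αe. The P and R values are hypothetical; only the αe relation and its reported mean come from the abstract.

```python
import numpy as np

# Hypothetical annual values (mm) for a catchment; numbers are illustrative.
P = np.array([450.0, 520.0, 610.0])   # precipitation
R = np.array([40.0, 55.0, 80.0])      # runoff

# Water-balance actual ET over a hydrological year, ignoring storage change.
ET_actual = P - R

# Fitted precipitation relation for alpha_e reported in the abstract.
def alpha_e(P_mm):
    return 1.0e-3 * P_mm + 0.301

print(alpha_e(450.0))  # 0.751, matching the reported mean annual value
```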
NWP model forecast skill optimization via closure parameter variations
NASA Astrophysics Data System (ADS)
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
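The EPPES idea of drawing parameter values from a proposal distribution and feeding back their relative merits can be illustrated with a deliberately simplified toy problem. The model, likelihood, and moment-update rule below are stand-ins for exposition, not the operational algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.5                                    # the "unknown" closure parameter
obs = true_theta + rng.normal(0, 0.1, size=50)      # verifying observations

# Proposal distribution for the parameter, deliberately mis-specified at first.
mu, sigma = 0.0, 2.0
for generation in range(20):
    theta = rng.normal(mu, sigma, size=32)          # one value per ensemble member
    # Relative merit of each member's "forecast" against the observations.
    ll = np.array([-0.5 * np.sum((obs - t) ** 2) / 0.1**2 for t in theta])
    w = np.exp(ll - ll.max())
    w /= w.sum()
    # Feed the merits back into the proposal (importance-weighted moments).
    mu = np.sum(w * theta)
    sigma = max(np.sqrt(np.sum(w * (theta - mu) ** 2)), 0.05)

print(mu)  # the proposal mean drifts toward the true parameter
```

The floor on sigma keeps the proposal from collapsing prematurely, a practical concern the real method handles with a proper hierarchical update.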
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. 
This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
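The inferential, sensitivity-based uncertainty computation favored above can be sketched for a toy linear model, where the Jacobian of sensitivities is exact. The model and numbers are illustrative, not output from UCODE-2005 or PEST:

```python
import numpy as np

# Toy model linear in its two parameters (intercept, slope), so the
# sensitivity (Jacobian) matrix is exact and constant.
x = np.linspace(0.0, 10.0, 12)
J = np.column_stack([np.ones_like(x), x])
true_p = np.array([5.0, -0.3])
rng = np.random.default_rng(1)
obs = J @ true_p + rng.normal(0, 0.05, size=x.size)

# Least-squares estimate and its linear (inferential) covariance: the
# inexpensive computation that regression-based codes derive from sensitivities.
p_hat, res, *_ = np.linalg.lstsq(J, obs, rcond=None)
dof = x.size - 2
s2 = float(res[0]) / dof                 # error variance estimate
cov = s2 * np.linalg.inv(J.T @ J)        # parameter covariance
stderr = np.sqrt(np.diag(cov))           # basis for linear confidence intervals
```

The two matrix operations here replace the hundreds of model runs a global sampling method would need, which is the efficiency argument made in the abstract.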
Continued development of a detailed model of arc discharge dynamics
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Ives, S. T.
1982-01-01
Using a previously developed set of codes (SEMC, CASCAD, ACORN), a parametric study was performed to quantify the parameters which describe the development of a single-electron-initiated avalanche into a negative tip streamer. The electron distribution function in Teflon is presented for values of the electric field in the range of four hundred million volts/meter to four billion volts/meter. A formulation of the scattering parameters is developed which shows that the transport can be represented by three independent variables. The distribution of ionization sites is used to initiate an avalanche. The self-consistent evolution of the avalanche is computed over the parameter range of the scattering set.
Oxidative status parameters in children with urinary tract infection.
Petrovic, Stanislava; Bogavac-Stanojevic, Natasa; Kotur-Stevuljevic, Jelena; Peco-Antic, Amira; Ivanisevic, Ivana; Ivanisevic, Jasmina; Paripovic, Dusan; Jelic-Ivanovic, Zorana
2014-01-01
Urinary tract infection (UTI) is one of the most common bacterial infectious diseases in children. The aim of this study was to determine the total prooxidant and antioxidant capacity of children with UTI, as well as changes in oxidative status parameters according to acute inflammation persistence and acute kidney injury (AKI) development. The patients enrolled in the study comprised 50 Caucasian children (median age 6 months) with UTI. Total oxidant status (TOS), total antioxidant status (TAS), oxidative stress index (OSI), the inflammation marker C-reactive protein (CRP) and the renal function parameters urea and creatinine were analyzed in patients' sera. According to the duration of inflammation during UTI, TAS values were significantly higher (0.99 vs. 0.58 mmol/L, P = 0.017) and OSI values were significantly lower (0.032 vs. 0.041 AU, P = 0.037) in the subjects with longer duration of inflammation than in the subjects with shorter duration of inflammation. We did not find a significant difference in basal values of oxidative status parameters according to AKI development. OSI values could detect the simultaneous change of TAS and TOS due to the change in the oxidative-antioxidant balance during the recovery of children with UTI. TAS and OSI as markers of oxidative stress during UTI are sensitive to the accompanying inflammatory condition. Further investigations are needed to evaluate whether TAS, TOS and OSI could be used to monitor disease severity in children with UTI.
NASA Astrophysics Data System (ADS)
Kavimani, V.; Prakash, K. Soorya
2017-11-01
This paper deals with the fabrication of a reduced graphene oxide (r-GO) reinforced Magnesium Metal Matrix Composite (MMC) through a novel solvent-based powder metallurgy route. Investigation of the basic and functional properties of the developed MMC reveals that the addition of r-GO improves microhardness up to 64 HV, although a decrease in specific wear rate is also observed. Visualization of the worn-out surfaces through SEM images clearly shows the occurrence of plastic deformation and the presence of wear debris due to ploughing action. A Taguchi-coupled Artificial Neural Network (ANN) technique is adopted to arrive at optimal values of the input parameters (load, reinforcement weight percentage, sliding distance and sliding velocity) and thereby achieve the minimal target output, viz. specific wear rate. ANOVA of the influence of each input parameter on specific wear rate reveals that the load acting on the pin has the major influence (38.85%), followed by r-GO wt.% (25.82%). The ANN model developed to predict the specific wear rate from variations of the input parameters provides better predictability, with an R-value of 98.4%, compared with the outcomes of the regression model.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ are reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction.
We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
Udhayarasu, Madhanlal; Ramakrishnan, Kalpana; Periasamy, Soundararajan
2017-12-01
Periodical monitoring of renal function, specifically for subjects with a history of diabetes or hypertension, would prevent them from entering a chronic kidney disease (CKD) condition. The recent increase in case numbers, possibly due to food habits or lack of physical exercise, necessitates a rapid kidney-function monitoring system. Presently, kidney function is determined by evaluating the glomerular filtration rate (GFR), which mainly depends on the serum creatinine value, demographic parameters and an ethnic value. Here we attempt to develop an ethnic parameter based on skin texture for every individual. When this value is used in GFR computation, the results agree well with GFR obtained through the standard modification of diet in renal disease and CKD epidemiology collaboration equations. Once the correlation between CKD and skin texture is established, a classification tool using an artificial neural network is built to categorise CKD level based on demographic values and the parameter obtained through skin texture (without using creatinine). When tested, this network gives results almost on par with the network trained with demographic and creatinine values. The results of this Letter demonstrate the possibility of non-invasively determining kidney function, and hence of making a device that would readily assess kidney function even at home.
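For reference, the four-variable MDRD study equation that the Letter compares against can be sketched with the ethnicity coefficient exposed as a parameter, which is the slot the proposed skin-texture-derived value would fill. The function name and interface are illustrative:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, ethnic_factor=1.0):
    """Four-variable MDRD study equation, eGFR in mL/min/1.73 m^2.

    ethnic_factor is exposed as a free parameter: the original equation uses
    fixed population coefficients (e.g. 1.212 for African-American subjects),
    whereas the Letter proposes deriving this value per individual from
    skin texture.
    """
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    return gfr * ethnic_factor
```

Because the ethnic factor enters multiplicatively, any per-individual estimate of it rescales the computed eGFR directly, which is why its accuracy matters.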
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Hayat, T.; Hussain, Zakir; Alsaedi, A.; Farooq, M.
2016-01-01
This article examines the effects of homogeneous-heterogeneous reactions and Newtonian heating in magnetohydrodynamic (MHD) flow of Powell-Eyring fluid by a stretching cylinder. The nonlinear partial differential equations of momentum, energy and concentration are reduced to nonlinear ordinary differential equations. Convergent solutions of the momentum, energy and reaction equations are developed by using the homotopy analysis method (HAM). This method is very efficient for the development of series solutions of highly nonlinear differential equations. It does not depend on any small or large parameter, unlike other methods, i.e., the perturbation method, the δ-perturbation expansion method, etc. We obtain more accurate results as we increase the order of approximation. Effects of different parameters on the velocity, temperature and concentration distributions are sketched and discussed. A comparison of the present study with previously published work is also made in the limiting sense. Numerical values of the skin friction coefficient and Nusselt number are also computed and analyzed. It is noticed that the flow accelerates for large values of the Powell-Eyring fluid parameter. Further, the temperature profile decreases and the concentration profile increases when the Powell-Eyring fluid parameter is enhanced. The concentration distribution is a decreasing function of the homogeneous reaction parameter, while the opposite influence of the heterogeneous reaction parameter appears. PMID:27280883
An extended harmonic balance method based on incremental nonlinear control parameters
NASA Astrophysics Data System (ADS)
Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.
2017-02-01
A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.
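The role of a 'non-linear control parameter' can be illustrated on a single-DOF Duffing oscillator with one-term harmonic balance, where the cubic coefficient μ is stepped up incrementally from the linear case and each step reuses the previous response as its starting point. This is a simplified stand-in for the MDOF sensitivity formulation in the paper:

```python
import math

# One-harmonic balance of a Duffing oscillator
#   x'' + 2*zeta*w0*x' + w0^2*x + mu*x^3 = F*cos(W*t),
# with mu playing the role of the non-linear control parameter:
# mu = 0 is the effectively linear system, and nonlinearity grows with mu.
zeta, w0, W, F = 0.05, 1.0, 1.2, 0.5

def hb_amplitude(mu, A0):
    """Fixed-point iteration on the one-term harmonic-balance amplitude."""
    A = A0
    for _ in range(200):
        k_eff = w0**2 - W**2 + 0.75 * mu * A**2      # effective stiffness
        A = F / math.sqrt(k_eff**2 + (2 * zeta * w0 * W)**2)
    return A

# Linear (mu = 0) response, then incremental steps in the control parameter,
# each warm-started from the previous solution.
A = F / math.sqrt((w0**2 - W**2)**2 + (2 * zeta * w0 * W)**2)
for mu in [0.01, 0.02, 0.03]:
    A = hb_amplitude(mu, A)
```

Warm-starting each increment from the converged neighbor is what keeps the iteration on one solution branch, mirroring the paper's use of previous responses to update sensitivities.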
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
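A linear feature combination, the simplest member of the estimator classes discussed, already shows how an advisor picks among parameter choices. The feature values and coefficients below are invented for illustration; Facet's actual estimator is a polynomial learned from examples:

```python
import numpy as np

# Hypothetical feature values f_i(A) computed on the alignments produced by
# three candidate parameter choices (rows), for three features (columns).
features = np.array([
    [0.8, 0.6, 0.7],   # alignment from parameter choice 0
    [0.5, 0.9, 0.4],   # choice 1
    [0.9, 0.8, 0.9],   # choice 2
])
# Coefficients, in practice learned by minimizing error vs. true accuracy.
coeffs = np.array([0.5, 0.3, 0.2])

# The advisor scores each candidate alignment and keeps the best one.
estimated_accuracy = features @ coeffs
best_choice = int(np.argmax(estimated_accuracy))
```

The advisor never needs a reference alignment at decision time: it ranks the candidate alignments purely by their estimated accuracy.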
ERIC Educational Resources Information Center
Zierer, Ernesto
This monograph discusses the problem of the language barrier in scientific and technological development in terms of several parameters describing the flow of scientific information from one language to another. The numerical values of the language barrier parameters of the model are calculated in the field of information on second language…
The heuristic value of redundancy models of aging.
Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon
2015-11-01
Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight in the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms.
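The contrast between the two interventions can be reproduced directly at the level of the Gompertz hazard μ(t) = a·exp(b·t). The parameter values below are illustrative, not fitted to the Drosophila data:

```python
import math

def gompertz_mortality(t, a, b):
    # Gompertz hazard: baseline vulnerability a, actuarial aging rate b.
    return a * math.exp(b * t)

ages = range(0, 60, 10)

# Two stylized interventions, mirroring the contrast reported by Mair et al.:
# dietary restriction as an instantaneous drop in baseline mortality (a),
# temperature as a change in the aging rate (b).
base = [gompertz_mortality(t, 1e-4, 0.10) for t in ages]
diet = [gompertz_mortality(t, 5e-5, 0.10) for t in ages]   # a halved
temp = [gompertz_mortality(t, 1e-4, 0.08) for t in ages]   # b reduced

# Halving a shifts mortality down by the same factor at every age...
ratios_diet = [d / b_ for d, b_ in zip(diet, base)]
# ...whereas changing b produces an age-dependent divergence (altered aging).
ratios_temp = [t_ / b_ for t_, b_ in zip(temp, base)]
```

In a redundancy model the same distinction appears one level down, as changes to initial redundancy versus the rate at which redundancy is lost.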
Assessing the quality of life history information in publicly available databases.
Thorson, James T; Cope, Jason M; Patrick, Wesley S
2014-01-01
Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
Distribution Development for STORM Ingestion Input Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John
The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report described the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33 (Bq_crop/kg)/(Bq_soil/kg).
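The four sampled replacements can be written down directly with NumPy. Interpreting the reported 3.33 for the uptake factor as a geometric standard deviation is an assumption made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Sampled replacements for the four former constants, per the report.
consumption = rng.normal(102.96, 2.65, n)        # kg / yr
crop_yield = rng.normal(3.23, 0.442, n)          # kg edible / m^2
land_ratio = rng.normal(0.0312, 0.00292, n)      # cropland / landuse ratio
# Lognormal with geometric mean 3.38e-4, treating 3.33 as the geometric
# standard deviation (an interpretation, since the report lists it in
# absolute units); numpy's lognormal takes the log-space mean and sigma.
uptake = rng.lognormal(mean=np.log(3.38e-4), sigma=np.log(3.33), size=n)
```

The median of the lognormal sample recovers the geometric mean, which is a quick sanity check on the parameterization.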
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration.
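The flavor of the approach, evaluating only the combinations a cheap surrogate rates most promising, can be sketched with a toy "simulation" and a nearest-known-match surrogate standing in for the neural network. Everything below (the grid, the acceptance region, the budget) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the expensive simulation model: a parameter combination
# "closely matches observed data" when it lies inside a small region.
grid = rng.uniform(0.0, 1.0, size=(2000, 2))      # candidate combinations
def simulate(p):
    return float(np.linalg.norm(p - 0.6) < 0.12)  # 1.0 = acceptable fit

# Seed the surrogate with a modest random batch of true evaluations.
labels = {i: simulate(grid[i]) for i in rng.choice(len(grid), 200, replace=False)}

# Active-learning loop: each round evaluates only the 50 unlabeled
# combinations the surrogate rates most promising, not the whole grid.
for round_ in range(10):
    positives = grid[[i for i, y in labels.items() if y == 1.0]]
    if len(positives):
        dist = np.min(np.linalg.norm(grid[:, None] - positives[None], axis=2),
                      axis=1)
        score = -dist                              # nearer a known match = better
    else:
        score = rng.uniform(size=len(grid))        # nothing found yet: explore
    score[list(labels)] = -np.inf                  # never re-run a combination
    for i in np.argsort(score)[-50:]:
        labels[int(i)] = simulate(grid[int(i)])

found = sum(labels.values())
evaluated = len(labels)       # 700 of 2000 combinations actually simulated
```

Even this crude surrogate recovers essentially all acceptable combinations while simulating a fraction of the grid, which is the budget saving the study reports at much larger scale.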
Using Active Learning for Speeding up Calibration in Simulation Models
Experimental Modal Analysis and Dynamic Component Synthesis. Volume 3. Modal Parameter Estimation
1987-12-01
... residues as well as poles is achieved. A singular value decomposition method has been used to develop a complex mode indicator function (CMIF) [70] ... which can be used to help determine the number of poles before the analysis. The CMIF is formed by performing a singular value decomposition of all of ... servo systems which can include both low and high damping modes. ... CMIF can be used to indicate close or repeated eigenvalues before the parameter ...
Measurements of Cuspal Slope Inclination Angles in Palaeoanthropological Applications
NASA Astrophysics Data System (ADS)
Gaboutchian, A. V.; Knyaz, V. A.; Leybova, N. A.
2017-05-01
Tooth crown morphological features, studied in palaeoanthropology, provide valuable information about human evolution and the development of civilization. Tooth crown morphology represents biological and historical data of high taxonomical value, as it characterizes genetically conditioned tooth relief features that resist substantial change under environmental factors during lifetime. Palaeoanthropological studies are still based mainly on descriptive techniques and manual measurements of a limited number of morphological parameters, and feature evaluation and measurement analysis are expert-based. Developments in 3D imaging methods and techniques create a basis for better palaeoanthropological data processing, analysis and distribution. The goals of the presented research are to propose new features for automated odontometry and to explore their applicability to palaeoanthropological studies. A technique is developed for automated measurement of the morphological tooth parameters needed for anthropological study. It is based on an original photogrammetric system used as a device for acquiring 3D models of teeth, together with a set of algorithms for estimating the given tooth parameters.
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended for the aircraft problem was instead demonstrated on a simpler problem involving control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
Control and Diagnostic Model of Brushless Dc Motor
NASA Astrophysics Data System (ADS)
Abramov, Ivan V.; Nikitin, Yury R.; Abramov, Andrei I.; Sosnovich, Ella V.; Božek, Pavol
2014-09-01
A simulation model of brushless DC motor (BLDC) control and diagnostics is considered. The model has been developed using the freeware package "Modeling in technical devices". Faults and diagnostic parameters of the BLDC motor are analyzed. A logical-linguistic diagnostic model of the BLDC motor has been developed on the basis of fuzzy logic. The calculated rules determine the dependence of technical condition on the diagnostic parameters, their trends, and the motor's utilized lifetime. Experimental results of BLDC technical condition diagnostics are discussed. It is shown that, in the course of BLDC degradation, changes in motor condition depend on the diagnostic parameter values.
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values-- that is value of the physical and chemical constants that govern reactivity. Although empirical structure activity relationships have been developed t...
NASA Astrophysics Data System (ADS)
Dehghani, H.; Ataee-Pour, M.
2012-12-01
The block economic value (EV) is one of the most important parameters in mine evaluation. This parameter affects significant factors such as the mining sequence, the final pit limit and the net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. It is therefore necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may lead mining project managers to wrong decisions and thus impose irrecoverable losses on the project. Effective parameters such as metal price, operating cost and grade are always assumed certain in conventional methods of EV calculation, whereas these parameters are in fact uncertain by nature, so the results of conventional methods are usually far from reality. To solve this problem, a new technique is used, based on a binomial tree developed in this research. This method can calculate the EV and the project present value (PV) under economic uncertainty. In this paper, the EV and project PV were initially determined using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost; finally the results were compared. It is concluded that incorporating metal price and cost uncertainties makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
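The binomial-tree valuation idea can be illustrated with a minimal sketch that discounts a terminal cash flow over a recombining price lattice. The numbers and the `cash_flow` rule are made up for illustration; this is not the paper's Whittle-based, multivariate formulation.

```python
def binomial_pv(price0, up, down, p_up, periods, rate, cash_flow):
    """Discounted expected value of a cash flow received at the end of the
    horizon, on a recombining binomial lattice of metal prices."""
    # terminal layer: cash flow at each terminal price node (k up-moves)
    values = [cash_flow(price0 * up ** k * down ** (periods - k))
              for k in range(periods + 1)]
    # backward induction: expected value one step earlier, discounted
    for step in range(periods, 0, -1):
        values = [
            (p_up * values[k + 1] + (1 - p_up) * values[k]) / (1 + rate)
            for k in range(step)
        ]
    return values[0]

# cash flow per block: revenue minus a fixed operating cost (illustrative numbers)
pv = binomial_pv(price0=100.0, up=1.2, down=0.9, p_up=0.5,
                 periods=3, rate=0.08, cash_flow=lambda price: price - 80.0)
```

With constant probabilities and discount rate, the backward induction reproduces the probability-weighted terminal expectation discounted over the horizon.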
Optimizing Methods of Obtaining Stellar Parameters for the H3 Survey
NASA Astrophysics Data System (ADS)
Ivory, KeShawn; Conroy, Charlie; Cargile, Phillip
2018-01-01
The Stellar Halo at High Resolution with Hectochelle Survey (H3) is in the process of observing and collecting stellar parameters for stars in the Milky Way's halo. With a goal of measuring radial velocities for fainter stars, it is crucial that we have optimal methods of obtaining this and other parameters from the data for these stars. The method currently developed is The Payne, named after Cecilia Payne-Gaposchkin: a code that uses neural networks and Markov chain Monte Carlo methods to obtain values for stellar parameters from both spectra and photometry. This project investigated the benefit of fitting both spectra and spectral energy distributions (SEDs). Mock spectra using the parameters of the Sun were created and noise was inserted at various signal-to-noise ratios. The Payne then fit each mock spectrum with and without a mock SED, also generated from solar parameters. The result was that at high signal-to-noise, the spectrum dominated and the effect of fitting the SED was minimal. But at low signal-to-noise, the addition of the SED greatly decreased the standard deviation of the results and yielded more accurate values for temperature and metallicity.
Cox, Melissa D; Myerscough, Mary R
2003-07-21
This paper develops and explores a model of foraging in honey bee colonies. The model may be applied to forage sources with various properties and to colonies with different foraging-related parameters. In particular, we examine the effect of five foraging-related parameters on the foraging response and consequent nectar intake of a homogeneous colony. The parameters investigated affect different quantities critical to the foraging cycle: visit rate (affected by g), probability of dancing (mpd and bpd), duration of dancing (mcirc), and probability of abandonment (A). We show that one parameter, A, affects nectar intake in a nonlinear way. Further, we show that colonies with a midrange value of any foraging parameter perform better than the average of colonies with high- and low-range values when profitable sources are available. Together these observations suggest that a heterogeneous colony, in which a range of parameter values is present, may perform better than a homogeneous colony. We modify the model to represent heterogeneous colonies and use it to show that the most important effect of heterogeneous foraging behaviour within the colony is to reduce the variance in the average quantity of nectar collected by heterogeneous colonies.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second is that of erroneously accepting the estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT; Wald, 1945), in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
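A minimal sketch of the underlying classical (non-inverted) Gaussian SPRT may help fix ideas; the paper's variant computes the error probabilities instead of prescribing them, which is not reproduced here, and the hypotheses and data below are invented.

```python
import math

def gaussian_llr(x, mu0, mu1, sigma):
    # log-likelihood ratio of one observation under H1 (mean mu1) vs H0 (mean mu0)
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classical Wald SPRT: accumulate the LLR until it crosses a boundary."""
    upper = math.log((1 - beta) / alpha)   # accept H1 ("change") above this
    lower = math.log(beta / (1 - alpha))   # accept H0 ("no change") below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += gaussian_llr(x, mu0, mu1, sigma)
        if llr >= upper:
            return "change", n
        if llr <= lower:
            return "no change", n
    return "undecided", len(samples)

# data fluctuating around 0: the test should settle on "no change" quickly
decision, n_used = sprt([0.1, -0.2, 0.0, 0.1, -0.1, 0.2, 0.0, -0.1],
                        mu0=0.0, mu1=1.0, sigma=0.5)
```

The sequential character, deciding as soon as the evidence suffices, is what the inverse test above builds on.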
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor that has known values for some motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values of the known motor parameters, and a second input comprising data on a plurality of reference motors, including values of motor parameters corresponding to the known parameters of the electric motor and values corresponding to the at least one unknown parameter. The processor determines the unknown value of the at least one motor parameter from the first and second inputs and determines a motor management strategy for the electric motor based thereon.
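One plausible reading of the method is a nearest-neighbour lookup against the reference-motor data; the sketch below assumes that interpretation, and the parameter names and reference values are hypothetical.

```python
def estimate_unknown(known, reference_motors, unknown_key):
    """Estimate a missing motor parameter from the reference motor whose
    known parameters most closely match (normalized L1 distance)."""
    def distance(ref):
        return sum(abs(ref[k] - v) / max(abs(v), 1e-9) for k, v in known.items())
    best = min(reference_motors, key=distance)
    return best[unknown_key]

# hypothetical reference data: rated power (kW), rated current (A), rotor resistance (ohm)
reference = [
    {"power_kw": 5.0,  "current_a": 9.0,  "rotor_r": 0.80},
    {"power_kw": 7.5,  "current_a": 13.5, "rotor_r": 0.55},
    {"power_kw": 11.0, "current_a": 20.0, "rotor_r": 0.40},
]
rotor_r = estimate_unknown({"power_kw": 7.4, "current_a": 13.0}, reference, "rotor_r")
```

A production system would interpolate or fit across several reference motors rather than copy the single closest entry.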
Estimation of Geodetic and Geodynamical Parameters with VieVS
NASA Technical Reports Server (NTRS)
Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald
2010-01-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, in which the single sessions are connected by stacking at the normal-equation level. We can determine time-independent geodynamical parameters such as the Love and Shida numbers of the solid Earth tides. Apart from estimating the constant nominal values of the Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency-dependent values in the diurnal band, together with the resonance frequency of the Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
Arefi-Oskoui, Samira; Khataee, Alireza; Vatanpour, Vahid
2017-07-10
In this research, MgAl-CO3(2-) nanolayered double hydroxide (NLDH) was synthesized through a facile coprecipitation method, followed by a hydrothermal treatment. The prepared NLDHs were used as a hydrophilic nanofiller for improving the performance of PVDF-based ultrafiltration membranes. The main objective of this research was to obtain the optimized formula of the NLDH/PVDF nanocomposite membrane presenting the best performance, using computational techniques as a cost-effective method. To this end, an artificial neural network (ANN) model was developed for modeling and expressing the relationship between the performance of the nanocomposite membrane (pure water flux, protein flux and flux recovery ratio) and the affecting parameters, namely the NLDH, PVP 29000 and polymer concentrations. The effects of these parameters and the interactions between them were investigated using contour plots predicted with the developed model. Scanning electron microscopy (SEM), atomic force microscopy (AFM), and water contact angle techniques were applied to characterize the nanocomposite membranes and to interpret the predictions of the ANN model. The developed ANN model was coupled to a genetic algorithm (GA) as a bioinspired optimizer to determine the optimum values of the input parameters leading to high pure water flux, protein flux, and flux recovery ratio. The optimum values for the NLDH, PVP 29000 and PVDF concentrations were determined to be 0.54, 1, and 18 wt %, respectively. The performance of the nanocomposite membrane prepared using the optimum values proposed by the GA was investigated experimentally, and the results were in good agreement with the values predicted by the ANN model, with errors lower than 6%. This good agreement confirmed that the nanocomposite membrane performance could be successfully modeled and optimized by the ANN-GA system.
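The GA half of such an ANN-GA system can be sketched with a toy surrogate standing in for the trained ANN; the bounds, the surrogate response and all GA settings below are assumptions for illustration only.

```python
import random

def surrogate_flux(x):
    # Stand-in for the trained ANN: a smooth toy response peaking inside the bounds.
    nldh, pvp, pvdf = x
    return -(nldh - 0.5) ** 2 - (pvp - 1.0) ** 2 - (pvdf - 18.0) ** 2

BOUNDS = [(0.0, 1.0), (0.0, 2.0), (15.0, 21.0)]  # NLDH, PVP, PVDF wt% (assumed)

def ga_maximize(fitness, bounds, pop=30, gens=60, mut=0.1):
    random.seed(1)
    def rand_ind():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]                  # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):         # gaussian mutation, clipped
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, mut * (hi - lo))))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = ga_maximize(surrogate_flux, BOUNDS)
```

Because the top half of each generation survives unchanged, the best candidate never worsens between generations.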
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belley, M; Schmidt, M; Knutson, N
Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicist's time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ R&V Structured Query Language (SQL) database. The machine parameters aggregated for this study included MLC positions, X and Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection needed for the physicist to conduct a second-check, yielding an optimized second-check workflow that was both more user-friendly and time-efficient.
Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system, and to focus the second-check effort on assessing patient-specific plan quality.
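The core of such a second-check aggregator is a field-by-field comparison; a minimal sketch, assuming simple nested dictionaries for the TPS and R&V values (the field and parameter names are hypothetical, not MOSAIQ or DICOM keys):

```python
def second_check(tps_fields, rnv_fields, tol=1e-3):
    """Compare per-field machine parameters from the TPS plan against the
    Record-and-Verify system; return a list of human-readable mismatches."""
    mismatches = []
    for field, tps in tps_fields.items():
        rnv = rnv_fields.get(field)
        if rnv is None:
            mismatches.append(f"{field}: missing in R&V")
            continue
        for key, expected in tps.items():
            actual = rnv.get(key)
            if isinstance(expected, float):
                ok = actual is not None and abs(actual - expected) <= tol
            else:
                ok = actual == expected            # exact match for strings/ints
            if not ok:
                mismatches.append(f"{field}/{key}: TPS={expected!r} R&V={actual!r}")
    return mismatches

tps = {"Field1": {"mu": 120.5, "collimator_deg": 10.0, "energy": "6X"}}
rnv = {"Field1": {"mu": 120.5, "collimator_deg": 15.0, "energy": "6X"}}
issues = second_check(tps, rnv)
```

A real tool would pull `tps` from the RT Plan DICOM file and `rnv` from the R&V database, then render the mismatch list side by side.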
The AmP project: Comparing species on the basis of dynamic energy budget parameters.
Marques, Gonçalo M; Augustine, Starrlight; Lika, Konstadia; Pecquerie, Laure; Domingos, Tiago; Kooijman, Sebastiaan A L M
2018-05-01
We developed new methods for parameter estimation-in-context and, with the help of 125 authors, built the AmP (Add-my-Pet) database of Dynamic Energy Budget (DEB) models, parameters and referenced underlying data for animals, where each species constitutes one database entry. The combination of DEB parameters covers all aspects of energetics throughout the organism's full life cycle, from the start of embryo development to death by aging. The species-specific parameter values capture biodiversity and can now, for the first time, be compared across animal species. An important insight brought by the AmP project is the classification of animal energetics according to a family of related DEB models that is structured on the basis of the mode of metabolic acceleration, which links up with the development of larval stages. We discuss the evolution of metabolism in this context, among animals in general, and ray-finned fish, mollusks and crustaceans in particular. New DEBtool code for estimating DEB parameters from data has been written. AmPtool code for analyzing patterns in parameter values has also been created. A new web interface supports multiple ways to visualize data, parameters, and implied properties, from the entire collection as well as on an entry-by-entry basis. The DEB models proved to fit the data well: the median relative error is only 0.07 for the 1035 animal species in the collection at 2018/03/12, including some extinct ones, from all large phyla and all chordate orders, spanning a range of body masses of 16 orders of magnitude. This study is a first step toward including evolutionary aspects in parameter estimation, allowing one to infer properties of species for which very little is known.
Detecting influential observations in nonlinear regression modeling of groundwater flow
Yager, Richard M.
1998-01-01
Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
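For the simple linear-regression case, Cook's D can be computed directly from residuals and leverages; the sketch below is the textbook one-predictor version, not the nonlinear groundwater model, and the data are invented to include one high-leverage outlier.

```python
def cooks_distance(xs, ys):
    """Cook's D for simple linear regression y = b0 + b1*x (p = 2 parameters)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    p = 2
    s2 = sum(e * e for e in residuals) / (n - p)    # residual variance estimate
    ds = []
    for x, e in zip(xs, residuals):
        h = 1 / n + (x - xbar) ** 2 / sxx           # leverage of observation i
        ds.append((e * e / (p * s2)) * (h / (1 - h) ** 2))
    return ds

# near-linear data with one outlying, high-leverage point at the end (illustrative)
xs = [1, 2, 3, 4, 5, 10]
ys = [1.0, 2.1, 2.9, 4.2, 5.0, 14.0]
d = cooks_distance(xs, ys)
```

The last observation combines a large residual with high leverage, so its Cook's D dominates, exactly the kind of observation the paper flags for scrutiny.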
Effects of historical and predictive information on ability of transport pilot to predict an alert
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
1994-01-01
In the aviation community, the early detection of the development of a possible subsystem problem during a flight is potentially useful for increasing the safety of the flight. Commercial airlines are currently using twin-engine aircraft for extended transport operations over water, and the early detection of a possible problem might increase the flight crew's options for safely landing the aircraft. One method for decreasing the severity of a developing problem is to predict the behavior of the problem so that appropriate corrective actions can be taken. To investigate the pilots' ability to predict long-term events, a computer workstation experiment was conducted in which 18 airline pilots predicted the alert time (the time to an alert) using 3 different dial displays and 3 different parameter behavior complexity levels. The three dial displays were as follows: standard (resembling current aircraft round dial presentations); history (indicating the current value plus the value of the parameter 5 sec in the past); and predictive (indicating the current value plus the value of the parameter 5 sec into the future). The time profiles describing the behavior of the parameter consisted of constant rate-of-change profiles, decelerating profiles, and accelerating-then-decelerating profiles. Although the pilots indicated that they preferred the near term predictive dial, the objective data did not support its use. The objective data did show that the time profiles had the most significant effect on performance in estimating the time to an alert.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high-dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid-scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
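The MOAT sampling scheme itself is compact; a sketch on a made-up three-parameter function (standing in for CAM) shows the linear cost of n_traj*(N+1) model runs:

```python
import random

def morris_mu_star(model, n_params, n_traj=20, delta=0.25):
    """Morris one-at-a-time screening: mean absolute elementary effect (mu*)
    per input. Each trajectory perturbs every parameter exactly once, so the
    total cost is n_traj * (n_params + 1) model evaluations."""
    random.seed(3)
    sums = [0.0] * n_params
    for _ in range(n_traj):
        x = [random.uniform(0, 1 - delta) for _ in range(n_params)]
        y = model(x)
        for i in random.sample(range(n_params), n_params):  # random perturbation order
            x[i] += delta
            y_new = model(x)
            sums[i] += abs(y_new - y) / delta               # elementary effect
            y = y_new
    return [s / n_traj for s in sums]

# toy model: strong effect of x0, weak x1, and an x0-x2 interaction (illustrative)
def toy(x):
    return 10 * x[0] + 0.1 * x[1] + 5 * x[2] * x[0]

mu_star = morris_mu_star(toy, 3)
```

Because elementary effects are averaged over many random base points, mu* picks up the interaction-driven sensitivity of x2 that a single EOAT sweep around one base point could miss.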
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) property models (combining the group-contribution (GC) method with the atom connectivity index (CI) method) that provide reliable estimates of environment-related properties of organic chemicals together with the uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the USEtox database are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed, including the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions (carcinogenic and noncarcinogenic) to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil.
The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes giving useful insights into quality and reliability of the design of sustainable processes.
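The basic GC estimation step is a weighted sum of group contributions; a minimal sketch with invented contribution values (not the regressed Marrero-Gani parameters):

```python
def gc_estimate(group_counts, contributions, intercept=0.0):
    """First-order group-contribution estimate:
    property = intercept + sum over groups of (occurrence count * contribution)."""
    return intercept + sum(n * contributions[g] for g, n in group_counts.items())

# hypothetical first-order group contributions for some log-scaled property
contribs = {"CH3": 0.55, "CH2": 0.49, "OH": 1.20}

# 1-propanol = 1 x CH3 + 2 x CH2 + 1 x OH
value = gc_estimate({"CH3": 1, "CH2": 2, "OH": 1}, contribs, intercept=0.3)
```

In the paper's framework, the regression that fits such contributions also yields the parameter covariance, from which confidence intervals on `value` follow.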
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Lilly, A.
2001-10-01
There are now many examples of hydrological models that utilise the capabilities of Geographic Information Systems to generate spatially distributed predictions of behaviour. However, the spatial variability of hydrological parameters relating to distributions of soils and vegetation can be hard to establish. In this paper, the relationship between a soil hydrological classification, Hydrology of Soil Types (HOST), and the spatial parameters of a conceptual catchment-scale model is investigated. A procedure involving inverse modelling using Monte Carlo simulations on two catchments is developed to identify relative values for soil-related parameters of the DIY model. The relative values determine the internal variability of hydrological processes as a function of soil type. For three of the four soil parameters studied, the variability between HOST classes was found to be consistent across the two catchments when tested independently. Problems in identifying values for the fourth, 'fast response distance', parameter have highlighted a potential limitation of the present structure of the model: the assumption that this parameter can be related simply to soil type rather than topography appears to be inadequate. With the exclusion of this parameter, calibrated parameter sets from one catchment can be converted into equivalent parameter sets for the other catchment on the basis of their HOST distributions, to give a reasonable simulation of flow. Following further testing on different catchments, and modifications to the definition of the fast-response-distance parameter, the technique provides a methodology whereby spatial soil parameters can be derived directly for new catchments.
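The inverse Monte Carlo step can be sketched as sample, run, keep the best-fitting runs, and summarize; the two-parameter toy catchment, the bounds and the retention fraction below are illustrative assumptions, not the DIY model.

```python
import random

def monte_carlo_inverse(model, observed, n_runs=2000, keep_frac=0.05):
    """Sample soil parameters uniformly, keep the best-fitting runs, and
    summarize each parameter by the mean of its retained values."""
    random.seed(11)
    runs = []
    for _ in range(n_runs):
        params = {"store": random.uniform(0, 200), "rate": random.uniform(0, 1)}
        err = sum((s - o) ** 2 for s, o in zip(model(params), observed))
        runs.append((err, params))
    runs.sort(key=lambda r: r[0])
    kept = [p for _, p in runs[: int(n_runs * keep_frac)]]
    return {k: sum(p[k] for p in kept) / len(kept) for k in kept[0]}

# toy "catchment": two flow signatures respond linearly to the soil parameters
def toy_model(p):
    return [p["rate"] * 10 + 0.01 * p["store"], p["rate"] * 5]

observed = toy_model({"store": 100.0, "rate": 0.4})  # flows from "true" parameters
posterior = monte_carlo_inverse(toy_model, observed)
```

Repeating this on each catchment and comparing retained parameter values per HOST class is the kind of cross-catchment consistency check the paper performs.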
Epstein, F H; Mugler, J P; Brookeman, J R
1994-02-01
A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
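The simulated annealing framework described above can be illustrated with a generic sketch: always accept improving moves, accept worsening moves with Boltzmann probability exp(-Δ/T), and cool T over the run. The 32 "flip angles" and the objective function below are hypothetical stand-ins, not the actual MP-RAGE signal model.

```python
import math, random

def objective(flips):
    """Stand-in for a signal-difference model: rewards a smooth ramp of
    flip angles ending near 90 degrees (NOT the actual MP-RAGE physics)."""
    smooth = sum((flips[i + 1] - flips[i]) ** 2 for i in range(len(flips) - 1))
    return (flips[-1] - 90.0) ** 2 + 0.1 * smooth

def anneal(n=32, steps=20000, t0=50.0):
    random.seed(0)
    x = [random.uniform(0, 90) for _ in range(n)]
    fx = objective(x)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3           # linear cooling schedule
        i = random.randrange(n)
        cand = x[:]
        cand[i] = min(90.0, max(0.0, cand[i] + random.gauss(0, 5)))
        fc = objective(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
    return x, fx

flips, score = anneal()
```

The occasional acceptance of worsening moves is what lets the search escape local optima that defeat calculus-based approaches on objectives like these.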
NASA Technical Reports Server (NTRS)
Johnson, R. W.
1974-01-01
A mathematical model of an ecosystem is developed. Secondary productivity is evaluated in terms of man-related and controllable factors. Information from an existing physical-parameters model is used, as well as pertinent biological measurements. Predictive information of value to estuarine management is presented. The biological, chemical, and physical parameters measured in order to develop models of ecosystems are identified.
Nondestructive prediction of pork freshness parameters using multispectral scattering images
NASA Astrophysics Data System (ADS)
Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu
2012-05-01
Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using a multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface using a multispectral imaging system developed in-house; they were acquired at selected narrow wavebands with center wavelengths of 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from the multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
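A common four-parameter Lorentzian-type profile in the scattering literature has the form R(x) = a + b / (1 + (x/c)^d); whether this is the exact form used here is an assumption. The sketch below fits it to a synthetic profile with a dependency-free random local search (in practice a library fitter such as scipy.optimize.curve_fit would be typical).

```python
import random

def lorentzian(x, a, b, c, d):
    # Four-parameter Lorentzian-type profile: asymptote a, peak a + b,
    # width c, slope d (a common form; the paper's exact form may differ)
    return a + b / (1.0 + (x / c) ** d)

xs = [0.5 * i for i in range(1, 40)]                  # radial distance, mm
true = (10.0, 80.0, 4.0, 2.5)
ys = [lorentzian(x, *true) for x in xs]               # noise-free synthetic profile

def sse(p):
    return sum((lorentzian(x, *p) - y) ** 2 for x, y in zip(xs, ys))

# crude random local search, kept dependency-free for illustration
random.seed(3)
best = (5.0, 50.0, 2.0, 1.5)
best_err = sse(best)
for _ in range(20000):
    cand = tuple(max(0.1, p + random.gauss(0, 0.5)) for p in best)
    err = sse(cand)
    if err < best_err:
        best, best_err = cand, err
```

The fitted (a, b, c, d) values per waveband would then serve as the features fed to a freshness-prediction model.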
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, such as thermomechanical loads, material properties, and failure theories, as well as sizing variables such as the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
Numerical modeling of the transmission dynamics of drug-sensitive and drug-resistant HSV-2
NASA Astrophysics Data System (ADS)
Gumel, A. B.
2001-03-01
A competitive finite-difference method will be constructed and used to solve a modified deterministic model for the spread of herpes simplex virus type-2 (HSV-2) within a given population. The model monitors the transmission dynamics and control of drug-sensitive and drug-resistant HSV-2. Unlike the fourth-order Runge-Kutta method (RK4), which fails when the discretization parameters exceed certain values, the novel numerical method to be developed in this paper gives convergent results for all parameter values.
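The contrast drawn above between a standard explicit scheme and an unconditionally convergent nonstandard one can be illustrated on the simple decay equation dy/dt = -λy (a Mickens-type construction; the paper's HSV-2 model is of course far more elaborate):

```python
# For dy/dt = -lam * y, explicit schemes fail for large step sizes, while a
# Mickens-type nonstandard scheme stays positive and decays for ANY h.
def explicit_euler(y0, lam, h, n):
    y = y0
    for _ in range(n):
        y = y * (1.0 - lam * h)       # blows up / oscillates when lam*h > 2
    return y

def nsfd(y0, lam, h, n):
    y = y0
    for _ in range(n):
        y = y / (1.0 + lam * h)       # unconditionally positive and decaying
    return y

lam, h, n = 4.0, 1.0, 20              # deliberately large step: lam*h = 4
bad = explicit_euler(1.0, lam, h, n)  # oscillates with growing magnitude
good = nsfd(1.0, lam, h, n)           # monotonically approaches 0
```

The nonstandard denominator discretization is what buys convergence for all discretization parameter values, the property claimed for the method in the abstract.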
1981-12-01
preventing the generation of negative location estimators. Because of the invariant property of the EDF statistics, this transformation will...likelihood. If the parameter estimation method developed by Harter and Moore is used, care must be taken to prevent the location estimators from being...vs A 2 Critical Values, Level-.01, n-30 ... APPENDIX E Computer Programs ... Program to Calculate the Cramer-von Mises Critical Values
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
Quantitative evaluation of the lumbosacral sagittal alignment in degenerative lumbar spinal stenosis
Makirov, Serik K.; Jahaf, Mohammed T.; Nikulina, Anastasia A.
2015-01-01
Goal of the study This study intends to develop a method of quantitative sagittal balance parameter assessment, based on a geometrical model of the lumbar spine and sacrum. Methods One hundred eight patients were divided into 2 groups. The experimental group included 59 patients with lumbar spinal stenosis at the L1-5 level. Forty-nine healthy volunteers without a history of any lumbar spine pathology were included in the control group. All patients were examined with supine MRI. Lumbar lordosis was modeled as a circular arc and described by either anatomical (lumbar lordosis angle) or geometrical (chord length, circle segment height, central angle, circle radius) parameters. Moreover, 2 sacral parameters were assessed for all patients: sacral slope and sacral deviation angle. These parameters characterize sacral disposition along the horizontal and vertical axes, respectively. Results Significant correlation was observed between the anatomical and geometrical lumbosacral parameters. Significant differences between the stenosis group and the control group were observed in the values of the "central angle" and "sacral deviation" parameters. We propose additional parameters: a lumbar coefficient (Kl), the ratio of the lordosis angle to the segmental angle; a sacral coefficient (Ks), the ratio of the sacral tilt (ST) to the sacral deviation (SD) angle; and the modulus of the mathematical difference between the sacral and lumbar coefficients, used for determining lumbosacral balance (LSB). Statistically significant differences between the stenosis and control groups were obtained for all described coefficients (p = 0.006, p = 0.0001, p = 0.0001, respectively). The median LSB value was 0.18 and 0.34 for the stenosis and control groups, respectively. Conclusion Based on these results we believe that spinal stenosis is associated with an acquired deformity that is measurable by the described parameters. 
It is possible that spinal stenosis occurs in patients with an LSB of 0.2 or less, so this value may be predictive of its development. This suggests that spinal stenosis is more likely to occur in patients with a spinal curvature of this type because of abnormal distribution of spinal loads. This fact may have prognostic significance for the development of vertebral column disease and for evaluation of treatment results. PMID:26767160
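As a small illustration, the proposed coefficients and the LSB modulus can be computed directly from the four angles described in the abstract; the angle values below are hypothetical.

```python
def lumbosacral_balance(lordosis_angle, segmental_angle, sacral_tilt, sacral_deviation):
    """Coefficients as described in the abstract: Kl = lordosis / segmental
    angle, Ks = sacral tilt / sacral deviation, LSB = |Ks - Kl|."""
    kl = lordosis_angle / segmental_angle
    ks = sacral_tilt / sacral_deviation
    return abs(ks - kl)

# hypothetical angle values (degrees), for illustration only
lsb = lumbosacral_balance(50.0, 40.0, 38.0, 25.0)   # = |1.52 - 1.25| = 0.27
```

Under the reported cutoff, an LSB of 0.27 would fall above the 0.2 threshold associated with stenosis in this study.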
NASA Astrophysics Data System (ADS)
Vugmeyster, Liliya; Ostrovsky, Dmitry; Fu, Riqiang
2015-10-01
In this work, we assess the usefulness of static 15N NMR techniques for the determination of the 15N chemical shift anisotropy (CSA) tensor parameters and 15N-1H dipolar splittings in powder protein samples. By using five single labeled samples of the villin headpiece subdomain protein in a hydrated lyophilized powder state, we determine the backbone 15N CSA tensors at two temperatures, 22 and -35 °C, in order to get a snapshot of the variability across the residues and as a function of temperature. All sites probed belonged to the hydrophobic core and most of them were part of α-helical regions. The values of the anisotropy (which include the effect of the dynamics) varied between 130 and 156 ppm at 22 °C, while the values of the asymmetry were in the 0.32-0.082 range. The Leu-75 and Leu-61 backbone sites exhibited high mobility based on the values of their temperature-dependent anisotropy parameters. Under the assumption that most differences stem from dynamics, we obtained the values of the motional order parameters for the 15N backbone sites. While a simple one-dimensional line shape experiment was used for the determination of the 15N CSA parameters, a more advanced approach based on the "magic sandwich" SAMMY pulse sequence (Nevzorov and Opella, 2003) was employed for the determination of the 15N-1H dipolar patterns, which yielded estimates of the dipolar couplings. Accordingly, the motional order parameters for the dipolar interaction were obtained. It was found that the order parameters from the CSA and dipolar measurements are highly correlated, validating that the variability between the residues is governed by the differences in dynamics. The values of the parameters obtained in this work can serve as reference values for developing more advanced magic-angle spinning recoupling techniques for multiple labeled samples.
USDA-ARS?s Scientific Manuscript database
Several bio-optical algorithms were developed to estimate the chlorophyll-a (Chl-a) and phycocyanin (PC) concentrations in inland waters. This study aimed at identifying the influence of the algorithm parameters and wavelength bands on output variables and searching optimal parameter values. The opt...
Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael
2007-01-01
Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). 
Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141
Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill
2014-10-01
Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. © 2014 The Authors. 
Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc.
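A one-at-a-time sensitivity analysis of this kind can be sketched on a deliberately simplified wildlife exposure model (single food item; all parameter values hypothetical), solving dose = TRV for the soil concentration and recording the span produced by varying each parameter:

```python
# Simplified wildlife Eco-SSL-style exposure model (illustrative form only;
# the real model sums multiple food items):
#   dose = C_soil * (soil_ir + food_ir * baf * bioavail) / bw
# Setting dose = TRV and solving for C_soil gives the screening concentration.
def screening_conc(trv, bw, soil_ir, food_ir, baf, bioavail):
    return trv * bw / (soil_ir + food_ir * baf * bioavail)

base = dict(trv=5.0, bw=0.02, soil_ir=0.0005, food_ir=0.005, baf=0.3, bioavail=1.0)

# one-at-a-time sensitivity: vary each parameter over +/-50 % and record
# the span of the resulting soil screening concentration
spans = {}
for name in base:
    vals = [screening_conc(**{**base, name: f * base[name]}) for f in (0.5, 1.0, 1.5)]
    spans[name] = max(vals) - min(vals)

ranked = sorted(spans, key=spans.get, reverse=True)
```

Even in this toy version, the multiplicative role of the TRV makes it a dominant driver of the output span, consistent with the abstract's finding.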
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. 
M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
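The two scheme types described above can be sketched generically: a fixed perturbation drawn once per ensemble member, versus a perturbation that varies in time as an AR(1) process. The spreads, decorrelation time, and multiplicative form below are illustrative assumptions, not the EPPES-estimated distributions.

```python
import math, random

def fixed_perturbed(base, spread, n_members, seed=0):
    """Fixed scheme: each ensemble member draws one multiplicative
    perturbation and keeps it for the whole forecast."""
    rng = random.Random(seed)
    return [base * rng.uniform(1 - spread, 1 + spread) for _ in range(n_members)]

def stochastic_perturbed(base, spread, n_steps, tau=6.0, seed=0):
    """Stochastically varying scheme: the perturbation follows an AR(1)
    process in time (tau = decorrelation time in model steps)."""
    rng = random.Random(seed)
    phi = math.exp(-1.0 / tau)
    sigma = spread * math.sqrt(1.0 - phi * phi)   # keeps stationary sd = spread
    x, series = 0.0, []
    for _ in range(n_steps):
        x = phi * x + rng.gauss(0.0, sigma)
        series.append(base * (1.0 + x))
    return series

members = fixed_perturbed(1.0, 0.3, 50)        # e.g. a convective parameter factor
series = stochastic_perturbed(1.0, 0.1, 1000)
```

The fixed scheme samples parameter uncertainty across the ensemble; the AR(1) variant additionally lets the "effective" parameter wander within a forecast, which is the distinction the study evaluates.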
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
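A minimal random-walk Metropolis sketch of the approach, fitting the standard Cole-Cole complex resistivity model ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))] to synthetic data; the proposal widths, bound-type priors, and noise level are illustrative choices, not those of the paper.

```python
import math, random

def cole_cole(w, rho0, m, tau, c):
    """Cole-Cole complex resistivity at angular frequency w."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

freqs = [10 ** (k / 4) for k in range(-8, 17)]        # 10^-2 .. 10^4 Hz
true = dict(rho0=100.0, m=0.5, tau=0.01, c=0.5)
data = [cole_cole(2 * math.pi * f, **true) for f in freqs]
noise_sd = 0.5

def log_like(p):
    rho0, m, tau, c = p
    if not (rho0 > 0 and 0 < m < 1 and tau > 0 and 0 < c < 1):  # flat bounded prior
        return -math.inf
    sse = sum(abs(cole_cole(2 * math.pi * f, rho0, m, tau, c) - d) ** 2
              for f, d in zip(freqs, data))
    return -sse / (2 * noise_sd ** 2)

# random-walk Metropolis sampler, started deliberately off-truth
random.seed(7)
p = (80.0, 0.3, 0.005, 0.4)
lp = log_like(p)
samples = []
for it in range(20000):
    cand = (p[0] + random.gauss(0, 1.0), p[1] + random.gauss(0, 0.02),
            p[2] * math.exp(random.gauss(0, 0.05)), p[3] + random.gauss(0, 0.02))
    lc = log_like(cand)
    if lc - lp > math.log(random.random()):           # Metropolis acceptance
        p, lp = cand, lc
    if it >= 10000:                                   # discard burn-in
        samples.append(p)

post_mean = [sum(s[i] for s in samples) / len(samples) for i in range(4)]
```

The retained samples approximate the marginal posteriors; as the abstract suggests, their means could seed a Gauss-Newton refinement.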
Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.
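The transient behaviour described above can be checked numerically with a short explicit finite-difference sketch: an insulator slab whose surface is held at a square temperature pulse, with a near-adiabatic back face standing in for the structure. All property values are illustrative assumptions, and this is a numerical companion rather than the paper's analytical solution.

```python
# Explicit 1D finite-difference sketch: insulator layer, surface held at a
# square temperature pulse; track the peak back-face (structure-side)
# temperature. All property values are illustrative.
def max_structure_temp(k_ins=0.05, rho_c_ins=8.0e4, thick=0.05,
                       pulse_temp=800.0, pulse_time=300.0, t_end=1200.0, nodes=21):
    alpha = k_ins / rho_c_ins                 # insulator diffusivity, m^2/s
    dx = thick / (nodes - 1)
    dt = 0.4 * dx * dx / alpha                # inside explicit stability limit
    temp = [20.0] * nodes                     # initial uniform temperature, C
    t, peak = 0.0, 20.0
    while t < t_end:
        surface = pulse_temp if t < pulse_time else 20.0
        new = [surface] + [
            temp[i] + alpha * dt / dx ** 2 * (temp[i-1] - 2*temp[i] + temp[i+1])
            for i in range(1, nodes - 1)
        ]
        new.append(new[-1])                   # adiabatic back face (structure side)
        temp, t = new, t + dt
        peak = max(peak, temp[-1])
    return peak

peak = max_structure_temp()
```

Sweeping pulse magnitude and duration with such a routine reproduces the qualitative behaviour the analytical solution captures through its two nondimensional parameters.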
Salari, Marjan; Salami Shahid, Esmaeel; Afzali, Seied Hosein; Ehteshami, Majid; Conti, Gea Oliveri; Derakhshan, Zahra; Sheibani, Solmaz Nikbakht
2018-04-22
Today, due to the increase in population, the growth of industry, and the variety of chemical compounds, the quality of drinking water has decreased. Five important river water quality properties, namely dissolved oxygen (DO), total dissolved solids (TDS), total hardness (TH), alkalinity (ALK) and turbidity (TU), were estimated from parameters such as electric conductivity (EC), temperature (T), and pH, which can be measured easily at almost no cost. Water quality parameters were simulated with two modeling approaches: mathematical models and Artificial Neural Networks (ANN). The mathematical models are based on polynomial fitting with the least-squares method, and the ANN models use feed-forward network algorithms. All conditions covered by neural network modeling were tested for all parameters in this study, except for alkalinity. All optimum ANN models developed to simulate water quality parameters had R-values close to 0.99; the ANN model developed to simulate alkalinity had an R-value of 0.82. Moreover, surface fitting techniques were used to refine the data sets. The presented models and equations are reliable, usable tools for studying water quality parameters in similar rivers, and a viable substitute for traditional water quality measuring equipment. Copyright © 2018 Elsevier Ltd. All rights reserved.
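The polynomial least-squares side of the study can be sketched with a degree-2 fit solved via the normal equations; the EC-TDS data pairs below are synthetic stand-ins, not the study's measurements.

```python
# Least-squares polynomial fit (degree 2) of TDS against EC, solved via
# normal equations with Gaussian elimination; data pairs are synthetic.
def polyfit2(xs, ys):
    # moments and right-hand side for coefficients (c0, c1, c2)
    s = [sum(x ** k for x in xs) for k in range(5)]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    a = [[s[i + j] for j in range(3)] for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):       # back substitution
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, 3))) / a[r][r]
    return coef

ec = [100, 200, 300, 400, 500, 600, 700, 800]          # uS/cm
tds = [0.62 * x + 0.0001 * x * x + 5 for x in ec]      # synthetic TDS, mg/L
c0, c1, c2 = polyfit2(ec, tds)
```

In practice one would use numpy.polyfit for this; the explicit normal-equation solve is shown only to make the least-squares machinery visible.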
Image parameters for maturity determination of a composted material containing sewage sludge
NASA Astrophysics Data System (ADS)
Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.
2013-07-01
Composting is one of the best methods for management of sewage sludge. In a well-conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of samples of composted material that can be used for evaluation of the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw, and sewage sludge with rapeseed straw. The photographing of the samples was carried out on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material samples. Exemplary averaged values of selected parameters obtained from images of the composted material on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparation of the training, validation and test data sets necessary for development of neural models for classification of the young compost stage.
A Novel Scale Up Model for Prediction of Pharmaceutical Film Coating Process Parameters.
Suzuki, Yasuhiro; Suzuki, Tatsuya; Minami, Hidemi; Terada, Katsuhide
2016-01-01
In the pharmaceutical tablet film coating process, we clarified that a difference in exhaust air relative humidity can be used to detect differences in process parameter values, that the relative humidity of exhaust air differed under different atmospheric air humidity conditions even though all setting values of the manufacturing process parameters were the same, and that the water content of tablets was correlated with the exhaust air relative humidity. Based on these experimental data, the exhaust air relative humidity index (EHI) was developed: an empirical equation whose functional parameters include the pan coater type, heated air flow rate, spray rate of coating suspension, saturated water vapor pressure at the heated air temperature, and partial water vapor pressure at atmospheric air pressure. The EHI predictions of exhaust air relative humidity correlated well with the experimental data (correlation coefficient of 0.966) across all datasets. EHI was verified using data from seven different drug products manufactured at different scales. The EHI model will support formulation researchers by enabling them to set film coating process parameters when the batch size or pan coater type changes, without the time and expense of further extensive testing.
Optimization of Protein Backbone Dihedral Angles by Means of Hamiltonian Reweighting
2016-01-01
Molecular dynamics simulations depend critically on the accuracy of the underlying force fields in properly representing biomolecules. Hence, it is crucial to validate the force-field parameter sets in this respect. In the context of the GROMOS force field, this is usually achieved by comparing simulation data to experimental observables for small molecules. In this study, we develop new amino acid backbone dihedral angle potential energy parameters based on the widely used 54A7 parameter set by matching to experimental J values and secondary structure propensity scales. In order to find the most appropriate backbone parameters, close to 100 000 different combinations of parameters have been screened. However, since the sheer number of combinations considered prohibits actual molecular dynamics simulations for each of them, we instead predicted the values for every combination using Hamiltonian reweighting. While the original 54A7 parameter set fails to reproduce the experimental data, we are able to provide parameters that match significantly better. However, to ensure applicability in the context of larger peptides and full proteins, further studies have to be undertaken. PMID:27559757
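Hamiltonian reweighting rests on the standard identity <A>_1 = <A e^(-beta*dU)>_0 / <e^(-beta*dU)>_0 with dU = U1 - U0, which lets one predict observables under a candidate parameter set from a single reference ensemble. The sketch below applies it to a toy one-dimensional dihedral potential; the potentials and observable are illustrative, not GROMOS terms.

```python
import math, random

beta = 1.0 / 2.494          # 1/kT in mol/kJ at roughly 300 K

def u0(phi):                # reference dihedral potential (kJ/mol), illustrative
    return 2.0 * (1 + math.cos(math.radians(phi)))

def u1(phi):                # candidate potential with a shifted minimum
    return 2.0 * (1 + math.cos(math.radians(phi - 60.0)))

# Metropolis sampling under the reference potential u0
random.seed(5)
phi, samples = 0.0, []
for _ in range(200000):
    cand = (phi + random.uniform(-30, 30)) % 360
    if u0(cand) - u0(phi) < -math.log(random.random()) / beta:
        phi = cand
    samples.append(phi)

# reweight <cos(phi)> from the u0 ensemble into the u1 ensemble
w = [math.exp(-beta * (u1(p) - u0(p))) for p in samples]
avg_u0 = sum(math.cos(math.radians(p)) for p in samples) / len(samples)
avg_u1 = (sum(math.cos(math.radians(p)) * wi for p, wi in zip(samples, w))
          / sum(w))
```

This is what makes screening ~100 000 parameter combinations feasible: each candidate costs only a reweighting pass over one stored trajectory, not a fresh simulation. The approach degrades when U1 and U0 overlap poorly, which is why the resulting parameters still need validation on larger systems.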
Finding Top-kappa Unexplained Activities in Video
2012-03-09
parameters that define a UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

    Parameter   Values                    Default value
    k           1, 2, 5, All              All
    τ           0.4, 0.6, 0.8             0.6
    L           160, 200, 240, 280        200
    # worlds    7E+04, 4E+05, 2E+07       2E+07

TABLE 1: Parameter values used in
Value as a parameter to consider in operational strategies for CSP plants
NASA Astrophysics Data System (ADS)
de Meyer, Oelof; Dinter, Frank; Govender, Saneshan
2017-06-01
This paper introduces a value parameter to consider when analyzing operational strategies for CSP plants. The electric system in South Africa, used as a case study, is severely constrained, with an influx of renewables in the early phase of deployment. The energy demand curve for the system is analyzed, showing the total wind and solar photovoltaic contributions for winter and summer. Given the intermittent nature and meteorological operating conditions of wind and solar photovoltaic plants, the value of CSP plants within the electric system is introduced. Analyzing CSP plants based on the value parameter alone remains only a philosophical view: currently there is no quantifiable measure to translate this subjective value, and it solely remains the position of the stakeholder. By introducing three other parameters, Cost, Plant and System, into a holistic representation of the operating strategies of generation plants, the Value parameter can be translated into a quantifiable measure. Using the country's current procurement program as a case study, CSP plants operating under the various PPAs within the Bid Windows are analyzed. The Value Cost Plant System (VCPS) diagram developed is used to quantify the value parameter. This paper concludes that no value is obtained from CSP plants operating under the Bid Window 1 & 2 Power Purchase Agreements. However, by recognizing the dispatchability potential of CSP plants in Bid Windows 3 & 3.5, the value of CSP in the electric system can be quantified using the Value Added Relationship VCPS-diagram. Similarly, ancillary services to the system were analyzed. One relationship that has not yet been explored within the industry is an interdependent relationship, in which the cost and value structure is shared between the plant and the system. Although this relationship is functional when the plant and system belong to the same entity, additional value is achieved by marginalizing the cost structure. 
A tradeoff between the plant performance indicators and system operations is achieved. CSP plants have demonstrated their capabilities by adapting to various operating strategies. With adequate storage capabilities and appropriate system boundary conditions in place, CSP plants offer solutions as base-load generation plants, peaking plants, intermittent generation and ancillary services to the system. Depending on the electric system structure, the value obtained from CSP plants is quantifiable under the right boundary conditions. An interdependent relationship between the plant and the system attains the most value in operating strategies for CSP.
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature compression efficacy has been investigated only in the context of how much known or developed methods reduced the storage required by compressed forms of original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques for other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered values of medical parameters (medical information) computed from signals. Since the methods are lossy, some information is lost due to the compression when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space, but the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.
Development of a Three Dimensional Perfectly Matched Layer for Transient Elasto-Dynamic Analyses
2006-12-01
MacLean [Ref. 47] introduced a small tracked vehicle with dual inertial mass shakers mounted on top as a mobile source. It excited Rayleigh waves, but... routine initializes and sets default values for: * the application parameters * the material database parameters * the entries to appear on the... Underground seismic array experiments. National Institute of Nuclear Physics, 2005. [47] D. J. MacLean. Mobile source development for seismic-sonar based
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance.
An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
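The deterministic point-estimate relationships that the Bayesian method generalizes follow standard steam-sterilization arithmetic. A minimal sketch, using the conventional reference temperature of 121.1 °C and z = 10 °C; the temperature profile and D-value below are hypothetical:

```python
def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
    # Physical lethality: equivalent minutes at t_ref accumulated over a
    # sampled temperature profile, F0 = sum(dt * 10**((T - T_ref) / z)).
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

def log_reduction(f, d_ref):
    # Predicted log10 reduction in survivors, given the organism's D-value
    # (minutes per decade of kill) at the reference temperature.
    return f / d_ref

# Hypothetical 1-minute samples from a sterilization cycle
profile = [110.0, 118.0, 121.1, 122.0, 121.1, 115.0]
f = f0(profile, 1.0)
reduction = log_reduction(f, 1.5)  # assumed D at 121.1 C of 1.5 min
```

One minute exactly at the reference temperature contributes one equivalent minute, and each z degrees below it divides the contribution by ten; the Bayesian approach in the paper replaces the fixed D and z inputs with probability distributions.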
Parameter extraction with neural networks
NASA Astrophysics Data System (ADS)
Cazzanti, Luca; Khan, Mumit; Cerrina, Franco
1998-06-01
In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process.
Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
Optimal solutions for a bio mathematical model for the evolution of smoking habit
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef
In this study, we apply Variation of Parameter Method (VPM) coupled with an auxiliary parameter to obtain the approximate solutions for the epidemic model for the evolution of smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter is studied. Furthermore, a simple way is considered for obtaining an optimal value of auxiliary parameter via minimizing the total residual error over the domain of problem. Comparison of the obtained results with standard VPM shows that an auxiliary parameter is very feasible and reliable in controlling the convergence of approximate solutions.
Bhatt, Darshak R; Maheria, Kalpana C; Parikh, Jigisha K
2015-12-30
A simple and new approach in cloud point extraction (CPE) method was developed for removal of picric acid (PA) by the addition of N,N,N,N',N',N'-hexaethyl-ethane-1,2-diammonium dibromide ionic liquid (IL) in non-ionic surfactant Triton X-114 (TX-114). A significant increase in extraction efficiency was found upon the addition of dicationic ionic liquid (DIL) at both nearly neutral and high acidic pH. The effects of different operating parameters such as pH, temperature, time, concentration of surfactant, PA and DIL on extraction of PA were investigated and optimum conditions were established. The extraction mechanism was also proposed. A developed Langmuir isotherm was used to compute the feed surfactant concentration required for the removal of PA up to an extraction efficiency of 90%. The effects of temperature and concentration of surfactant on various thermodynamic parameters were examined. It was found that the values of ΔG° increased with temperature and decreased with surfactant concentration. The values of ΔH° and ΔS° increased with surfactant concentration. The developed approach for DIL mediated CPE has proved to be an efficient and green route for extraction of PA from water sample. Copyright © 2015 Elsevier B.V. All rights reserved.
Quasiparticle interference in multiband superconductors with strong coupling
NASA Astrophysics Data System (ADS)
Dutt, A.; Golubov, A. A.; Dolgov, O. V.; Efremov, D. V.
2017-08-01
We develop a theory of the quasiparticle interference (QPI) in multiband superconductors based on the strong-coupling Eliashberg approach within the Born approximation. In the framework of this theory, we study dependencies of the QPI response function in the multiband superconductors with the nodeless s -wave superconductive order parameter. We pay special attention to the difference in the quasiparticle scattering between the bands having the same and opposite signs of the order parameter. We show that at the momentum values close to the momentum transfer between two bands, the energy dependence of the quasiparticle interference response function has three singularities. Two of these correspond to the values of the gap functions and the third one depends on both the gaps and the transfer momentum. We argue that only the singularity near the smallest band gap may be used as a universal tool to distinguish between the s++ and s± order parameters. The robustness of the sign of the response function peak near the smaller gap value, irrespective of the change in parameters, in both the symmetry cases is a promising feature that can be harnessed experimentally.
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection, which, if the optical axis of the photo intersects the globe's centre, simplifies to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to empirical values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage.
Other possible uses of the method are also discussed.
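The minimization at the core of the approach can be sketched with a bare-bones Nelder-Mead implementation. The objective below is a stand-in quadratic for illustration; in the georeferencing program the function would instead be the average control-point error as a function of the projection parameters:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    # Bare-bones Nelder-Mead downhill simplex with the standard coefficients
    # for reflection, expansion, contraction, and shrink.
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        xr = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr                                   # reflection
        elif f(xr) < f(best):
            xe = [centroid[i] + gamma * (xr[i] - centroid[i]) for i in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr          # expansion
        else:
            xc = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc                               # contraction
            else:                                              # shrink toward best
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Stand-in objective with minimum at (3, -1)
opt = nelder_mead(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])
```

As in the paper's refinement step, the returned vertex can be fed back in as the new starting point to tighten the result.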
An Examination of Two Procedures for Identifying Consequential Item Parameter Drift
ERIC Educational Resources Information Center
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu
2014-01-01
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD, using a critical value defined to represent barely tolerable IPD. The second procedure…
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire data set.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
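For comparison, the widely used PERT convention from operations research (not the NASA Glenn in-house method described above, whose details are in ref. 1) fixes the beta shape parameters directly from the minimum, most likely, and maximum values. A sketch with an invented design parameter:

```python
import math

def pert_beta(a, m, b):
    # Beta shape parameters on [a, b] with mode m under the PERT
    # assumption (weight lambda = 4); one of several possible conventions.
    alpha = 1.0 + 4.0 * (m - a) / (b - a)
    beta = 1.0 + 4.0 * (b - m) / (b - a)
    return alpha, beta

def pert_mean_sd(a, m, b):
    # Mean and standard deviation of the scaled beta distribution.
    alpha, beta = pert_beta(a, m, b)
    mean = a + (b - a) * alpha / (alpha + beta)
    var = (b - a) ** 2 * alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return mean, math.sqrt(var)

# Hypothetical design parameter: minimum 0, most likely 1, maximum 4
alpha, beta = pert_beta(0.0, 1.0, 4.0)
mean, sd = pert_mean_sd(0.0, 1.0, 4.0)
```

Under this convention the mean reduces to the familiar (min + 4*mode + max)/6, and a symmetric mode gives symmetric shape parameters, mirroring the abstract's remark that the methods coincide for symmetrical distributions.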
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
Chen, Ying; Lin, Li
2017-07-01
Preeclampsia is a relatively common complication of pregnancy and considered to be associated with different degrees of coagulation dysfunction. This study was developed to evaluate the potential value of coagulation parameters for suggesting preeclampsia during the third trimester of pregnancy. Data from 188 healthy pregnant women, 125 patients with preeclampsia in the third trimester and 120 age-matched nonpregnant women were analyzed. Prothrombin time, prothrombin activity, activated partial thromboplastin time, fibrinogen (Fg), antithrombin, platelet count, mean platelet volume, platelet distribution width and plateletcrit were tested. All parameters, excluding prothrombin time, platelet distribution width and plateletcrit, differed significantly between healthy pregnant women and those with preeclampsia. Platelet count, antithrombin and Fg were significantly lower and mean platelet volume and prothrombin activity were significantly higher in patients with preeclampsia (P < 0.001). Among these parameters, the largest area under the receiver operating characteristic curve for preeclampsia was 0.872 for Fg with an optimal cutoff value of ≤2.87g/L (sensitivity = 0.68 and specificity = 0.98). For severe preeclampsia, the area under the curve for Fg reached up to 0.922 with the same optimal cutoff value (sensitivity = 0.84, specificity = 0.98, positive predictive value = 0.96 and negative predictive value = 0.93). Fg is a biomarker suggestive of preeclampsia in the third trimester of pregnancy, and our data provide a potential cutoff value of Fg ≤ 2.87g/L for screening preeclampsia, especially severe preeclampsia. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Unlike traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
Prediction of stream volatilization coefficients
Rathbun, Ronald E.
1990-01-01
Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation to use a model developed with a set of parameters at their native values to predict the dynamic behaviors of the same system under different parametric values, in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of the parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean value of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries; then more information (e.g. more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and calculation of inhaled reference doses as benchmark values for gauging systemic toxicity from inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no observed adverse effect level (NOAEL) exposure concentrations in animals to human equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure-concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Analysis of Mathematical Modelling on Potentiometric Biosensors
Mehala, N.; Rajendran, L.
2014-01-01
A mathematical model of potentiometric enzyme electrodes for a nonsteady condition has been developed. The model is based on the system of two coupled nonlinear time-dependent reaction diffusion equations for Michaelis-Menten formalism that describes the concentrations of substrate and product within the enzymatic layer. Analytical expressions for the concentration of substrate and product and the corresponding flux response have been derived for all values of parameters using the new homotopy perturbation method. Furthermore, the complex inversion formula is employed in this work to solve the boundary value problem. The analytical solutions obtained allow a full description of the response curves for only two kinetic parameters (unsaturation/saturation parameter and reaction/diffusion parameter). Theoretical descriptions are given for the two limiting cases (zero and first order kinetics) and relatively simple approaches for general cases are presented. All the analytical results are compared with simulation results using Scilab/Matlab program. The numerical results agree with the appropriate theories. PMID:25969765
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
Data collection handbook to support modeling the impacts of radioactive material in soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Cheng, J.J.; Jones, L.G.
1993-04-01
A pathway analysis computer code called RESRAD has been developed for implementing US Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), and material-related (soil, concrete) parameters are used in the RESRAD code. This handbook discusses parameter definitions, typical ranges, variations, measurement methodologies, and input screen locations. Although this handbook was developed primarily to support the application of RESRAD, the discussions and values are valid for other model applications.
Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. L. Kelly; A. Malkhasyan
2010-09-01
There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework, were noted as important issues in NUREG/CR-6813, performed for the U.S. Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards in 2003. That report noted that "industry lacks tools to perform time-trend analysis with Bayesian updating." This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, and conditions of the operating environment.
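A minimal illustration of the sampling machinery (not the Ageing PSA Network implementation): a Metropolis sampler for a loglinear time trend in a Poisson event rate, lambda(t) = exp(a + b*t), with diffuse normal priors. The yearly counts and proposal widths below are invented for the sketch:

```python
import math
import random

def log_post(a, b, ts, ks):
    # Poisson log-likelihood with rate exp(a + b*t), plus weak N(0, 10^2)
    # priors on both trend coefficients (log k! constant dropped).
    ll = sum(k * (a + b * t) - math.exp(a + b * t) for t, k in zip(ts, ks))
    return ll - (a * a + b * b) / (2.0 * 10.0 ** 2)

def metropolis(ts, ks, n_samples=5000, prop_sd=0.1, seed=1):
    # Random-walk Metropolis: propose a Gaussian step, accept with
    # probability min(1, posterior ratio).
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    lp = log_post(a, b, ts, ks)
    samples = []
    for _ in range(n_samples):
        a_new = a + rng.gauss(0.0, prop_sd)
        b_new = b + rng.gauss(0.0, prop_sd)
        lp_new = log_post(a_new, b_new, ts, ks)
        if math.log(rng.random()) < lp_new - lp:
            a, b, lp = a_new, b_new, lp_new
        samples.append((a, b))
    return samples

# Synthetic yearly event counts with a strongly increasing trend
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ks = [1, 2, 4, 8, 16]
samples = metropolis(ts, ks)
mean_b = sum(s[1] for s in samples[1000:]) / len(samples[1000:])
```

A clearly positive posterior mean for b is the trend signal that a constant-rate analysis of the pooled data would mask; weighting factors for multiple data sources would enter as per-source multipliers on the likelihood terms.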
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values.
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran90, which efficiently performs numerical calculations.
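The core regression step can be illustrated on a toy problem. UCODE wraps arbitrary external models through their text files; here a two-parameter Python function merely stands in for the application model, with forward-difference sensitivities and a damped ("modified") Gauss-Newton iteration as described in the abstract.

```python
import numpy as np

# Toy two-parameter model y = p0 * (1 - exp(-p1 * t)) standing in for an
# external application model; observations are synthetic and noise-free.
t = np.linspace(0.5, 5, 10)
true_p = np.array([2.0, 0.8])
def model(p):
    return p[0] * (1 - np.exp(-p[1] * t))
obs = model(true_p)

w = np.eye(len(t))               # observation weight matrix
p = np.array([1.0, 0.3])         # starting parameter values
for _ in range(50):
    r = obs - model(p)
    # forward-difference sensitivities, as UCODE approximates them
    J = np.empty((len(t), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = 1e-6 * max(abs(p[j]), 1e-6)
        J[:, j] = (model(p + dp) - model(p)) / dp[j]
    # Gauss-Newton step from the weighted normal equations J'WJ s = J'Wr
    step = np.linalg.solve(J.T @ w @ J, J.T @ w @ r)
    # damping: halve the step until the fit improves (the "modified" part)
    lam = 1.0
    while np.sum((obs - model(p + lam * step)) ** 2) > np.sum(r ** 2) and lam > 1e-6:
        lam /= 2
    p = p + lam * step
print(p)   # recovers the generating values [2.0, 0.8]
```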
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. For all excipients, the solubility parameter values obtained from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter values of the test solutes influence, though not significantly, the solubility parameter values of the excipients.
Methods of Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
We are working on the development of a method for optimizing wide-field x-ray telescope mirror prescriptions, including polynomial coefficients, mirror shell relative displacements, and (assuming 4 focal plane detectors) detector placement and tilt, that does not require a search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough that second order expansions are valid, we show that the performance at the detector surface can be expressed as a quadratic function of the parameters, with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The best values for the parameters are found by solving the linear system of equations created by setting the derivative of this function with respect to each parameter to zero. We describe the present status of this development effort.
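The optimization step described above reduces to linear algebra: once the merit function is quadratic in the design parameters, setting its gradient to zero gives one linear solve instead of a multi-dimensional search. The Hessian and gradient below are randomly generated stand-ins for the coefficients a ray trace would produce.

```python
import numpy as np

# Suppose the ray trace yields a quadratic merit function
# f(x) = c + g.x + 0.5 * x' H x in the design parameters x.
rng = np.random.default_rng(1)
n = 5
M = rng.normal(size=(n, n))
H = M @ M.T + n * np.eye(n)   # symmetric positive-definite Hessian stand-in
g = rng.normal(size=n)

# Setting df/dx = g + H x = 0 gives the linear system H x = -g.
x_opt = np.linalg.solve(H, -g)

# The gradient vanishes at the optimum (prints a value near machine zero).
print(np.abs(H @ x_opt + g).max())
```

Because H is positive definite, this stationary point is the unique minimum, so no parameter-space search is needed.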
Mathematical modeling of a thermovoltaic cell
NASA Technical Reports Server (NTRS)
White, Ralph E.; Kawanami, Makoto
1992-01-01
A new type of battery named 'Vaporvolt' cell is in the early stage of its development. A mathematical model of a CuO/Cu 'Vaporvolt' cell is presented that can be used to predict the potential and the transport behavior of the cell during discharge. A sensitivity analysis of the various transport and electrokinetic parameters indicates which parameters have the most influence on the predicted energy and power density of the 'Vaporvolt' cell. This information can be used to decide which parameters should be optimized or determined more accurately through further modeling or experimental studies. The optimal thicknesses of electrodes and separator, the concentration of the electrolyte, and the current density are determined by maximizing the power density. These parameter sensitivities and optimal design parameter values will help in the development of a better CuO/Cu 'Vaporvolt' cell.
Analysing the 21 cm signal from the epoch of reionization with artificial neural networks
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2017-07-01
The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on a new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction, and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
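A minimal sketch of the inversion idea, with an assumed one-parameter toy "spectrum" in place of 21cmFast outputs and a small NumPy network in place of the authors' ANN architecture: the network is trained on (spectrum, parameter) pairs and then asked to reconstruct the parameter from unseen spectra.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in forward model mapping one astrophysical parameter theta to a
# mock 8-bin "power spectrum" (in the paper this comes from 21cmFast).
def spectrum(theta):
    k = np.linspace(0.1, 1.0, 8)
    return theta * np.exp(-k / theta)

thetas = rng.uniform(0.5, 2.0, 70)        # 70 training samples, as in the paper
X = np.array([spectrum(t) for t in thetas])
mu, sd = X.mean(axis=0), X.std(axis=0)
X = (X - mu) / sd                         # standardize the inputs
y = thetas

# one-hidden-layer network trained by full-batch gradient descent on MSE
W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);      b2 = 0.0
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# reconstruct the parameter for unseen test spectra
test_t = np.array([0.8, 1.5])
Xt = (np.array([spectrum(t) for t in test_t]) - mu) / sd
test_pred = np.tanh(Xt @ W1 + b1) @ W2 + b2
print(test_pred)
```

Once trained, each inversion is a single forward pass, which is the sparse-exploration advantage the authors highlight over MCMC.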
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for predicting oxygen consumption, plus all the additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions of oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors associated with exhaustive manual calculations. PMID:19641642
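The underlying calculation is the indirect Fick equation; using several predicted oxygen-consumption values instead of one yields a likely range for each derived parameter. The numbers below are illustrative, not taken from any of the five prediction models.

```python
# Indirect Fick: flow Q = VO2 / (CaO2 - CvO2).
def fick_flow(vo2, cao2, cvo2):
    """Blood flow in L/min; VO2 in mL/min, O2 contents in mL O2 per L blood."""
    return vo2 / (cao2 - cvo2)

# Several predicted VO2 values (illustrative) replace a single prediction.
predicted_vo2 = [110.0, 120.0, 130.0, 140.0, 150.0]  # mL/min
cao2, cvo2 = 190.0, 140.0                            # mL O2 / L blood

flows = [fick_flow(v, cao2, cvo2) for v in predicted_vo2]
print(min(flows), max(flows))  # likely range: 2.2 to 3.0 L/min
```

Reporting the interval [min, max] across prediction models, rather than a single point, is exactly the "more realistic" replicate estimate the abstract describes.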
Modelling of intermittent microwave convective drying: parameter sensitivity
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated using COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, effective gas diffusivity, and evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change in value until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
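The one-at-a-time ±20% screening used here is easy to reproduce; the response function below is a toy stand-in for the COMSOL drying model, with "h" (heat transfer), "D" (gas diffusivity) and "eps" (a deliberately weak parameter) as assumed names.

```python
# One-at-a-time ±20% parameter screening on a toy response function.
def response(params):
    return 100.0 / params["h"] + 50.0 / params["D"] + 1e-4 * params["eps"]

base = {"h": 2.0, "D": 1.0, "eps": 5.0}
base_out = response(base)

effects = {}
for name in base:
    vals = []
    for factor in (0.8, 1.2):          # -20% and +20% perturbations
        p = dict(base)
        p[name] *= factor
        vals.append(100.0 * (response(p) - base_out) / base_out)
    effects[name] = vals
    print(f"{name}: {vals[0]:+.2f}% / {vals[1]:+.2f}%")
```

Parameters whose ±20% effect is orders of magnitude below the others (here "eps") are the ones the abstract classifies as having no considerable effect.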
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
A comparative method for processing immunological parameters: developing an "Immunogram".
Ortolani, Riccardo; Bellavite, Paolo; Paiola, Fiorenza; Martini, Morena; Marchesini, Martina; Veneri, Dino; Franchini, Massimo; Chirumbolo, Salvatore; Tridente, Giuseppe; Vella, Antonio
2010-04-01
The immune system is a network of numerous cells that communicate both directly and indirectly with each other. The system is very sensitive to antigenic stimuli, which are memorised, and is closely connected with the endocrine and nervous systems. Therefore, in order to study the immune system correctly, it must be considered in all its complexity by analysing its components with multiparametric tools that take its dynamic characteristics into account. We analysed lymphocyte subpopulations by using monoclonal antibodies with six different fluorochromes; the monoclonal panel employed included CD45, CD3, CD4, CD8, CD16, CD56, CD57, CD19, CD23, CD27, CD5, and HLA-DR. This panel has enabled us to measure many lymphocyte subsets in different states and with different functions: helper, suppressor, activated, effector, naïve, memory, and regulatory. A database was created to collect the values of immunological parameters of approximately 8,000 subjects who have undergone testing since 2000. When the distributions of the values for these parameters were compared with the medians of reference values published in the literature, we found that most of the values from the subjects included in the database were close to the medians in the literature. To process the data we used a comparative method that calculates the percentile rank of the values of a subject by comparing them with the values for other subjects of the same age. From this data processing we obtained a set of percentile ranks that represent the positions of the various parameters with regard to the data for other age-matched subjects included in the database. These positions, relative to both the absolute values and percentages, are plotted in a graph. We have called the final plot, which can be likened to that subject's immunological fingerprint, an "Immunogram".
In order to perform the necessary calculations automatically, we developed dedicated software (Immunogramma) which provides at least two different "pictures" for each subject: the first is based on a comparison of the individual's data with those from all age-related subjects, while the second provides a comparison with only age and disease-related subjects. In addition, we can superimpose two fingerprints from the same subject, calculated at different times, in order to produce a dynamic picture, for instance before and after treatment. Finally, with the aim of interpreting the clinical and diagnostic meaning of a set of positions for the values of the measured parameters, we can also search the database to determine whether it contains other subjects who have a similar pattern for some selected immune parameters. This method helps to study and follow-up immune parameters over time. The software enables automation of the process and data sharing with other departments and laboratories, so the database can grow rapidly, thus expanding its informational capacity.
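The percentile-rank comparison at the heart of the "Immunogram" can be sketched directly; the reference counts below are illustrative, not from the authors' database, and the mid-rank treatment of ties is one common convention, not necessarily the one Immunogramma uses.

```python
import numpy as np

def percentile_rank(value, reference):
    """Percentile rank of one subject's value among age-matched reference values."""
    reference = np.asarray(reference)
    below = np.sum(reference < value)
    equal = np.sum(reference == value)
    return 100.0 * (below + 0.5 * equal) / len(reference)

# illustrative CD4+ absolute counts (cells/uL) for age-matched subjects
ref = [450, 520, 600, 640, 700, 760, 820, 900, 980, 1100]
print(percentile_rank(700, ref))  # 45.0: the value sits mid-distribution
```

Repeating this for each measured subset yields the vector of positions that, plotted together, forms the subject's "fingerprint".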
Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi
2017-11-01
In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan, and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt and clay). The K and n values of cadmium, lead and fluorine of 17 soil samples were used to verify the SMLR models by the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and ignition loss (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33% and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium and fluorine to soil can be predicted from soil physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.
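If, as is conventional for sorption studies, K and n are Freundlich isotherm parameters (q = K·C^(1/n)), they can be recovered from batch sorption data by a log-log linear fit; the data below are synthetic and noise-free, purely to show the mechanics rather than the paper's SMLR models.

```python
import numpy as np

# Freundlich isotherm q = K * C**(1/n); taking logs gives a linear relation
# log q = log K + (1/n) log C, so K and n follow from a straight-line fit.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium concentration (mg/L)
K_true, n_true = 12.0, 2.5
q = K_true * C ** (1.0 / n_true)           # sorbed amount (mg/kg), noise-free

slope, intercept = np.polyfit(np.log(C), np.log(q), 1)
K_fit, n_fit = np.exp(intercept), 1.0 / slope
print(K_fit, n_fit)  # recovers 12.0 and 2.5
```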
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
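A two-point LOD score maximized over a grid of parameter values illustrates the statistic in its simplest form: phase-known meioses with the recombination fraction as the only genetic parameter. The counts are invented, and real linkage analyses maximize over richer parameter sets (penetrance, phenocopy rate, allele frequency).

```python
import numpy as np

def lod(theta, n_rec, n_nonrec):
    """Two-point LOD score for recombination fraction theta, phase-known meioses."""
    L = (theta ** n_rec) * ((1 - theta) ** n_nonrec)
    L0 = 0.5 ** (n_rec + n_nonrec)          # null: free recombination
    return np.log10(L / L0)

n_rec, n_nonrec = 2, 18                     # invented meiosis counts

# maximize over a grid of genetic parameter values rather than one value
grid = np.linspace(0.01, 0.5, 50)
scores = [lod(t, n_rec, n_nonrec) for t in grid]
best = grid[int(np.argmax(scores))]
print(best, max(scores))  # maximum near theta = n_rec/(n_rec+n_nonrec) = 0.1
```

The maximized score is what requires the inflated critical value the abstract discusses, since the maximization itself adds degrees of freedom.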
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayat, T.; Muhammad, Taseer
The development of human society depends greatly upon solar energy: heat, electricity and water can be obtained from nature through solar power. Sustainable energy generation is at present a critical issue in the development of human society, and solar energy is regarded as one of the best sources of renewable energy. Hence the purpose of the present study is to construct a model for radiative effects in three-dimensional flow of a nanofluid. Flow of a second grade fluid past an exponentially stretching surface is considered. Thermophoresis and Brownian motion effects are taken into account in the presence of heat source/sink and chemical reaction. Results are derived for the dimensionless velocities, temperature and concentration. Graphs are plotted to examine the impacts of the physical parameters on the temperature and concentration. Numerical computations are presented for the values of the skin-friction coefficients and the Nusselt and Sherwood numbers. It is observed that the skin-friction coefficients are larger for larger values of the second grade parameter. Moreover, the radiative effects on the temperature and concentration are quite the reverse.
K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution
DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...
2017-06-09
The k-ε turbulence model has been described as perhaps “the most widely used complete turbulence model.” This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding the satisfaction of well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
Near infrared spectroscopy (NIRS) for on-line determination of quality parameters in intact olives.
Salguero-Chaparro, Lourdes; Baeten, Vincent; Fernández-Pierna, Juan A; Peña-Rodríguez, Francisco
2013-08-15
The acidity, moisture and fat content in intact olive fruits were determined on-line using a NIR diode array instrument operating on a conveyor belt. Four sets of calibration models were obtained from different combinations of samples collected during 2009-2010 and 2010-2011, using full cross-validation and external validation. Several preprocessing treatments, such as derivatives and scatter correction, were investigated using the root mean square error of cross-validation (RMSECV) and of prediction (RMSEP) as control parameters. The results showed RMSECV values of 2.54-3.26 for moisture, 2.35-2.71 for fat content and 2.50-3.26 for acidity, depending on the calibration model developed. Calibrations for moisture, fat content and acidity gave residual predictive deviation (RPD) values of 2.76, 2.37 and 1.60, respectively. Although the on-line NIRS prediction results were acceptable for the three parameters measured in intact olive samples in movement, the models developed must be improved to increase their accuracy before final NIRS implementation at mills. Copyright © 2013 Elsevier Ltd. All rights reserved.
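The control statistics named above are simple to compute; the reference and predicted moisture values below are illustrative, not taken from the study.

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rpd(y_true, y_pred):
    """Residual predictive deviation: SD of the reference values over RMSEP."""
    return np.std(y_true, ddof=1) / rmsep(y_true, y_pred)

# illustrative moisture reference vs. NIR-predicted values (% w/w)
ref  = [48.0, 52.0, 55.0, 60.0, 63.0, 66.0]
pred = [49.1, 51.2, 56.3, 58.8, 64.0, 65.5]
print(round(rmsep(ref, pred), 3), round(rpd(ref, pred), 2))
```

By the usual rule of thumb an RPD above about 2 indicates a usable screening calibration, which is why the acidity model (RPD 1.60) is the one flagged for improvement.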
An Analytical Solution for Transient Thermal Response of an Insulated Structure
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated aerospace vehicle structure subjected to a simplified heat pulse. This simplified problem approximates the thermal response of a thermal protection system of an atmospheric entry vehicle. The exact analytical solution is solely a function of two non-dimensional parameters. A simpler function of these two parameters was developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Using these techniques, the maximum structural temperature rise was calculated using the analytical solutions and shown to typically agree with finite element simulations within 10 to 20 percent over the relevant range of parameters studied.
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application execution. For testing and verification, the environment of an application is abstracted using simplified models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution can be used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generating drivers and stubs based on values collected during runtime instead of default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
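The record-then-replay idea can be sketched in a few lines of Python (JPF-Android itself instruments Java/Android code; all names below are illustrative):

```python
import functools

LOG = {}

def record(fn):
    """Instrumentation: log each call's arguments and return value."""
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        LOG.setdefault(fn.__name__, []).append((args, result))
        return result
    return wrapper

@record
def get_device_id():          # stands in for an external native-library call
    return "emulator-5554"

get_device_id()               # a concrete run populates the log

def make_stub(name):
    """Build a stub that replays recorded return values instead of defaults."""
    values = [ret for _, ret in LOG[name]]
    def stub(*args):
        return values[0] if values else None
    return stub

stub = make_stub("get_device_id")
print(stub())  # "emulator-5554", not an empty default
```

During verification the stub returns a value the application actually expects, so code guarded by checks on that value is no longer missed.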
NASA Technical Reports Server (NTRS)
Claassen, J. P.; Fung, A. K.
1975-01-01
As part of an effort to demonstrate the value of the microwave scatterometer as a remote sea wind sensor, the interaction between an arbitrarily polarized scatterometer antenna and a noncoherent distributive target was derived and applied to develop a measuring technique to recover all the scattering parameters. The results are helpful for specifying antenna polarization properties for accurate retrieval of the parameters not only for the sea but also for other distributive scenes.
NASA Astrophysics Data System (ADS)
Sullivan, Z.; Fan, X.
2015-12-01
Currently, the Noah Land-Surface Model (Noah-LSM) coupled with the Weather Research and Forecasting (WRF) model does not represent the physical behavior of karst terrain, which is found in a large area of Tennessee and Kentucky and in 25% of the land area worldwide. The soluble nature of the bedrock within a karst geologic terrain allows for the formation of caverns, joints, fissures, sinkholes, and underground streams, which affect the hydrological behavior of the region. The Highland Rim of Tennessee and the Pennyroyal Plateau and Bluegrass region of Kentucky make up a larger karst area known as the Interior Low Plateau. The highly weathered upper portion of the karst terrain, known as the epikarst, allows for more rapid transport of water through the system. For this study, hydrological properties such as bedrock porosity and hydraulic conductivity were compiled for this region in order to determine the most representative subsurface parameters for the Noah-LSM. These values, along with similar proxy values, were used to calculate the remaining eight parameters within the SOILPARM.TBL for the WRF model. Hydraulic conductivity values for the karst bedrock within this region vary between around 10⁻⁷ and 10⁻⁵ m s⁻¹. A sand and a clay soil type were used along with the bedrock parameters to determine an average soil parameter type for the epikarst bedrock located within this region. Results from this study yield parameters for an epikarst bedrock type displaying higher water transport through the system, similar to that of a sandy soil type, with water retention similar to that of a loam soil. The physical nature of epikarst may lead to a decrease in latent heat values over this region and an increase in sensible heat values. This, in turn, may affect boundary layer growth, which could lead to convective development.
Future modeling work can be conducted using these values by way of coupling the soil parameters with the karst regions of the Tennessee/Kentucky area.
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated in this respect. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language. The influence of the subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs was also examined. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better able to accentuate high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths while preserving the goodness of fit of the overall model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance the interests of high modeling efficiency and high bracketing efficiency. The results of this study could provide (1) an option for conducting NPS modeling on a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for applying the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
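A minimal GLUE loop, assuming a one-parameter linear toy model and a Nash-Sutcliffe-based likelihood with the 0.55 acceptability threshold mentioned above; the data and model are invented stand-ins for the P indicator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "model" y = a*x with synthetic observations. GLUE samples parameter
# sets, scores each with a likelihood (Nash-Sutcliffe efficiency truncated
# at zero), keeps "behavioural" sets above an acceptability threshold, and
# weights predictions by the retained likelihoods.
x = np.linspace(1, 10, 20)
obs = 2.0 * x + rng.normal(0, 0.5, x.size)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

a_samples = rng.uniform(0.5, 4.0, 5000)
likelihoods = np.array([max(nse(a * x, obs), 0.0) for a in a_samples])

threshold = 0.55                      # the acceptability threshold discussed above
behavioural = likelihoods > threshold
weights = likelihoods[behavioural] / likelihoods[behavioural].sum()
a_post = np.sum(weights * a_samples[behavioural])
print(a_post)                         # likelihood-weighted estimate, near 2.0
```

Raising the threshold narrows the behavioural parameter range (and the uncertainty bands) at the cost of bracketing fewer observations, which is the trade-off behind the 0.55 choice.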
Gutierrez-Magness, Angelica L.
2006-01-01
Rapid population increases, agriculture, and industrial practices have been identified as important sources of excessive nutrients and sediments in the Delaware Inland Bays watershed. The amount and effect of excessive nutrients and sediments in the Inland Bays watershed have been well documented by the Delaware Geological Survey, the Delaware Department of Natural Resources and Environmental Control, the U.S. Environmental Protection Agency's National Estuary Program, the Delaware Center for Inland Bays, the University of Delaware, and other agencies. This documentation and data previously were used to develop a hydrologic and water-quality model of the Delaware Inland Bays watershed to simulate nutrients and sediment concentrations and loads, and to calibrate the model by comparing concentrations and streamflow data at six stations in the watershed over a limited period of time (October 1998 through April 2000). Although the model predictions of nutrient and sediment concentrations for the calibrated segments were fairly accurate, the predictions for the 28 ungaged segments located near tidal areas, where stream data were not available, were above the range of values measured in the area. The cooperative study established in 2000 by the Delaware Department of Natural Resources and Environmental Control, the Delaware Geological Survey, and the U.S. Geological Survey was extended to evaluate the model predictions in ungaged segments and to ensure that the model, developed as a planning and management tool, could accurately predict nutrient and sediment concentrations within the measured range of values in the area. The evaluation of the predictions was limited to the period of calibration (1999) of the 2003 model. To develop estimates on ungaged watersheds, parameter values from calibrated segments are transferred to the ungaged segments; however, accurate predictions are unlikely where parameter transference is subject to error. 
The unexpected nutrient and sediment concentrations simulated with the 2003 model were likely the result of inappropriate criteria for the transference of parameter values. From a model-simulation perspective, it is a common practice to transfer parameter values based on the similarity of soils or the similarity of land-use proportions between segments. For the Inland Bays model, the similarity of soils between segments was used as the basis to transfer parameter values. An alternative approach, which is documented in this report, is based on the similarity of the spatial distribution of the land use between segments and the similarity of land-use proportions, as these can be important factors for the transference of parameter values in lumped models. Previous work determined that the difference in the variation of runoff due to various spatial distributions of land use within a watershed can cause substantial loss of accuracy in the model predictions. The incorporation of the spatial distribution of land use to transfer parameter values from calibrated to uncalibrated segments provided more consistent and rational predictions of flow, especially during the summer, and consequently, predictions of lower nutrient concentrations during the same period. For the segments where the similarity of spatial distribution of land use was not clearly established with a calibrated segment, the similarity of the location of the most impervious areas was also used as a criterion for the transference of parameter values. The model predictions from the 28 ungaged segments were verified through comparison with measured in-stream concentrations from local and nearby streams provided by the Delaware Department of Natural Resources and Environmental Control. Model results indicated that the predicted edge-of-stream total suspended solids loads in the Inland Bays watershed were low in comparison to loads reported for the Eastern Shore of Maryland from the Chesapeake Bay watershed model.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
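The two extra screening steps can be sketched as follows. This is a hedged illustration, not the paper's implementation: the objective function is a stand-in for the model-evaluation metric, and the parameter names (entrainment, cape_limit, inactive) are invented. Step 1 ranks parameters by one-at-a-time sensitivity, step 2 picks a good start value for the sensitive ones from a coarse grid, and the final step would hand that start point to a downhill simplex (Nelder-Mead) optimizer.

```python
# Sketch of the two screening steps, with a hypothetical objective (lower is better).

def objective(params):
    # Stand-in for the comprehensive model-evaluation metric.
    return (params["entrainment"] - 0.3) ** 2 + 0.01 * (params["cape_limit"] - 70) ** 2

defaults = {"entrainment": 0.1, "cape_limit": 50, "inactive": 1.0}
base = objective(defaults)

# Step 1: one-at-a-time sensitivity (objective change for a +10% perturbation).
sensitivity = {}
for name, value in defaults.items():
    trial = dict(defaults)
    trial[name] = value * 1.1
    sensitivity[name] = abs(objective(trial) - base)

sensitive = [n for n, s in sensitivity.items() if s > 1e-9]

# Step 2: coarse grid search over each sensitive parameter for a start point.
start = dict(defaults)
for name in sensitive:
    candidates = [defaults[name] * f for f in (0.5, 1.0, 2.0, 3.0)]
    start[name] = min(candidates, key=lambda v: objective({**start, name: v}))
# Step 3 (not shown) would run a downhill simplex search from `start`.
```

Only `entrainment` and `cape_limit` survive the screen here, so the simplex search would run in two dimensions instead of three.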
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state-vector elements. In addition to these are mass and solid-propellant burn depth as "system" state elements. The "parameter" state elements can include deviations from reference values in aerodynamic coefficients, inertia, center of gravity, atmospheric winds, and so on. Propulsion parameter state elements are included not merely as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Compensatory parameters of intracranial space in giant hydrocephalus.
Cieślicki, Krzysztof; Czepko, Ryszard
2009-01-01
The main goal of the present study is to examine compensatory parameters of intracranial space in giant hydrocephalus. We also assess the early and late outcomes and analyse complications in shunted cases. Nine cases of giant hydrocephalus, characterised by an Evans ratio > 0.5, a ventricular index > 1.5, and a width of the third ventricle > 20 mm, were considered. Using the lumbar infusion test and developed software, we analysed the intracranial compensatory parameters typical for hydrocephalus. Based on the Marmarou model, a method relying on a repeated search for the best-fitting curve corresponding to the progress of the test was used. Eight of the nine patients were therefore shunted. Patients were followed up for 9 months. Five of the eight shunted patients undoubtedly improved within a few days after surgery (62%). Complications (subdural hygromas/haematomas and intracerebral haematoma) developed in 5 (62%) cases over the longer follow-up. A definite improvement was noted in 4 of the 8 operated cases (50%). To obtain stable values of the compensatory parameters, the duration of the infusion test must be at least double the inflexion time of the test curve. All but one of the considered cases of giant hydrocephalus were characterised by a lack of intracranial space reserve, a significantly reduced rate of CSF secretion, and various degrees of elevated resistance to outflow. Due to the significant number of complications and uncertain long-term improvement, great caution in decision making for shunting has to be taken.
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, K. A.; Hay, L.; Markstrom, S. L.
2014-12-01
The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
Kodama, Ayuto; Kume, Yu; Tsugaruya, Megumi; Ishikawa, Takashi
2016-01-01
The circadian rhythm in older adults is commonly known to change with a decrease in physical activity. However, the association between circadian rhythm metrics and physical activity remains unclear. The objective of this study was to examine circadian activity patterns in older people with and without dementia and to determine the amount of physical activity conducive to a good circadian measurement. Circadian parameters were collected from 117 older community-dwelling people (66 subjects without dementia and 52 subjects with dementia); the parameters were measured continuously using actigraphy for 7 days. A receiver operating characteristic (ROC) curve was applied to determine reference values for the circadian rhythm parameters, consisting of interdaily stability (IS), intradaily variability (IV), and relative amplitude (RA), in older subjects. The ROC curve revealed reference values of 0.55 for IS, 1.10 for IV, and 0.82 for RA. In addition, applying the ROC curve to the daily moderate-to-vigorous physical activity (MVPA) conducive to these reference values of the non-parametric circadian rhythm analysis, the optimal cut-off values were 51 minutes for IV and 55 minutes for RA; IS, however, had no classification accuracy. Our results demonstrate reference values derived from the circadian parameters of an older Japanese population with or without dementia. We also determined the MVPA conducive to a good circadian rest-activity pattern. This reference value for physical activity conducive to a good circadian rhythm might be useful for developing a new index for health promotion in the older community-dwelling population.
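The IS and IV measures named above follow standard non-parametric circadian rhythm definitions on hourly activity counts: IS compares the variance of the average 24-hour profile to the total variance, and IV compares hour-to-hour changes to the total variance. A minimal sketch on synthetic data (not the study's actigraphy) is:

```python
# Standard non-parametric circadian measures on hourly activity counts.
# IS ranges 0..1 (1 = perfectly repeating days); IV grows with fragmentation.

def circadian_is_iv(hourly_counts):
    """hourly_counts: activity per hour, length = whole number of days * 24."""
    n = len(hourly_counts)
    p = 24
    grand = sum(hourly_counts) / n
    # Mean activity at each hour of day, averaged across days.
    hourly_means = [sum(hourly_counts[h::p]) / (n // p) for h in range(p)]
    ss_total = sum((x - grand) ** 2 for x in hourly_counts)
    # IS: variance of the 24-h profile relative to total variance.
    is_val = (n * sum((m - grand) ** 2 for m in hourly_means)) / (p * ss_total)
    # IV: variance of hour-to-hour differences relative to total variance.
    iv_val = (n * sum((hourly_counts[i] - hourly_counts[i - 1]) ** 2
                      for i in range(1, n))) / ((n - 1) * ss_total)
    return is_val, iv_val

# A perfectly repeating day (8 h rest, 16 h activity) gives IS = 1.
day = [0] * 8 + [100] * 16
is_val, iv_val = circadian_is_iv(day * 7)
```

RA, the third measure, would additionally locate the most active 10 hours (M10) and least active 5 hours (L5) and compute (M10 - L5) / (M10 + L5).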
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transformation (LFT) model of a transport aircraft's longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and the real parameter uncertainties (aerodynamic coefficient uncertainty and moment-of-inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
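The tolerance guidelines quoted above lend themselves to a simple ratio checker: the three icing parameters must fall within 10 percent of their reference values, and the Reynolds and Weber numbers within 60 to 160 percent. The parameter values below are illustrative, not data from the Icing Research Tunnel tests.

```python
# Checker for the scaling-tolerance guidelines stated in the abstract
# (scale/reference ratio per parameter); sample values are made up.

TOLERANCES = {
    "stagnation_collection_efficiency": (0.90, 1.10),  # within 10% of reference
    "accumulation_parameter":           (0.90, 1.10),
    "freezing_fraction":                (0.90, 1.10),
    "reynolds_number":                  (0.60, 1.60),  # 60-160% of reference
    "weber_number":                     (0.60, 1.60),
}

def check_scaling(scale, reference):
    """Return the parameters whose scale/reference ratio violates the guideline."""
    out_of_range = []
    for name, (lo, hi) in TOLERANCES.items():
        ratio = scale[name] / reference[name]
        if not (lo <= ratio <= hi):
            out_of_range.append(name)
    return out_of_range

reference = {"stagnation_collection_efficiency": 0.60, "accumulation_parameter": 1.5,
             "freezing_fraction": 0.30, "reynolds_number": 8.0e4, "weber_number": 500.0}
scale = {"stagnation_collection_efficiency": 0.63, "accumulation_parameter": 1.4,
         "freezing_fraction": 0.36, "reynolds_number": 6.0e4, "weber_number": 650.0}
violations = check_scaling(scale, reference)
```

Here only the freezing fraction (ratio 1.2) falls outside its band, flagging that the scale test condition should be adjusted.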
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters having the most influence facilitates establishing the best values for model parameters, providing useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time against the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes substantially through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values compared to the best-fit parameter values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
Barkat, K; Ahmad, M; Minhas, M U; Malik, M Z; Sohail, M
2014-07-01
The objective of the study was to develop an accurate and reproducible HPLC method for the determination of piracetam in human plasma and to evaluate the pharmacokinetic parameters of 800 mg piracetam. A simple, rapid, accurate, precise and sensitive high-pressure liquid chromatography method was developed and subsequently validated for the determination of piracetam. This study presents the results of a randomized, single-dose, single-period study in 18 healthy male volunteers to assess the pharmacokinetic parameters of 800 mg piracetam tablets. Various pharmacokinetic parameters were determined from plasma for piracetam and found to be in good agreement with previously reported values. The data were analyzed using Kinetica® version 4.4 according to a non-compartmental model of pharmacokinetic analysis, and after comparison with previous studies, no significant differences were found for the tested product. The major pharmacokinetic parameters for piracetam were as follows: t1/2 was (4.40 ± 0.179) h; Tmax was (2.33 ± 0.105) h; Cmax was (14.53 ± 0.282) µg/mL; AUC(0-∞) was (59.19 ± 4.402) µg·h/mL; AUMC(0-∞) was (367.23 ± 38.96) µg·h²/mL; Ke was (0.16 ± 0.006) h⁻¹; MRT was (5.80 ± 0.227) h; Vd was (96.36 ± 8.917) L. A rapid, accurate and precise high-pressure liquid chromatography method was developed and validated before the study. It is concluded that this method is very useful for the analysis of pharmacokinetic parameters in human plasma, supports assessment of the safety and efficacy of piracetam, and can be effectively used in medical practice. © Georg Thieme Verlag KG Stuttgart · New York.
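The reported parameters come from standard non-compartmental analysis: Cmax and Tmax are read off the concentration-time curve, AUC(0-t) is integrated by the trapezoidal rule, and the terminal rate constant Ke gives t1/2 and the extrapolated AUC(0-∞). A minimal sketch with synthetic concentration data (not the study's) is:

```python
# Non-compartmental pharmacokinetic parameters from a concentration-time
# profile (synthetic data; linear trapezoidal rule, two-point terminal slope).
import math

times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]     # h
conc  = [0.0, 6.0, 10.0, 14.0, 9.0, 4.0]   # ug/mL

cmax = max(conc)
tmax = times[conc.index(cmax)]

# AUC(0-t) by linear trapezoids.
auc_0_t = sum((times[i] - times[i - 1]) * (conc[i] + conc[i - 1]) / 2
              for i in range(1, len(times)))

# Terminal elimination rate from the last two points, then extrapolate.
ke = (math.log(conc[-2]) - math.log(conc[-1])) / (times[-1] - times[-2])  # 1/h
t_half = math.log(2) / ke                                                 # h
auc_0_inf = auc_0_t + conc[-1] / ke                                       # ug*h/mL
```

A full NCA (as in Kinetica) would fit the terminal slope to several log-linear points rather than two, and would also report MRT from AUMC/AUC.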
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and a value of c much larger than the value of λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
Bütof, Rebecca; Hofheinz, Frank; Zöphel, Klaus; Stadelmann, Tobias; Schmollack, Julia; Jentsch, Christina; Löck, Steffen; Kotzerke, Jörg; Baumann, Michael; van den Hoff, Jörg
2015-08-01
Despite ongoing efforts to develop new treatment options, the prognosis for patients with inoperable esophageal carcinoma is still poor and the reliability of individual therapy outcome prediction based on clinical parameters is not convincing. The aim of this work was to investigate whether PET can provide independent prognostic information in such a patient group and whether the tumor-to-blood standardized uptake ratio (SUR) can improve the prognostic value of tracer uptake values. (18)F-FDG PET/CT was performed in 130 consecutive patients (mean age ± SD, 63 ± 11 y; 113 men, 17 women) with newly diagnosed esophageal cancer before definitive radiochemotherapy. In the PET images, the metabolically active tumor volume (MTV) of the primary tumor was delineated with an adaptive threshold method. The blood standardized uptake value (SUV) was determined by manually delineating the aorta in the low-dose CT. SUR values were computed as the ratio of tumor SUV and blood SUV. Uptake values were scan-time-corrected to 60 min after injection. Univariate Cox regression and Kaplan-Meier analysis with respect to overall survival (OS), distant metastases-free survival (DM), and locoregional tumor control (LRC) was performed. Additionally, a multivariate Cox regression including clinically relevant parameters was performed. In multivariate Cox regression with respect to OS, including T stage, N stage, and smoking state, MTV- and SUR-based parameters were significant prognostic factors for OS with similar effect size. Multivariate analysis with respect to DM revealed smoking state, MTV, and all SUR-based parameters as significant prognostic factors. The highest hazard ratios (HRs) were found for scan-time-corrected maximum SUR (HR = 3.9) and mean SUR (HR = 4.4). None of the PET parameters was associated with LRC. Univariate Cox regression with respect to LRC revealed a significant effect only for N stage greater than 0 (P = 0.048). 
PET provides prognostic information for OS and DM, independent of clinical parameters, but not for LRC in patients with locally advanced esophageal carcinoma treated with definitive radiochemotherapy. Among the investigated uptake-based parameters, only SUR was an independent prognostic factor for OS and DM. These results suggest that the prognostic value of tracer uptake can be improved when it is characterized by SUR instead of SUV. Further investigations are required to confirm these preliminary results. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
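The SUR itself is a simple ratio of the tumor uptake to the blood-pool uptake; a brief sketch with illustrative SUVs (the study-specific scan-time correction to 60 min after injection is omitted here) is:

```python
# Tumor-to-blood standardized uptake ratio, as defined in the abstract.
# SUVs below are illustrative, not patient data.

def standardized_uptake_ratio(tumor_suv, blood_suv):
    """SUR = tumor SUV / blood SUV (blood SUV from the delineated aorta)."""
    return tumor_suv / blood_suv

sur_max = standardized_uptake_ratio(tumor_suv=9.6, blood_suv=1.6)
```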
On the relationship between NMR-derived amide order parameters and protein backbone entropy changes
Sharp, Kim A.; O’Brien, Evan; Kasinath, Vignesh; Wand, A. Joshua
2015-01-01
Molecular dynamics simulations are used to analyze the relationship between NMR-derived squared generalized order parameters of amide NH groups and backbone entropy. Amide order parameters (O2NH) are largely determined by the secondary structure and average values appear unrelated to the overall flexibility of the protein. However, analysis of the more flexible subset (O2NH < 0.8) shows that these report both on the local flexibility of the protein and on a different component of the conformational entropy than that reported by the side chain methyl axis order parameters, O2axis. A calibration curve for backbone entropy vs. O2NH is developed which accounts for both correlations between amide group motions of different residues, and correlations between backbone and side chain motions. This calibration curve can be used with experimental values of O2NH changes obtained by NMR relaxation measurements to extract backbone entropy changes, e.g. upon ligand binding. In conjunction with our previous calibration for side chain entropy derived from measured O2axis values this provides a prescription for determination of the total protein conformational entropy changes from NMR relaxation measurements. PMID:25739366
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods of presenting system-parameter information for detecting abnormal conditions and identifying system status. Two workstation experiments were conducted. The first experiment determined whether including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined whether a nontraditional parameter display format, which presented relative deviation from the expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information in traditional parameter formats was found to have essentially no effect, although subjective results indicated support for including this information. The nontraditional column-deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value ranges included. In addition, error rates for the column-deviation format remained stable as scenario complexity increased, whereas error rates for the traditional formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column-deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
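The column-deviation idea normalizes each reading against its expected-value range, so every parameter is drawn on the same scale and out-of-tolerance values stand out at a glance. A minimal sketch (the parameter names, ranges, and readings are hypothetical, not from the experiments) is:

```python
# Relative deviation from the expected-value range: 0 at the range center,
# +/-1 at the range edges, |deviation| > 1 means out of tolerance.

def relative_deviation(value, expected_low, expected_high):
    center = (expected_low + expected_high) / 2
    half_range = (expected_high - expected_low) / 2
    return (value - center) / half_range

# (reading, expected low, expected high) -- hypothetical system parameters.
readings = {
    "EGT":       (620.0, 400.0, 800.0),
    "oil_press": (95.0, 40.0, 90.0),
}
deviations = {name: relative_deviation(v, lo, hi)
              for name, (v, lo, hi) in readings.items()}
out_of_tolerance = [n for n, d in deviations.items() if abs(d) > 1.0]
```

Rendering each deviation as a column of uniform height is what lets a single glance cover a large number of parameters.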
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposes a Bayesian-method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor-loading and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts'-opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared with the traditional SEM.
Modeling, analysis, and simulation of the co-development of road networks and vehicle ownership
NASA Astrophysics Data System (ADS)
Xu, Mingtao; Ye, Zhirui; Shan, Xiaofeng
2016-01-01
A two-dimensional logistic model is proposed to describe the co-development of road networks and vehicle ownership. The endogenous interaction between road networks and vehicle ownership, and how natural market forces and policies translate into their co-development, are considered jointly in this model. If the involved parameters satisfy a certain condition, the proposed model arrives at a steady equilibrium level and the final development scale remains within the maximum capacity of the urban traffic system; otherwise, the co-development process will be unstable and may even manifest chaotic behavior. Sensitivity tests are then developed to determine proper values for a series of parameters in the model. Finally, a case study using Beijing as an example is conducted to explore the applicability of the proposed model to real conditions. Results demonstrate that the proposed model can effectively simulate the co-development of the road network and vehicle ownership for Beijing. Furthermore, the results indicate that the two development processes will reach a stable equilibrium level in the years 2040 and 2045, respectively, with equilibrium values within the maximum capacity of the urban traffic system.
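One plausible coupled-logistic form can be sketched as below. The paper's exact equations and calibrated values are not given here, so the growth rates, coupling strengths, capacities, and initial conditions are illustrative assumptions; in this variant each variable's growth rate is boosted by the other variable's saturation level, while the carrying capacities bound the equilibrium, matching the "within maximum capacity" behavior described.

```python
# Hypothetical two-dimensional coupled-logistic co-development model
# (Euler integration): road length and vehicle ownership reinforce each
# other's growth rates, but each is capped by its own carrying capacity.

def simulate(r_road=0.08, r_veh=0.10, c_road=0.04, c_veh=0.05,
             k_road=10_000.0, k_veh=6_000_000.0,
             road0=800.0, veh0=200_000.0, dt=0.1, steps=5000):
    road, veh = road0, veh0
    for _ in range(steps):
        d_road = (r_road + c_road * veh / k_veh) * road * (1 - road / k_road)
        d_veh = (r_veh + c_veh * road / k_road) * veh * (1 - veh / k_veh)
        road += dt * d_road
        veh += dt * d_veh
    return road, veh

road_eq, veh_eq = simulate()
```

With these small rates the system settles at its carrying capacities; larger effective growth rates relative to the time step would instead produce the oscillatory or chaotic regime the abstract mentions.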
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
Wahl, Jochen; Barleon, Lorenz; Morfeld, Peter; Lichtmeß, Andrea; Haas-Brähler, Sibylle; Pfeiffer, Norbert
2016-01-01
Purpose To develop an expert system for glaucoma screening in a working population based on a human expert procedure using images of optic nerve head (ONH), visual field (frequency doubling technology, FDT) and intraocular pressure (IOP). Methods 4167 of 13037 (32%) employees between 40 and 65 years of Evonik Industries were screened. An experienced glaucoma expert (JW) assessed papilla parameters and evaluated all individual screening results. His classification into “no glaucoma”, “possible glaucoma” and “probable glaucoma” was defined as “gold standard”. A screening model was developed which was tested versus the gold-standard. This model took into account the assessment of the ONH. Values and relationships of CDR and IOP and the FDT were considered additionally and a glaucoma score was generated. The structure of the screening model was specified a priori whereas values of the parameters were chosen post-hoc to optimize sensitivity and specificity of the algorithm. Simple screening models based on IOP and / or FDT were investigated for comparison. Results 111 persons (2.66%) were classified as glaucoma suspects, thereof 13 (0.31%) as probable and 98 (2.35%) as possible glaucoma suspects by the expert. Re-evaluation by the screening model revealed a sensitivity of 83.8% and a specificity of 99.6% for all glaucoma suspects. The positive predictive value of the model was 80.2%, the negative predictive value 99.6%. Simple screening models showed insufficient diagnostic accuracy. Conclusion Adjustment of ONH and symmetry parameters with respect to excavation and IOP in an expert system produced sufficiently satisfying diagnostic accuracy. This screening model seems to be applicable in such a working population with relatively low age and low glaucoma prevalence. Different experts should validate the model in different populations. PMID:27479301
TU-FG-201-09: Predicting Accelerator Dysfunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, C; Nguyen, C; Baydush, A
Purpose: To develop an integrated statistical process control (SPC) framework, using digital performance and component data accumulated within the accelerator system, that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for twelve to 18 months. The accelerators were operated in a ‘run to failure’ mode, with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using individuals and moving range (I/MR) control charts. The SPC evaluation was presented in a customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance, ANOVA) and graphical analyses to determine the consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam-generation-related parameters (BGP): RF driver voltage, gun grid voltage, and forward power (W); beam uniformity parameters: angle and position steering-coil currents; and the gantry position accuracy parameter: cross-correlation max-value. Control charts for individual MLC cross-correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise.
Long-term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical Systems, Inc. provided funding in support of the research presented.
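The I/MR chart limits described in this abstract follow a standard SPC construction. A minimal sketch using only the textbook 3σ limits (the paper's hybrid empirical specification-based factor is omitted, and the helper names are illustrative, not from the paper):

```python
import statistics

def imr_limits(values):
    """Individual/Moving-Range (I/MR) chart limits.

    Standard SPC estimate: sigma ~ MR-bar / d2, with d2 = 1.128 for
    moving ranges of span 2; D4 = 3.267 bounds the MR chart.
    Returns (LCL, center, UCL) for each chart.
    """
    mrs = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(mrs)
    center = statistics.mean(values)
    sigma = mr_bar / 1.128
    return {
        "I": (center - 3 * sigma, center, center + 3 * sigma),
        "MR": (0.0, mr_bar, 3.267 * mr_bar),
    }

def alarms(values, limits):
    """Indices of individual values outside the I-chart control limits."""
    lcl, _, ucl = limits["I"]
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]
```

In a run-to-failure setting, limits would be computed on an in-control baseline period and then applied to incoming daily QA values.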
Omori's Law Applied to Mining-Induced Seismicity and Re-entry Protocol Development
NASA Astrophysics Data System (ADS)
Vallejos, J. A.; McKinnon, S. D.
2010-02-01
This paper describes a detailed study of the modified Omori law n(t) = K/(c + t)^p applied to 163 mining-induced aftershock sequences from four different mine environments in Ontario, Canada. We demonstrate, using a rigorous statistical analysis, that this equation can be adequately used to describe the decay rate of mining-induced aftershock sequences. The parameters K, p and c are estimated using a uniform method that employs the maximum likelihood procedure and the Anderson-Darling statistic. To estimate consistent decay parameters, the method considers only the time interval that satisfies power-law behavior. The p value differs from sequence to sequence, with most (98%) ranging from 0.4 to 1.6. The parameter K can be satisfactorily expressed by K = κN1, where κ is an activity ratio and N1 is the measured number of events occurring during the first hour after the principal event. The average κ values are in a well-defined range: theoretically κ ≤ 0.8, and empirically κ ∈ [0.3-0.5]. These two findings enable us to develop a real-time event-rate re-entry protocol 1 h after the principal event. Despite the fact that the Omori formula is temporally self-similar, we found a characteristic time T_MC at the maximum curvature point, which is a function of Omori's law parameters. For a time sequence obeying an Omori process, T_MC marks the transition from the highest to the lowest event-rate change. Using solely the aftershock decay rate, therefore, we recommend T_MC as a preliminary estimate of the time at which it may be considered appropriate to re-enter an area affected by a blast or large event. We found that T_MC can be estimated without specifying a p value by the expression T_MC = a·N1^(1/b), where a and b are two parameters dependent on local conditions. Both parameters presented well-constrained empirical ranges for the sites analyzed: a ∈ [0.3-0.5] and b ∈ [0.5-0.7].
These findings provide concise and well-justified guidelines for event rate re-entry protocol development.
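The closed-form relations reported above (the Omori rate, K = κN1, and the re-entry estimate T_MC = a·N1^(1/b)) can be written down directly. A minimal sketch; the default a and b are illustrative midpoints of the reported empirical ranges, and real values are site-dependent:

```python
def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p."""
    return K / (c + t) ** p

def omori_K(kappa, N1):
    """K = kappa * N1; kappa is empirically in [0.3, 0.5]."""
    return kappa * N1

def reentry_time(N1, a=0.4, b=0.6):
    """Preliminary re-entry time T_MC = a * N1**(1/b).

    N1 is the event count in the first hour after the principal event.
    Defaults are midpoints of the reported ranges a in [0.3, 0.5]
    and b in [0.5, 0.7]; both must be calibrated to local conditions.
    """
    return a * N1 ** (1.0 / b)
```

Because T_MC depends only on the first-hour event count, it can be evaluated in real time, 1 h after the blast or large event.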
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the sheer power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a given system of differential equations to an equivalent system. Second, since the equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
NASA Astrophysics Data System (ADS)
Llorens-Chiralt, R.; Weiss, P.; Mikonsaari, I.
2014-05-01
Material characterization is one of the key steps when conductive polymers are developed. The dispersion of carbon nanotubes (CNTs) in a polymeric matrix using melt mixing influences the final composite properties. Compounding becomes trial and error, using a huge amount of material and spending time and money to obtain competitive composites. Traditional methods to carry out electrical conductivity characterization include compression and injection molding; both methods need extra equipment and moulds to obtain standard bars. This study aims to investigate the accuracy of the data obtained from the absolute resistance recorded during melt compounding, using an on-line setup developed by our group, and to correlate these values with off-line characterization and processing parameters (screw/barrel configuration, throughput, screw speed, temperature profile and CNT percentage). Compounds developed with different percentages of multi-walled carbon nanotubes (MWCNTs) and polycarbonate have been characterized during and after extrusion. Measurements of on-line resistance and off-line resistivity showed parallel response and reproducibility, confirming the validity of the method. The significance of the results stems from the fact that we are able to measure on-line resistance and to change compounding parameters during production to achieve reference values, reducing production/testing costs and ensuring material quality. This method also removes errors that can arise in test-bar preparation, showing better correlation with compounding parameters.
Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.
2014-07-01
The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
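The ensemble Kalman filter update at the core of such a parameter calibration can be sketched for a single scalar parameter. This is a simplified, deterministic analysis step on an augmented parameter-observation ensemble (a stochastic EnKF would also perturb the observation per member); none of the NEMO-LIM3 machinery is reproduced, and the function name is illustrative:

```python
def enkf_update(params, preds, obs, obs_err):
    """One deterministic EnKF analysis step for a scalar parameter.

    params: ensemble of parameter values (e.g. candidate P* values);
    preds:  the model-predicted observation for each member;
    obs:    the measured value (e.g. observed mean ice drift speed).
    The Kalman gain is the ensemble parameter-prediction covariance
    divided by the prediction variance plus observation-error variance.
    """
    n = len(params)
    pm = sum(params) / n
    hm = sum(preds) / n
    cov_ph = sum((p - pm) * (h - hm) for p, h in zip(params, preds)) / (n - 1)
    var_h = sum((h - hm) ** 2 for h in preds) / (n - 1)
    gain = cov_ph / (var_h + obs_err ** 2)
    return [p + gain * (obs - h) for p, h in zip(params, preds)]
```

In a twin experiment with a linear model and negligible observation error, one such update collapses the ensemble onto the true parameter, which mirrors the paper's recovery of default values in perfect-model tests.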
Managing an "Organistically-Oriented" School Intervention Program.
ERIC Educational Resources Information Center
Barrilleaux, Louis E.; Schermerhorn, John R., Jr.
An "organic" relationship between change agents and clients is a well-popularized goal in recent literature on planned change and organization development. There is, however, a dearth of literature on how to manage change programs under parameters set by this organic value. This paper examines one school organization development program for the…
Filatov, B N; Britanov, N G; Tochilkina, L P; Zhukov, V E; Maslennikov, A A; Ignatenko, M N; Volchek, K
2011-01-01
The threat of industrial chemical accidents and terrorist attacks requires the development of safety regulations for the cleanup of contaminated surfaces. This paper presents principles and a methodology for the development of a new toxicological parameter, "relative value unit" (RVU) as the primary decontamination standard.
Measurement of the Acoustic Nonlinearity Parameter for Biological Media.
NASA Astrophysics Data System (ADS)
Cobb, Wesley Nelson
In vitro measurements of the acoustic nonlinearity parameter are presented for several biological media. With these measurements it is possible to predict the distortion of a finite-amplitude wave in biological tissues of current diagnostic and research interest. The measurement method is based on the finite-amplitude distortion of a sine wave emitted by a piston source. The growth of the second harmonic component of this wave is measured by a piston receiver which is coaxial with and has the same size as the source. The experimental measurements and theory are compared in order to determine the nonlinearity parameter. The density, sound speed, and attenuation of the medium are determined in order to make this comparison. The theory developed for this study accounts for the influence of both diffraction and attenuation on the experimental measurements. The effects of dispersion, tissue inhomogeneity and gas bubbles within the excised tissues are studied. To test the measurement method, experimental results are compared with established values of the nonlinearity parameter for distilled water, ethylene glycol and glycerol. The agreement between these values suggests that the measurement uncertainty is ±5% for liquids and ±10% for solid tissues. Measurements are presented for dog blood and bovine serum albumin as a function of concentration. The nonlinearity parameters for liver, kidney and spleen are reported for both human and canine tissues. The values for the fresh tissues displayed little variation (6.8 to 7.8). Measurements for fixed, normal and cirrhotic tissues indicated that the nonlinearity parameter does not depend strongly on pathology. However, the values for fixed tissues were somewhat higher than those for the fresh tissues.
Dregely, Isabel; Mugler, John P.; Ruset, Iulian C.; Altes, Talissa A.; Mata, Jaime F.; Miller, G. Wilson; Ketel, Jeffrey; Ketel, Steve; Distelbrink, Jan; Hersman, F.W.; Ruppert, Kai
2011-01-01
Purpose To develop and test a method to non-invasively assess the functional lung microstructure. Materials and Methods The Multiple exchange time Xenon polarization Transfer Contrast technique (MXTC) encodes xenon gas-exchange contrast at multiple delay times permitting two lung-function parameters to be derived: 1) MXTC-F, the long exchange-time depolarization value, which is proportional to the tissue to alveolar-volume ratio and 2) MXTC-S, the square root of the xenon exchange-time constant, which characterizes thickness and composition of alveolar septa. Three healthy volunteers, one asthmatic and two COPD (GOLD stage I and II) subjects were imaged with MXTC MRI. In a subset of subjects, hyperpolarized xenon-129 ADC MRI and CT imaging were also performed. Results The MXTC-S parameter was found to be elevated in subjects with lung disease (p-value = 0.018). In the MXTC-F parameter map it was feasible to identify regional loss of functional tissue in a COPD patient. Further, the MXTC-F map showed excellent regional correlation with CT and ADC (ρ ≥ 0.90) in one COPD subject. Conclusion The functional tissue-density parameter MXTC-F showed regional agreement with other imaging techniques. The newly developed parameter MXTC-S, which characterizes the functional thickness of alveolar septa, has potential as a novel biomarker for regional parenchymal inflammation or thickening. PMID:21509861
ERIC Educational Resources Information Center
Pinkston, Jonathan W.; Branch, Marc N.
2004-01-01
Daily administration of cocaine often results in the development of tolerance to its effects on responding maintained by fixed-ratio schedules. Such effects have been observed to be greater when the ratio value is small, whereas less or no tolerance has been observed at large ratio values. Similar schedule-parameter-dependent tolerance, however,…
Charge relaxation and dynamics in organic semiconductors
NASA Astrophysics Data System (ADS)
Kwok, H. L.
2006-08-01
Charge relaxation in dispersive materials is often described in terms of the stretched-exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) of the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower-bound values on the materials parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched-exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant coulomb interactions.
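The Kohlrausch (stretched-exponential) relaxation invoked above, and the mean relaxation time it implies, can be stated compactly. A generic sketch, not tied to the pentacene data or to the geometric hopping model itself:

```python
import math

def kohlrausch(t, tau, beta):
    """Stretched-exponential (Kohlrausch) decay: exp(-(t/tau)**beta).

    beta is the stretching index, 0 < beta <= 1; beta = 1 recovers a
    simple exponential.
    """
    return math.exp(-((t / tau) ** beta))

def mean_relaxation_time(tau, beta):
    """Mean relaxation time <tau> = (tau / beta) * Gamma(1 / beta),
    obtained by integrating the Kohlrausch decay over t from 0 to inf."""
    return (tau / beta) * math.gamma(1.0 / beta)
```

Small beta (strong stretching) inflates the mean relaxation time far beyond tau, which is one way dispersive transport in disordered organic semiconductors shows up in the decay data.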
Determination and correction of persistent biases in quantum annealers
Perdomo-Ortiz, Alejandro; O’Gorman, Bryan; Fluegemann, Joseph; Biswas, Rupak; Smelyanskiy, Vadim N.
2016-01-01
Calibration of quantum computers is essential to the effective utilisation of their quantum resources. Specifically, the performance of quantum annealers is likely to be significantly impaired by noise in their programmable parameters, effectively misspecification of the computational problem to be solved, often resulting in spurious suboptimal solutions. We developed a strategy to determine and correct persistent, systematic biases between the actual values of the programmable parameters and their user-specified values. We applied the recalibration strategy to two D-Wave Two quantum annealers, one at NASA Ames Research Center in Moffett Field, California, and another at D-Wave Systems in Burnaby, Canada. We show that the recalibration procedure not only reduces the magnitudes of the biases in the programmable parameters but also enhances the performance of the device on a set of random benchmark instances. PMID:26783120
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
1991-01-01
An analysis of flight measurements made near a wake vortex was conducted to explore the feasibility of providing a pilot with useful wake avoidance information. The measurements were made with relatively low cost flow and motion sensors on a light airplane flying near the wake vortex of a turboprop airplane weighing approximately 90000 lbs. Algorithms were developed which removed the response of the airplane to control inputs from the total airplane response and produced parameters which were due solely to the flow field of the vortex. These parameters were compared with values predicted by potential theory. The results indicated that the presence of the vortex could be detected by a combination of parameters derived from the simple sensors. However, the location and strength of the vortex cannot be determined without additional and more accurate sensors.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into it as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter-discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
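The simulated-annealing component of such a parameter-discovery loop can be sketched generically. In the paper the score comes from statistical model checking with sequential hypothesis testing; here it is replaced by an arbitrary user-supplied score function, and all names and defaults are illustrative:

```python
import math
import random

def anneal(score, propose, x0, t0=1.0, cooling=0.97, steps=300, seed=0):
    """Minimal simulated-annealing search minimizing score(x).

    score:   maps a candidate parameter value to a cost (lower is better);
    propose: maps (current value, rng) to a neighboring candidate.
    """
    rng = random.Random(seed)
    x, s = x0, score(x0)
    best, best_s = x, s
    t = t0
    for _ in range(steps):
        y = propose(x, rng)
        sy = score(y)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp((s - sy) / t), which shrinks as t cools.
        if sy <= s or rng.random() < math.exp((s - sy) / t):
            x, s = y, sy
            if s < best_s:
                best, best_s = x, s
        t *= cooling
    return best, best_s
```

In the full algorithm, each score(x) call would itself be a statistical-model-checking run over stochastic simulations, which is what makes a parallel (e.g. CUDA) implementation worthwhile.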
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if it is, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. 
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
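Of the three estimators evaluated, the method of moments is the simplest to state: for a Poisson-gamma (NB2) model with Var = μ + αμ², solving for the dispersion parameter gives α̂ = (s² − m̄)/m̄². A minimal sketch under that parameterization (the helper name is illustrative):

```python
import statistics

def mom_dispersion(counts):
    """Method-of-moments estimate of the NB2 dispersion parameter alpha.

    From Var = mu + alpha * mu**2:  alpha_hat = (s**2 - m) / m**2,
    truncated at zero when the data are equi- or under-dispersed
    (where a Poisson model would suffice).
    """
    m = statistics.mean(counts)
    v = statistics.variance(counts)  # sample variance, n - 1 denominator
    return max((v - m) / m ** 2, 0.0)
```

The low-mean problem is visible directly in this formula: with a small m̄, the denominator m̄² is tiny and sampling noise in s² − m̄ is amplified, so α̂ becomes unstable exactly in the regime the paper studies.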
Designing occupancy studies when false-positive detections occur
Clement, Matthew
2016-01-01
1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false-positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources.
This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
Effects of developer exhaustion on DFL Contrast FV-58 and Kodak Insight dental films.
de Carvalho, Fabiano Pachêco; da Silveira, M M F; Frazão, M A G; de Santana, S T; dos Anjos Pontual, M L
2011-09-01
The aim of this study was to compare the properties of the DFL Contrast FV-58 F-speed film (DFL Co., Rio de Janeiro, Brazil) with the Kodak Insight E/F-speed film (Eastman Kodak, Rochester, NY) in fresh and exhausted processing solutions. The parameters studied were the speed, average gradient and latitude. Five samples of each type of film were exposed under standardized conditions over 5 weeks. The films were developed in fresh and progressively exhausted processing solutions. Characteristic curves were constructed from values of optical density and radiation dose and were used to calculate the parameters. An analysis of variance was performed separately for film type and time. DFL Contrast FV-58 film has a speed and average gradient significantly higher than those of Insight film, whereas its latitude values are lower. Exhaustion of the processing solutions had no significant effect on the parameters studied. DFL Contrast FV-58 film has stable properties when exhausted manual processing solutions are used and can be recommended for use in dental practice, contributing to dose reduction.
Effects of developer exhaustion on DFL Contrast FV-58 and Kodak Insight dental films
de Carvalho, FP; da Silveira, MMF; Frazão, MAG; de Santana, ST; dos Anjos Pontual, ML
2011-01-01
Objectives The aim of this study was to compare the properties of the DFL Contrast FV-58 F-speed film (DFL Co., Rio de Janeiro, Brazil) with the Kodak Insight E/F-speed film (Eastman Kodak, Rochester, NY) in fresh and exhausted processing solutions. The parameters studied were the speed, average gradient and latitude. Methods Five samples of each type of film were exposed under standardized conditions over 5 weeks. The films were developed in fresh and progressively exhausted processing solutions. Characteristic curves were constructed from values of optical density and radiation dose and were used to calculate the parameters. An analysis of variance was performed separately for film type and time. Results DFL Contrast FV-58 film has a speed and average gradient significantly higher than those of Insight film, whereas its latitude values are lower. Exhaustion of the processing solutions had no significant effect on the parameters studied. Conclusion DFL Contrast FV-58 film has stable properties when exhausted manual processing solutions are used and can be recommended for use in dental practice, contributing to dose reduction. PMID:21831975
Liu, Feng; Chen, Long; Rao, Hui-Ying; Teng, Xiao; Ren, Ya-Yun; Lu, Yan-Qiang; Zhang, Wei; Wu, Nan; Liu, Fang-Fang; Wei, Lai
2017-01-01
Animal models provide a useful platform for developing and testing new drugs to treat liver fibrosis. Accordingly, we developed a novel automated system to evaluate liver fibrosis in rodent models. This system uses second-harmonic generation (SHG)/two-photon excited fluorescence (TPEF) microscopy to assess a total of four mouse and rat models, using chemical treatment with either thioacetamide (TAA) or carbon tetrachloride (CCl4), and a surgical method, bile duct ligation (BDL). The results obtained by the new technique were compared with those using Ishak fibrosis scores and two currently used quantitative methods for determining liver fibrosis: the collagen proportionate area (CPA) and measurement of hydroxyproline (HYP) content. We show that 11 shared morphological parameters faithfully recapitulate Ishak fibrosis scores in the models, with high area under the receiver operating characteristic (ROC) curve (AUC) performance. The AUC values of the 11 shared parameters were greater than that of the CPA (TAA: 0.758-0.922 vs 0.752-0.908; BDL: 0.874-0.989 vs 0.678-0.966) in the TAA mouse and BDL rat models and similar to that of the CPA in the TAA rat and CCl4 mouse models. Similarly, based on the trends in these parameters at different time points, 9, 10, 7, and 2 model-specific parameters were selected for the TAA rats, TAA mice, CCl4 mice, and BDL rats, respectively. These parameters identified differences among the time points in the four models, with high AUC accuracy, and the corresponding AUC values of these parameters were greater than those of the CPA in the TAA rat and mouse models (rats: 0.769-0.894 vs 0.64-0.799; mice: 0.87-0.93 vs 0.739-0.836) and similar to those of the CPA in the CCl4 mouse and BDL rat models. Similarly, the AUC values of the 11 shared parameters and model-specific parameters were greater than those of HYP in the TAA rat, TAA mouse, and CCl4 mouse models and were similar to those of HYP in the BDL rat models.
The automated evaluation system, combined with 11 shared parameters and model-specific parameters, could specifically, accurately, and quantitatively stage liver fibrosis in animal models.
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) method under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were acquired from a total of 41 projection images over a total angular range of ±20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper values of the relaxation parameter for zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were acquired in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range in the CDT system.
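The relaxed ART iteration whose parameters are tuned in this study can be sketched in its generic algebraic (Kaczmarz) form, where the relaxation parameter and initial guess appear explicitly. A toy linear system stands in for the CDT projection geometry, and the function name is illustrative:

```python
def art_solve(A, b, iterations=20, relax=0.4, x0=None):
    """ART (Kaczmarz) iteration with relaxation parameter `relax`:

        x <- x + relax * (b_i - <a_i, x>) / ||a_i||**2 * a_i,

    sweeping the rows of A once per iteration. x0 = None corresponds
    to the zero-image (ZI) initial guess; passing a back-projection
    of b would correspond to the BP initial guess.
    """
    n = len(A[0])
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iterations):
        for a_i, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0.0:
                continue
            r = relax * (b_i - sum(a * xj for a, xj in zip(a_i, x))) / norm2
            x = [xj + r * a for xj, a in zip(x, a_i)]
    return x
```

A relaxation value below 1 damps each row correction, trading convergence speed for noise suppression, which is why the optimal value differs between the ZI and BP starting points.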
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
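A common building block for such global sensitivity analysis is the Monte Carlo (Saltelli-type) estimator of first-order Sobol indices. A minimal sketch for a model with independent uniform inputs; the paper's functional principal component step and its specific estimator are not reproduced, and the function name is illustrative:

```python
import random

def sobol_first_order(model, n_params, n_samples=10000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y) for independent uniform [0, 1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / (n_samples - 1)
    S = []
    for i in range(n_params):
        # Re-evaluate on A with column i swapped in from B.
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        num = sum(yb * (yab - ya) for yb, yab, ya in zip(fB, fABi, fA)) / n_samples
        S.append(num / var)
    return S
```

For a dynamic model like the insulin-signalling pathway, `model` would return a scalar summary of a simulated trajectory (in the paper, a functional principal component score), and the indices rank which parameters drive that behaviour.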
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang
An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as user material subroutine (UMAT). Pure stretch experiments of a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of damage parameters is carried out and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predict the failure behavior in copper sheets compared to experimental data is reported as well.
NASA Astrophysics Data System (ADS)
Han, Xiao; Gao, Xiguang; Song, Yingdong
2017-10-01
An approach to identify the parameters of an interface friction model for ceramic matrix composites based on the stress-strain response was developed. The stress distribution of fibers in the interface slip region and intact region of the damaged composite was determined by adopting the interface friction model. The relations between the maximum strain and the secant moduli of the hysteresis loop on the one hand, and the interface shear stress and interface de-bonding stress on the other, were established with the method of symbolic-graphic combination. By comparing the experimental strain and secant moduli of the hysteresis loop with the computed values, the interface shear stress and interface de-bonding stress corresponding to the first cycle were identified. Substituting the identified parameters into the interface friction model, the stress-strain curves were predicted, and the predictions fit the experiments well. In addition, the influence of the number of data points on the identified values of the interface parameters was discussed, and the approach was compared with the method based on the area of the hysteresis loop.
Method for Calculating the Optical Diffuse Reflection Coefficient for the Ocular Fundus
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2016-07-01
We have developed a method for calculating the optical diffuse reflection coefficient for the ocular fundus, taking into account multiple scattering of light in its layers (retina, epithelium, choroid) and multiple reflection of light between layers. The method is based on the formulas for optical "combination" of the layers of the medium, in which the optical parameters of the layers (absorption and scattering coefficients) are replaced by some effective values, different for cases of directional and diffuse illumination of the layer. Coefficients relating the effective optical parameters of the layers and the actual values were established based on the results of a Monte Carlo numerical simulation of radiation transport in the medium. We estimate the uncertainties in retrieval of the structural and morphological parameters for the fundus from its diffuse reflectance spectrum using our method. We show that the simulated spectra correspond to the experimental data and that the estimates of the fundus parameters obtained as a result of solving the inverse problem are reasonable.
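The optical "combination" of two layers can be sketched with the standard adding formulas, which sum the geometric series of inter-layer reflections. The paper's effective-parameter corrections for directional versus diffuse illumination are not reproduced here; this shows only the generic layer-combination step.

```python
def combine_layers(r1, t1, r2, t2):
    """Diffuse reflectance and transmittance of two stacked layers:
    the denominator sums the geometric series of light bouncing
    between the layers (standard 'adding' formulas)."""
    denom = 1.0 - r1 * r2
    r12 = r1 + t1 * t1 * r2 / denom   # direct reflection + round trips
    t12 = t1 * t2 / denom
    return r12, t12

# A partially reflecting layer stacked on a deeper layer: the combined
# reflectance exceeds that of the top layer alone.
r_two, t_two = combine_layers(0.1, 0.8, 0.3, 0.6)
```

A sanity check: a perfectly transparent top layer (r1 = 0, t1 = 1) leaves the lower layer's coefficients unchanged.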
Welch, Stephen M.; White, Jeffrey W.; Thorp, Kelly R.; Bello, Nora M.
2018-01-01
Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSPs) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are more conventional traits. The goal of this study was to investigate the estimation of parameters controlling maize anthesis date with the CERES-Maize model, based on 5,266 maize lines from 11 plantings at locations across the eastern United States. High-performance computing was used to develop a database of 356 million simulated anthesis dates in response to four CERES-Maize model parameters. Although the resulting estimates showed high predictive value (R2 = 0.94), three issues presented serious challenges for the use of GSPs as traits. First (expressivity), the model was unable to express the observed data for 168 to 3,339 lines (depending on the combination of site-years), many of which ended up sharing the same parameter value irrespective of genetics. Second, for 2,254 lines, the model reproduced the data, but multiple parameter sets were equally effective (equifinality). Third, parameter values were highly dependent (p<10−6919) on the sets of environments used to estimate them (instability), calling into question the assumption that they represent fundamental genetic traits. The issues of expressivity, equifinality, and instability must be addressed before the genetic mapping of GSPs becomes a robust means to help solve the genotype-to-phenotype problem in crops. PMID:29672629
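The equifinality issue can be made concrete with a toy model whose output depends on its parameters only through their sum, so distinct parameter sets reproduce an observation equally well. CERES-Maize is far richer than this; the sketch only shows why GSP estimates can be non-unique.

```python
# Illustrative equifinality: a hypothetical "model" of anthesis day that
# depends on two parameters only through p1 + p2, so many parameter sets
# fit the observation exactly.

def toy_anthesis_day(p1, p2):
    """Hypothetical simulated anthesis day (day of year)."""
    return 60 + 2.0 * (p1 + p2)

observed = 70.0
grid = [(p1, p2) for p1 in range(6) for p2 in range(6)]
best = min(abs(toy_anthesis_day(p1, p2) - observed) for p1, p2 in grid)
equifinal = [(p1, p2) for p1, p2 in grid
             if abs(toy_anthesis_day(p1, p2) - observed) == best]
# every pair with p1 + p2 == 5 matches the observation equally well
```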
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description exists. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone, and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are taken as the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automated algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
Plasma Charge Current for Controlling and Monitoring Electron Beam Welding with Beam Oscillation
Trushnikov, Dmitriy; Belenkiy, Vladimir; Shchavlev, Valeriy; Piskunov, Anatoliy; Abdullin, Aleksandr; Mladenov, Georgy
2012-01-01
Electron beam welding (EBW) shows certain problems with the control of focus regime. The electron beam focus can be controlled in electron-beam welding based on the parameters of a secondary signal. In this case, the parameters like secondary emissions and focus coil current have extreme relationships. There are two values of focus coil current which provide equal value signal parameters. Therefore, adaptive systems of electron beam focus control use low-frequency scanning of focus, which substantially limits the operation speed of these systems and has a negative effect on weld joint quality. The purpose of this study is to develop a method for operational control of the electron beam focus during welding in the deep penetration mode. The method uses the plasma charge current signal as an additional informational parameter. This parameter allows identification of the electron beam focus regime in electron-beam welding without application of additional low-frequency scanning of focus. It can be used for working out operational electron beam control methods focusing exactly on the welding. In addition, use of this parameter allows one to observe the shape of the keyhole during the welding process. PMID:23242276
Correlations among Stress Parameters, Meat and Carcass Quality Parameters in Pigs
Dokmanovic, Marija; Baltic, Milan Z.; Duric, Jelena; Ivanovic, Jelena; Popovic, Ljuba; Todorovic, Milica; Markovic, Radmila; Pantic, Srdan
2015-01-01
Relationships among different stress parameters (lairage time and blood level of lactate and cortisol), meat quality parameters (initial and ultimate pH value, temperature, drip loss, sensory and instrumental colour, marbling) and carcass quality parameters (degree of rigor mortis and skin damages, hot carcass weight, carcass fat thickness, meatiness) were determined in pigs (n = 100) using Pearson correlations. After longer lairage, blood lactate (p<0.05) and degree of injuries (p<0.001) increased, meat became darker (p<0.001), while drip loss decreased (p<0.05). Higher lactate was associated with lower initial pH value (p<0.01), higher temperature (p<0.001) and skin blemishes score (p<0.05) and more developed rigor mortis (p<0.05), suggesting that lactate could be a predictor of both meat quality and the level of preslaughter stress. Cortisol affected carcass quality, so higher levels of cortisol were associated with increased hot carcass weight, carcass fat thickness on the back and at the sacrum and marbling, but also with decreased meatiness. The most important meat quality parameters (pH and temperature after 60 minutes) deteriorated when blood lactate concentration was above 12 mmol/L. PMID:25656214
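The analysis above rests on Pearson correlations between paired measurements (for example, blood lactate against initial pH). A minimal stdlib implementation on made-up numbers illustrating the reported negative lactate-pH relationship:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative values only (not the study's data): higher blood lactate
# pairing with lower initial pH, as the abstract reports.
lactate = [5.0, 8.0, 10.0, 12.0, 15.0]   # mmol/L
ph_init = [6.4, 6.2, 6.1, 5.9, 5.8]
r = pearson_r(lactate, ph_init)           # strongly negative
```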
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western-shore tributary of the Chesapeake Bay. Numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be used satisfactorily to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing some important questions, such as the uniqueness of the parameter estimation and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors that cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
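The inverse-model loop can be sketched on a drastically simplified forward model: minimize the misfit between simulated and observed concentrations by descending on a single parameter. The real work uses the variational (adjoint) technique on an eight-state eutrophication model; here a finite difference stands in for the adjoint gradient, and the decay model is a toy assumption.

```python
# Toy inverse estimation: recover a decay-rate parameter from synthetic
# "observations" by gradient descent on the squared misfit.

def simulate(decay, c0=10.0, steps=5, dt=1.0):
    """Forward model: first-order decay integrated with explicit Euler."""
    c, out = c0, []
    for _ in range(steps):
        c = c - decay * c * dt
        out.append(c)
    return out

obs = simulate(0.2)               # synthetic observations, true decay = 0.2
k = 0.05                          # first guess
for _ in range(300):
    eps = 1e-6                    # finite-difference gradient (adjoint stand-in)
    j0 = sum((s - o) ** 2 for s, o in zip(simulate(k), obs))
    j1 = sum((s - o) ** 2 for s, o in zip(simulate(k + eps), obs))
    k -= 1e-4 * (j1 - j0) / eps   # descend on the misfit
```

The estimate k converges to the value that generated the observations, which is the uniqueness question the abstract raises in miniature: with richer models, several parameters may trade off against each other.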
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are placed on nuclear power plants (NPPs), including on their economy. Under these conditions, improving the quality of an NPP means, in particular, the need to choose reasonable values for the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the point at which the parameters are chosen. This article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is that the results are obtained as functions of a complex parameter combining the external economic and operating parameters, one that remains relatively stable under a changing economic environment. The article presents the results of optimizing, according to this technique, the minimum temperature driving forces in the surface heaters of the heat-regeneration system of a K-1200-6.8/50 steam turbine plant. For the optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building were chosen, which, in the authors' opinion, have certain advantages over other types of heaters. The optimality criterion was the change in annual reduced costs for the NPP compared with the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization task was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.
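The alternating-variable descent method mentioned above is coordinate descent: optimize one controlled parameter at a time, holding the others fixed, and cycle with a shrinking search step until the cost criterion stops improving. The cost function below is a toy quadratic stand-in for the annual-reduced-cost criterion.

```python
# Generic alternating-variable (coordinate) descent sketch.  The quadratic
# cost and its minimum at (3, 7) are illustrative assumptions.

def coordinate_descent(cost, x, step=0.5, sweeps=60):
    """Minimise cost(x) one coordinate at a time with a shrinking step."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            for delta in (step, -step):
                # march this coordinate while the move strictly improves cost
                while cost(x[:i] + [x[i] + delta] + x[i + 1:]) < cost(x):
                    x[i] += delta
        step *= 0.7                      # refine the search step each sweep
    return x

# Toy stand-in for the reduced-cost criterion (coupled in both variables):
cost = lambda v: (v[0] - 3.0) ** 2 + (v[1] - 7.0) ** 2 \
                 + 0.5 * (v[0] - 3.0) * (v[1] - 7.0)
opt = coordinate_descent(cost, [0.0, 0.0])
```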
Natural parameter values for generalized gene adjacency.
Yang, Zhenyu; Sankoff, David
2010-09-01
Given the gene orders in two modern genomes, it may be difficult to decide if some genes are close enough in both genomes to infer some ancestral proximity or some functional relationship. Current methods all depend on arbitrary parameters. We explore a class of gene proximity criteria and find two kinds of natural values for their parameters. One kind has to do with the parameter value where the expected information contained in two genomes about each other is maximized. The other kind of natural value has to do with parameter values beyond which all genes are clustered. We analyze these using combinatorial and probabilistic arguments as well as simulations.
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Markstrom, Steven; Hay, Lauren
2015-04-01
The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive, and consistent hydrologic model development and to facilitate the application of simulations on the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are obtained from automated calibration. However, calibration can be improved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort of establishing a robust NHM.
Automatic management system for dose parameters in interventional radiology and cardiology.
Ten, J I; Fernandez, J M; Vaño, E
2011-09-01
The purpose of this work was to develop an automatic management system to archive and analyse the major study parameters and patient doses for fluoroscopy-guided procedures performed on cardiology and interventional radiology systems. The X-ray systems used for this trial are able to export, at the end of each procedure and via e-mail, the technical parameters of the study and the patient dose values. An application was developed to query and retrieve from a mail server all study reports sent by the imaging modality and store them in a Microsoft SQL Server database. The results from 3538 interventional study reports generated by 7 interventional systems were processed. For some technical parameters and patient doses, alarms were added to receive malfunction alerts so that appropriate corrective actions could be taken immediately.
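The archiving step can be sketched as parse-then-store: extract fields from a dose-report message body and insert them into a SQL table, with an alarm query over the stored values. The report format, field names, and threshold below are invented (the real reports are modality-specific), and sqlite3 stands in for the Microsoft SQL Server used in the work.

```python
import sqlite3

def parse_report(body):
    """Parse 'key=value' lines from a hypothetical dose-report e-mail body."""
    fields = dict(line.split("=", 1) for line in body.strip().splitlines())
    return fields["study_id"], float(fields["dap_gycm2"]), float(fields["fluoro_min"])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doses (study_id TEXT, dap_gycm2 REAL, fluoro_min REAL)")

# One retrieved report (invented values: dose-area product, fluoroscopy time)
body = "study_id=IR-0042\ndap_gycm2=85.3\nfluoro_min=12.5"
conn.execute("INSERT INTO doses VALUES (?, ?, ?)", parse_report(body))

# Alarm check: flag studies over an assumed dose-area-product threshold
high = conn.execute("SELECT study_id FROM doses WHERE dap_gycm2 > 50").fetchall()
```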
The predicted influence of climate change on lesser prairie-chicken reproductive parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.
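The simulation step described above can be sketched as pushing a predicted future value of the best-supported weather variable through a reproductive parameter's linear regression many times, with noise representing regression uncertainty. The coefficients, residual spread, and predicted winter temperature below are made-up stand-ins for the study's fitted values.

```python
import random
import statistics

random.seed(1)

# Assumed regression fit for a reproductive parameter (e.g. nest survival)
# against winter temperature -- placeholder values, not the paper's.
INTERCEPT, SLOPE, RESID_SD = 0.60, -0.03, 0.05
predicted_winter_temp = 4.0       # assumed future winter temperature value

draws = [INTERCEPT + SLOPE * predicted_winter_temp + random.gauss(0.0, RESID_SD)
         for _ in range(1000)]    # 1,000 simulations, as in the abstract
nest_survival_2050 = statistics.mean(draws)
```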
Development of reference equations for spirometry in Japanese children aged 6-18 years.
Takase, Masato; Sakata, Hiroshi; Shikada, Masahiro; Tatara, Katsuyoshi; Fukushima, Takayoshi; Miyakawa, Tomoo
2013-01-01
Spirometry is the most widely used pulmonary function test, and the measured values of spirometric parameters need to be evaluated against reference values predicted for the corresponding race, sex, age, and height. However, none of the existing reference equations for Japanese children covers the entire age range of 6-18 years. The Japanese Society of Pediatric Pulmonology organized a working group in 2006 to develop a new set of national standard reference equations for commonly used spirometric parameters that are applicable throughout the age range of 6-18 years. Quality-assured spirometric data were collected from 2006 to 2008 at 14 institutions in Japan. We applied multiple regression analysis, using age in years (A), square of age (A(2)), height in meters (H), square of height (H(2)), and the product of age and height (AH) as explanatory variables to predict forced vital capacity (FVC), forced expiratory volume in 1 sec (FEV(1)), peak expiratory flow (PEF), forced expiratory flow between 25% and 75% of the FVC (FEF(25-75%)), and instantaneous forced expiratory flow when 50% (FEF(50%)) or 75% (FEF(75%)) of the FVC has been expired. Finally, 1,296 tests (674 boys, 622 girls) formed the reference data set. Distributions of the percent predicted values did not differ by age, confirming the excellent fit of the prediction equations throughout the entire age range from 6 to 18 years. Cut-off values (around the 5th percentile) for the parameters were also determined. We recommend the use of this new set of prediction equations, together with the suggested cut-off values, for assessment of spirometry in Japanese children and adolescents. Copyright © 2012 Wiley Periodicals, Inc.
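The reference equations have the form FVC = b0 + b1·A + b2·A² + b3·H + b4·H² + b5·A·H, with A in years and H in metres. The coefficients below are invented placeholders (the paper's fitted values are not reproduced here); the sketch only shows how a measurement is assessed as percent-predicted against such an equation with a cut-off.

```python
def predict_fvc(age, height, b):
    """Predicted FVC (L) from the A, A^2, H, H^2, AH regression form."""
    b0, b1, b2, b3, b4, b5 = b
    return b0 + b1*age + b2*age**2 + b3*height + b4*height**2 + b5*age*height

COEFFS = (-3.0, 0.02, 0.001, 1.5, 1.0, 0.03)   # hypothetical coefficients
measured_fvc = 2.4                              # litres, hypothetical child

pred = predict_fvc(12, 1.50, COEFFS)
percent_predicted = 100.0 * measured_fvc / pred
low = percent_predicted < 80.0                  # flag if below an assumed cut-off
```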
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using experimental results for FSW of AA7075, AA2524, AA6061, and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and the maximum grip of the tool on the plasticized workpiece material. The estimate of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified against experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin and thereby the load-bearing ability of the tool pin. The load-bearing-ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of commercially pure tungsten during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs take tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed, and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress, and bending stress are the outputs of the ANN models. These output parameters are selected because they define the thermomechanical conditions around the tool during FSW.
The developed ANN models are used to understand the effect of the various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor over a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW; the strain and strain-rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque, and peak temperature. The material velocity fields are computed by adapting an analytical solution for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated from a non-dimensional correlation with a dimensionless heat input, which is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict the corresponding output parameters successfully.
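One simplified torque estimate of the kind mentioned above can be sketched as the classic uniform-shear integral over a flat circular shoulder, M = ∫ τ·r dA = (2/3)·π·τ·R³. This assumes full sticking under the shoulder and neglects the pin contribution; whether it matches the thesis's exact analytical function is an assumption here, and the material values are illustrative.

```python
import math

def shoulder_torque(tau, radius):
    """Sticking torque (N*m) on a flat circular shoulder of radius R (m)
    with uniform shear strength tau (Pa): M = (2/3) * pi * tau * R^3."""
    return (2.0 / 3.0) * math.pi * tau * radius ** 3

# e.g. an assumed 20 MPa flow shear strength and a 10 mm shoulder radius
torque = shoulder_torque(20e6, 0.010)   # on the order of tens of N*m
```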
Observations of Ion Diffusion Regions in the Geomagnetic Tail
NASA Astrophysics Data System (ADS)
Rogers, A. J.; Farrugia, C. J.; Torbert, R. B.; Argall, M. R.; Strangeway, R. J.; Ergun, R.
2017-12-01
We present an analysis of two ion diffusion regions (IDRs) in the geomagnetic tail, as observed by the Magnetospheric Multiscale (MMS) mission. The analysis of each event centers on parameters commonly associated with IDRs, such as enhanced electric field magnitude, the guiding-center expansion parameter, and ion velocity. Characteristic values for these parameters, as well as other common attributes of IDRs, are determined and used to develop a search algorithm that automates the identification of possible IDRs for closer inspection. Preliminary results of applying this algorithm to in situ MMS observations are also presented.
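The kind of threshold screening described above can be sketched as flagging intervals where several IDR signatures exceed characteristic values at once. The thresholds and sample values below are illustrative, not the study's calibrated ones.

```python
# Hypothetical IDR candidate screening over a time series of
# (time, |E| in mV/m, ion speed in km/s) samples.

E_THRESH = 10.0    # assumed electric field magnitude threshold
V_THRESH = 400.0   # assumed ion speed threshold

def flag_idr_candidates(samples):
    """Return times where both signatures exceed their thresholds."""
    return [t for t, e_mag, v_ion in samples
            if e_mag > E_THRESH and v_ion > V_THRESH]

samples = [(0, 2.0, 100.0), (1, 12.0, 450.0), (2, 15.0, 300.0), (3, 11.0, 500.0)]
candidates = flag_idr_candidates(samples)   # only intervals passing both tests
```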
Reanalysis of 24 Nearby Open Clusters using Gaia data
NASA Astrophysics Data System (ADS)
Yen, Steffi X.; Reffert, Sabine; Röser, Siegfried; Schilbach, Elena; Kharchenko, Nina V.; Piskunov, Anatoly E.
2018-04-01
We have developed a fully automated cluster characterization pipeline, which simultaneously determines cluster membership and fits the fundamental cluster parameters: distance, reddening, and age. We present results for 24 established clusters and compare them to literature values. Given the large amount of stellar data for clusters available from Gaia DR2 in 2018, this pipeline will be beneficial to analyzing the parameters of open clusters in our Galaxy.
An Evaluation of Compressed Work Schedules and Their Impact on Electricity Use
2010-03-01
problems by introducing uncertainty to the known parameters of a given process (Sobol, 1975). The MCS output represents approximate values of the ... process within the observed parameters; the output is provided within a statistical distribution of likely outcomes (Sobol, 1975). The Monte Carlo method is appropriate for "any process whose development is affected by random factors" (Sobol, 1975:10). MCS introduces
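The Monte Carlo step this passage describes can be shown in miniature: perturb the known inputs of a process with random draws and read the output off as a distribution. The electricity-use "process" below is a made-up linear model, purely to show the mechanics.

```python
import random
import statistics

random.seed(7)

def weekly_kwh(days_on, kwh_per_day):
    """Toy process model: weekly electricity use of a facility."""
    return days_on * kwh_per_day

# Uncertain inputs: compressed (4-day) vs normal (5-day) work week, and a
# noisy daily load.  Both distributions are illustrative assumptions.
runs = [weekly_kwh(random.choice([4, 5]),
                   random.gauss(120.0, 10.0))
        for _ in range(5000)]
mean_kwh = statistics.mean(runs)   # the distribution's centre; spread is
                                   # available from the same sample
```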
Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.
2016-01-01
Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). 
Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105
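The two-level structure used above can be sketched generically: an outer search over a time-invariant model parameter, with an inner problem solved afresh for each outer guess. Both objective functions below are toy stand-ins for the paper's terms (passive muscle forces, reserve moments, contact-force tracking); only the nesting mirrors the described procedure.

```python
def inner_solve(p):
    """Inner level: best time-varying 'activation' a for fixed parameter p.
    Here the toy problem min_a (a - p/2)^2 + a^2 has the closed form a = p/4."""
    a = p / 4.0
    cost = (a - p / 2.0) ** 2 + a ** 2
    return a, cost

def outer_search(candidates):
    """Outer level: pick p minimising inner cost plus a parameter penalty
    (stand-in for penalising model parameter value changes)."""
    def total(p):
        _, inner_cost = inner_solve(p)
        return inner_cost + (p - 1.0) ** 2
    return min(candidates, key=total)

p_best = outer_search([k / 10.0 for k in range(0, 31)])   # grid 0.0 ... 3.0
```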
Chaos control of Hastings-Powell model by combining chaotic motions.
Danca, Marius-F; Chattopadhyay, Joydev
2016-04-01
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equalizes the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite to "chaos"), then by switching the parameter value in the HP system within two values, which generate chaotic motions, the PS algorithm can approximate a stable cycle so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
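The Parameter Switching idea can be illustrated on a deliberately simple ODE, dx/dt = p − x: integrating while alternating p between two values drives the state toward the attractor of the averaged parameter (here the fixed point x* = p_mean). The HP food-chain system is three-dimensional and chaotic; this only shows the switching scheme itself.

```python
def integrate_with_switching(p_values, x0=0.0, dt=0.01, steps=20000):
    """Integrate dx/dt = p - x while switching p periodically per step."""
    x = x0
    for n in range(steps):
        p = p_values[n % len(p_values)]   # periodic switching rule
        x += dt * (p - x)                 # explicit Euler step
    return x

p_values = [2.0, 6.0]                     # switched values; average is 4.0
x_final = integrate_with_switching(p_values)
# x settles near the attractor of the averaged parameter, x* = 4.0
```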
Bartolino, James R.
2007-01-01
A numerical flow model of the Spokane Valley-Rathdrum Prairie aquifer currently (2007) being developed requires the input of values for areally-distributed recharge, a parameter that is often the most uncertain component of water budgets and ground-water flow models because it is virtually impossible to measure over large areas. Data from six active weather stations in and near the study area were used in four recharge-calculation techniques or approaches: the Langbein method, in which recharge is estimated on the basis of empirical data from other basins; a method developed by the U.S. Department of Agriculture (USDA), in which crop consumptive use and effective precipitation are first calculated and then subtracted from actual precipitation to yield an estimate of recharge; an approach developed as part of the Eastern Snake Plain Aquifer Model (ESPAM) Enhancement Project, in which recharge is calculated on the basis of precipitation-recharge relations from other basins; and an approach in which reference evapotranspiration is calculated by the Food and Agriculture Organization (FAO) Penman-Monteith equation, crop consumptive use is determined (using a single- or dual-coefficient approach), and recharge is calculated. Annual recharge calculated by the Langbein method for the six weather stations was 4 percent of annual mean precipitation, yielding the lowest values of the methods discussed in this report; however, the Langbein method can only be applied to annual time periods. Mean monthly recharge calculated by the USDA method ranged from 53 to 73 percent of mean monthly precipitation. Mean annual recharge ranged from 64 to 69 percent of mean annual precipitation. Separate mean monthly recharge calculations were made with the ESPAM method using initial input parameters to represent thin-soil, thick-soil, and lava-rock conditions. The lava-rock parameters yielded the highest recharge values and the thick-soil parameters the lowest.
For thin-soil parameters, calculated monthly recharge ranged from 10 to 29 percent of mean monthly precipitation and annual recharge ranged from 16 to 23 percent of mean annual precipitation. For thick-soil parameters, calculated monthly recharge ranged from 1 to 5 percent of mean monthly precipitation and mean annual recharge ranged from 2 to 4 percent of mean annual precipitation. For lava-rock parameters, calculated mean monthly recharge ranged from 37 to 57 percent of mean monthly precipitation and mean annual recharge ranged from 45 to 52 percent of mean annual precipitation. Single-coefficient (crop coefficient) FAO Penman-Monteith mean monthly recharge values were calculated for Spokane Weather Service Office (WSO) Airport, the only station for which the necessary meteorological data were available. Grass-referenced values of mean monthly recharge ranged from 0 to 81 percent of mean monthly precipitation and mean annual recharge was 21 percent of mean annual precipitation; alfalfa-referenced values of mean monthly recharge ranged from 0 to 85 percent of mean monthly precipitation and mean annual recharge was 24 percent of mean annual precipitation. Single-coefficient FAO Penman-Monteith calculations yielded a mean monthly recharge of zero during the eight warmest and driest months of the year (March-October). In order to refine the mean monthly recharge estimates, dual-coefficient (basal crop and soil evaporation coefficients) FAO Penman-Monteith dual-crop evapotranspiration and deep-percolation calculations were applied to daily values from the Spokane WSO Airport for January 1990 through December 2005. The resultant monthly totals display a temporal variability that is absent from the mean monthly values and demonstrate that the daily amount and timing of precipitation dramatically affect calculated recharge. 
The dual-coefficient FAO Penman-Monteith calculations were made for the remaining five stations using wind-speed values for Spokane WSO Airport and other assumptions regarding
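Two of the simpler approaches reduce to one-line computations. A minimal sketch follows; the function names and the non-negativity clamp are assumptions of ours rather than details from the report:

```python
def langbein_recharge(annual_precip_mm, fraction=0.04):
    # Langbein-style estimate: recharge as a fixed fraction of annual
    # precipitation (the report found ~4% for the six stations).
    return annual_precip_mm * fraction

def usda_recharge(precip_mm, consumptive_use_mm):
    # USDA-style estimate: crop consumptive use (supplied by effective
    # precipitation) subtracted from actual precipitation, floored at zero.
    return max(precip_mm - consumptive_use_mm, 0.0)
```

The FAO Penman-Monteith approaches are considerably more involved, since reference evapotranspiration and crop coefficients must be computed from daily meteorological data first.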
Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C
2013-03-01
To provide accurate input parameters to the large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates BC emission index for a given engine/airframe combination using the engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve the global climate simulations that currently use a single fleet average value for all airplanes. An algorithm to estimate the cruise condition black carbon emission index for commercial aircraft engines was developed. Using the ICAO certification data, the algorithm can evaluate the black carbon emission at given cruise altitude and speed.
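The paper's algorithm itself is not reproduced here, but a commonly cited first-order (FOA3-style) correlation between the ICAO smoke number and black-carbon mass concentration gives a flavor of how certification data can feed such an estimate; treat the correlation's applicability to this paper as an assumption:

```python
def bc_concentration_mg_m3(smoke_number):
    """First-order (FOA3-style) correlation between ICAO smoke number
    and black-carbon mass concentration in exhaust, valid roughly for
    SN <= 30. This is a generic published approximation, not the
    cruise-condition algorithm developed in the paper."""
    return 0.0694 * smoke_number ** 1.234
```

The paper's contribution is to go beyond such ground-level certification correlations by correcting for cruise power, altitude, and Mach number on a per-engine basis.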
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are taken for these parameters has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters of the BIOME-BGC model mostly presented temporal and spatial heterogeneity to different degrees, which varied with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity.
In addition, the temporal heterogeneity of the optimal values of the model sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical application. The conclusion could help to deeply understand the parameters and the optimal values of the ecological process models, and provide a way or reference for obtaining the reasonable values of parameters in models application.
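The calibration step, fitting sensitive parameters by simulated annealing against flux data, can be sketched generically. The objective below is a toy one-parameter misfit, and all names are illustrative:

```python
import math
import random

def anneal(objective, x0, step=0.1, t0=1.0, cooling=0.995, iters=4000, seed=1):
    """Generic simulated-annealing minimizer: propose a random step,
    always accept improvements, accept worsenings with a probability
    that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling
    return best

# Toy "flux data" objective: squared error of a one-parameter model
# against an observed value of 2.5.
obs = 2.5
best_p = anneal(lambda p: (p - obs) ** 2, x0=0.0)
```

In the study, the objective compares monthly model output against eddy-flux observations at each site, and one such optimization is run per site and month to expose the heterogeneity of the optima.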
NASA Astrophysics Data System (ADS)
Perdana, B. P.; Setiawan, Y.; Prasetyo, L. B.
2018-02-01
Recently, highway development has been required as a link between regions to support their economic development. Although the availability of highways brings positive impacts, it also has negative impacts, especially related to changes in vegetated land. This study aims to determine the change of vegetation coverage in the Jagorawi corridor Jakarta-Bogor over 37 years, and to analyze landscape patterns in the corridor based on the distance factor from Jakarta to Bogor. In this study, we used a long time series of Landsat images taken by Landsat 2 MSS (1978), Landsat 5 TM (1988, 1995, and 2005), and Landsat 8 OLI/TIRS (2015). Analysis of landscape metrics was conducted through a patch analysis approach to determine the change of landscape patterns in the Jagorawi corridor Jakarta-Bogor. The landscape metric parameters used are Number of Patches (NumP), Mean Patch Size (MPS), Mean Shape Index (MSI), and Edge Density (ED). These parameters provide information on the structural elements of the landscape and their composition and spatial distribution in the corridor. The results indicated that vegetation coverage in the Jagorawi corridor Jakarta-Bogor decreased by about 48% over 35 years. Moreover, the NumP value increased and the MPS value decreased, indicating a higher level of fragmentation as patches become smaller. Meanwhile, the increase in the ED parameter indicates that vegetated land is damaged annually. The MSI parameter shows a decrease every year, which indicates degradation of vegetated land. This suggests that the declining value of MSI will have an impact on land degradation.
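Two of the listed metrics, NumP and MPS, reduce to counting connected components on a binary vegetation mask. A minimal sketch (4-connectivity, patch sizes in cell units, illustrative grid):

```python
import numpy as np

def patch_metrics(mask):
    """Number of Patches (NumP) and Mean Patch Size (MPS, in cells) on a
    binary vegetation mask, via 4-connected flood fill."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    sizes = []
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not seen[i, j]:
                stack, size = [(i, j)], 0   # new patch: flood-fill it
                seen[i, j] = True
                while stack:
                    r, c = stack.pop()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                           and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                sizes.append(size)
    nump = len(sizes)
    mps = sum(sizes) / nump if nump else 0.0
    return nump, mps

grid = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
nump, mps = patch_metrics(grid)
```

MSI and ED additionally require patch perimeters, but follow the same per-patch bookkeeping.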
Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation
NASA Astrophysics Data System (ADS)
Lychak, Oleh V.; Holyns'kiy, Ivan S.
2016-03-01
The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams’ series parameters obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams’ series, so that the parameters are derived with minimal errors, is also proposed. The method was used for the evaluation of the Williams’ parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen.
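The standard-deviation estimate for series coefficients fitted by least squares is conventionally obtained from the residual-based covariance. A generic sketch follows; the design matrix A would hold the truncated Williams' series terms evaluated at the DIC measurement points (this is the textbook estimator, not necessarily the paper's exact method):

```python
import numpy as np

def fit_with_errors(A, y):
    """Least-squares fit y ~ A @ c and the standard deviation of each
    coefficient from the residual-based covariance estimate."""
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    n, m = A.shape
    resid = y - A @ c
    sigma2 = resid @ resid / (n - m)           # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)      # coefficient covariance
    return c, np.sqrt(np.diag(cov))
```

The trade-off the paper studies appears here directly: adding series terms (columns of A) reduces the residual but inflates the covariance, so some truncation order minimizes the coefficient errors.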
The growth of the tearing mode - Boundary and scaling effects
NASA Technical Reports Server (NTRS)
Steinolfson, R. S.; Van Hoven, G.
1983-01-01
A numerical model of resistive magnetic tearing is developed in order to verify and relate the results of the principal approximations used in analytic analyses and to investigate the solutions and their growth-rate scalings over a large range of primary parameters which include parametric values applicable to the solar atmosphere. The computations cover the linear behavior for a variety of boundary conditions, emphasizing effects which differentiate magnetic tearing in astrophysical situations from that in laboratory devices. Eigenfunction profiles for long and short wavelengths are computed and the applicability of the 'constant psi' approximation is investigated. The growth rate is computed for values of the magnetic Reynolds number up to a trillion and of the dimensionless wavelength parameter down to 0.001. The analysis predicts significant effects due to differing values of the magnetic Reynolds number.
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that will also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using just Dakota.
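The pattern described, a misfit objective wrapped around a model run and handed to an optimizer, can be sketched as follows. Golden-section search stands in for the Dakota methods and is adequate in the single-minimum case the abstract describes; all names are illustrative:

```python
def objective(params, run_model, observations):
    """Sum-of-squares misfit between simulated and observed values;
    this is the quantity a Dakota-style optimizer would minimize."""
    sim = run_model(params)
    return sum((s - o) ** 2 for s, o in zip(sim, observations))

def golden_section(f, a, b, tol=1e-6):
    """Derivative-free 1-D minimizer for a unimodal objective."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2
```

When the misfit surface has several minima, as the abstract reports for some layer combinations, a global method (here, Dakota's genetic optimization) replaces the local search.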
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
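Stripped of the sample-size and scheduling decisions, the budget-constrained selection at the heart of such an IP model is a 0/1 knapsack. A minimal sketch, with integer costs assumed for the dynamic program:

```python
def best_portfolio(projects, budget):
    """0/1 knapsack sketch of the portfolio problem: choose phase-3
    projects maximizing total expected NPV under a budget constraint.
    projects is a list of (cost, enpv) pairs with integer costs."""
    dp = [0.0] * (budget + 1)
    for cost, enpv in projects:
        # Iterate budgets downward so each project is taken at most once.
        for b in range(budget, cost - 1, -1):
            dp[b] = max(dp[b], dp[b - cost] + enpv)
    return dp[budget]
```

The paper's full model additionally makes sample size a decision variable (which changes both a trial's cost and its success probability, hence its eNPV) and extends to a stochastic IP when the pipeline of available drugs is uncertain.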
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables.
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
Prediction of quantitative intrathoracic fluid volume to diagnose pulmonary oedema using LabVIEW.
Urooj, Shabana; Khan, M; Ansari, A Q; Lay-Ekuakille, Aimé; Salhan, Ashok K
2012-01-01
Pulmonary oedema is a life-threatening disease that requires special attention in the area of research and clinical diagnosis. Computer-based techniques are rarely used to quantify the intrathoracic fluid volume (IFV) for diagnostic purposes. This paper discusses a software program developed to detect and diagnose pulmonary oedema using LabVIEW. The software runs on anthropometric dimensions and physiological parameters, mainly transthoracic electrical impedance (TEI). This technique is accurate and faster than existing manual techniques. The LabVIEW software was used to compute the parameters required to quantify IFV. An equation relating per cent control and IFV was obtained. The results of predicted TEI and measured TEI were compared with previously reported data to validate the developed program. It was found that the predicted values of TEI obtained from the computer-based technique were much closer to the measured values of TEI. Six new subjects were enrolled to measure and predict transthoracic impedance and hence to quantify IFV. A similar difference was also observed in the measured and predicted values of TEI for the new subjects.
Kinetics of color development of peanuts during dry roasting using a batch roaster
USDA-ARS?s Scientific Manuscript database
The kinetics of color development during peanut roasting were investigated at roasting temperatures from 149 to 204 °C which produced Hunter L color values of 25 to 65. Preliminary and equivalent roasting trials were conducted using a batch roaster simulating the parameters of an industrial continuo...
Bukhari, Mahwish; Awan, M. Ali; Qazi, Ishtiaq A.; Baig, M. Anwar
2012-01-01
This paper illustrates systematic development of a convenient analytical method for the determination of chromium and cadmium in tannery wastewater using laser-induced breakdown spectroscopy (LIBS). A new approach was developed by which liquid was converted into solid phase sample surface using absorption paper for subsequent LIBS analysis. The optimized values of LIBS parameters were 146.7 mJ for chromium and 89.5 mJ for cadmium (laser pulse energy), 4.5 μs (delay time), 70 mm (lens to sample surface distance), and 7 mm (light collection system to sample surface distance). Optimized values of LIBS parameters demonstrated strong spectrum lines for each metal keeping the background noise at minimum level. The new method of preparing metal standards on absorption papers exhibited calibration curves with good linearity with correlation coefficients, R2 in the range of 0.992 to 0.998. The developed method was tested on real tannery wastewater samples for determination of chromium and cadmium. PMID:22567570
Rigor mortis development in turkey breast muscle and the effect of electrical stunning.
Alvarado, C Z; Sams, A R
2000-11-01
Rigor mortis development in turkey breast muscle and the effect of electrical stunning on this process are not well characterized. Some electrical stunning procedures have been known to inhibit postmortem (PM) biochemical reactions, thereby delaying the onset of rigor mortis in broilers. Therefore, this study was designed to characterize rigor mortis development in stunned and unstunned turkeys. A total of 154 turkey toms in two trials were conventionally processed at 20 to 22 wk of age. Turkeys were either stunned with a pulsed direct current (500 Hz, 50% duty cycle) at 35 mA (40 V) in a saline bath for 12 seconds or left unstunned as controls. At 15 min and 1, 2, 4, 8, 12, and 24 h PM, pectoralis samples were collected to determine pH, R-value, L* value, sarcomere length, and shear value. In Trial 1, the samples obtained for pH, R-value, and sarcomere length were divided into surface and interior samples. There were no significant differences between the surface and interior samples among any parameters measured. Muscle pH significantly decreased over time in stunned and unstunned birds through 2 h PM. The R-values increased to 8 h PM in unstunned birds and 24 h PM in stunned birds. The L* values increased over time, with no significant differences after 1 h PM for the controls and 2 h PM for the stunned birds. Sarcomere length increased through 2 h PM in the controls and 12 h PM in the stunned fillets. Cooked meat shear values decreased through the 1 h PM deboning time in the control fillets and 2 h PM in the stunned fillets. These results suggest that stunning delayed the development of rigor mortis through 2 h PM, but had no significant effect on the measured parameters at later time points, and that deboning turkey breasts at 2 h PM or later will not significantly impair meat tenderness.
Physiological Information Database (PID)
EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...
NASA Astrophysics Data System (ADS)
Barsuk, Alexandr A.; Paladi, Florentin
2018-04-01
The dynamic behavior of a thermodynamic system, described by one order parameter and one control parameter, in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters is studied. Using the general methods of investigating the branching (bifurcations) of solutions for nonlinear equations, we performed an exhaustive analysis of the order parameter dependences on the control parameter in a small vicinity of the equilibrium values of parameters, including the stability analysis of the equilibrium states, and the asymptotic behavior of the order parameter dependences on the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of parameters are given. The influence of an external field on the dynamic behavior of the thermodynamic system is analyzed, and the peculiarities of the system's dynamic behavior near the ordinary and bifurcation equilibrium values of parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed by using the general methods of bifurcation and stability analysis presented in the paper.
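A canonical system of this kind (one order parameter η, one control parameter a, external field h), offered here purely as an illustration and not as the paper's specific model, is the Landau potential:

```latex
\Phi(\eta) = \tfrac{1}{2}\,a\,\eta^{2} + \tfrac{1}{4}\,\eta^{4} - h\,\eta ,
\qquad
\frac{\partial \Phi}{\partial \eta} = a\,\eta + \eta^{3} - h = 0 .
```

For h = 0 the equilibrium η = 0 loses stability at the bifurcation value a = 0, where the stable branches η = ±√(-a) appear; a nonzero field h breaks the symmetry of the bifurcation diagram, which is the qualitative situation arising in the ferromagnet magnetization example.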
Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi
In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for sleep stages of awake, REM (rapid eye movement), light sleep, and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters, which can be adapted to variable sleep data in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
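The decision rule described, priors combined with per-stage probability density functions of the parameters, is in essence a naive-Bayes maximum-posterior choice. A sketch with made-up stages and densities:

```python
import math

def gaussian_pdf(mu, sigma):
    # Returns a probability density function for N(mu, sigma^2).
    return lambda x: math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

def classify(parameter_values, pdfs, priors):
    """Pick the stage whose prior times parameter densities is largest
    (stage names and densities are illustrative placeholders)."""
    best_stage, best_score = None, -1.0
    for stage, prior in priors.items():
        score = prior
        for name, value in parameter_values.items():
            score *= pdfs[stage][name](value)
        if score > best_score:
            best_stage, best_score = stage, score
    return best_stage

# Toy database: EEG amplitude tends to be low when awake, high in deep sleep.
pdfs = {"awake": {"amplitude": gaussian_pdf(10.0, 5.0)},
        "deep":  {"amplitude": gaussian_pdf(50.0, 10.0)}}
priors = {"awake": 0.5, "deep": 0.5}
stage = classify({"amplitude": 45.0}, pdfs, priors)
```

In the study, the densities come from clinician-scored records rather than assumed Gaussians, which is what makes the database adaptable to new hospital data.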
Guiavarc'h, Yann P; van Loey, Ann M; Hendrickx, Marc E
2005-02-01
The possibilities and limitations of single- and multicomponent time-temperature integrators (TTIs) for evaluating the impact of thermal processes on a target food attribute whose z-target value differs from the z-TTI value(s) of the TTI are far from sufficiently documented. In this study, several thousand time-temperature profiles were generated by heat transfer simulations based on a wide range of product and process thermal parameters, considering a z-target value of 10 degrees C and a reference temperature of 121.1 degrees C, both currently used to assess the safety of food sterilization processes. These simulations included 15 different target process values F(121.1 degrees C, z = 10 degrees C) in the range 3 to 60 min. The integration of the time-temperature profiles with z-TTI values of 5.5 to 20.5 degrees C in steps of 1 degree C allowed generation of a large database containing, for each combination of product and process parameters, the correction factor to apply to the process value F(multi-TTI) derived from a single- or multicomponent TTI in order to obtain the target process value F(121.1 degrees C, z = 10 degrees C). The tabulated and graphed results clearly demonstrated that multicomponent TTIs with z-values close to 10 degrees C can be used as an extremely efficient approach when a single-component TTI with a z-value of 10 degrees C is not available. In particular, a two-component TTI with z1 and z2 values respectively above and below the z-target value (10 degrees C in this study) would be the best option for the development of a TTI to assess the safety of sterilized foods. Whatever process and product parameters are used, such a TTI allows proper evaluation of the process value F(121.1 degrees C, z = 10 degrees C).
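The process value being integrated here is the standard F-value. A minimal sketch of its discrete form for a sampled temperature profile, with Tref = 121.1 degrees C and z = 10 degrees C as in the study:

```python
def process_value(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Process (sterilization) value F = sum of 10**((T - Tref)/z) * dt
    over a sampled temperature profile; temps_c in degrees C, dt in min."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)
```

A TTI with a z-value different from the target integrates the same profile with its own z, which is why the study's database of correction factors between the two F-values is needed.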
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.
2015-08-15
Purpose: In clinical practice, a specific air kerma strength (S_K) value is used in treatment planning system (TPS) permanent brachytherapy implant calculations with ¹²⁵I and ¹⁰³Pd sources; in fact, commercial TPS provide only one S_K input value for all implanted sources, and the certified shipment average is typically used. However, the value of S_K is dispersed: this dispersion is due not only to the manufacturing process and variation between different source batches, but also to the classification of sources into different classes according to their S_K values. The purpose of this work is to examine the impact of S_K dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S_K values for each source. Three different prostate volumes (20, 30, and 40 cm³) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for ¹²⁵I sources; for the palladium, typical implants were simulated. To assess the many different possible S_K values for each source belonging to a class, the authors assigned an S_K value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions, comparing with dose distributions obtained using a uniform S_K value for all the implanted sources. The authors analyzed several dose coverage (V_100 and D_90) and overdosage parameters for prostate and PTV, and also the limiting and overdosage parameters for the OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries.
PTV and prostate V{sub 100} and D{sub 90} variations ranged between 0.2% and 1.78% for both sources. Larger variations were observed for the overdosage parameters V{sub 150} and V{sub 200} than for the dose coverage parameters and, in general, variations were larger for {sup 125}I sources than for {sup 103}Pd sources. For OAR dosimetry, larger variations with respect to the reference D{sub 0.1cm{sup 3}} were observed for rectum values, ranging from 2% to 3%, than for urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for prostate and PTV was practically unaffected by S{sub K} dispersion, as was the maximum dose deposited in the urethra due to the implant technique geometry. However, the authors observed larger variations for the PTV V{sub 150}, rectum V{sub 100}, and rectum D{sub 0.1cm{sup 3}} values. The variations in rectum parameters were caused by the specific location of sources whose S{sub K} value differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for {sup 125}I than for {sup 103}Pd. This is because for {sup 103}Pd, a greater number of sources were used to obtain a valid dose distribution than for {sup 125}I, resulting in a lower variation for each source's S{sub K} value (because the variations are statistically averaged out).
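The randomization step described above can be sketched in a few lines of Python. This is a toy stand-in, not the authors' algorithm: the "dose metric" here is simply proportional to the total implanted S_K (a real TPS computes DVH parameters such as D{sub 90} and V{sub 100} per voxel), and the source count, mean S_K, and dispersion are illustrative values.

```python
import random
import statistics

def simulate_dose_metric(n_sources, sk_mean, sk_spread, n_trials, seed=0):
    """Assign a randomized air kerma strength (S_K) to every implanted
    source and record a toy dose metric per trial."""
    rng = random.Random(seed)
    metrics = []
    for _ in range(n_trials):
        # each source draws its own S_K from the batch dispersion
        sks = [rng.gauss(sk_mean, sk_spread) for _ in range(n_sources)]
        metrics.append(sum(sks))  # stand-in for a DVH metric from a TPS
    return metrics

n_sources, sk_mean, sk_spread = 60, 0.5, 0.02   # illustrative units (U)
reference = n_sources * sk_mean                 # uniform-S_K calculation
metrics = simulate_dose_metric(n_sources, sk_mean, sk_spread, 1000)
mean_dev = abs(statistics.mean(metrics) - reference) / reference * 100
spread_pct = statistics.stdev(metrics) / reference * 100
```

As in the paper's result, the mean over many randomized trials stays essentially on the uniform-S_K reference, while the per-trial spread quantifies the dispersion effect.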
NASA Astrophysics Data System (ADS)
Kishore Mugada, Krishna; Adepu, Kumar
2018-03-01
In this research article, the effect of increasing shoulder diameter on temperature and the Zener-Hollomon (Z) parameter for friction stir butt-welded AA6082-T6 was studied. The temperature at the advancing side (AS) of the weld was measured using K-type thermocouples at four equidistant locations. The developed analytical model is used to predict the maximum temperature (Tpeak) during welding. The strain, strain rate, and Z-parameter for all the shoulders at the four locations were evaluated. The temperature increases with increasing shoulder diameter, and the maximum temperature was recorded for the 24 mm shoulder diameter. The computed log Z values are compared with the available process map; the results show that the values lie in the stable flow region and, near the stir zone, in the dynamic recrystallization (DRX) region. The axial load (Fz) and total tool torque were found to be highest for the 21 mm shoulder diameter, at 6.3 kN and 56.5 N·m, respectively.
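The Zener-Hollomon parameter itself is the standard temperature-compensated strain rate, Z = strain_rate * exp(Q/(R*T)). A minimal sketch follows; the activation energy and strain rate are order-of-magnitude assumptions for hot-worked aluminium alloys, not the paper's measured values.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def zener_hollomon(strain_rate, temp_k, activation_energy):
    """Z = strain_rate * exp(Q / (R * T)); higher Z corresponds to
    colder or faster deformation conditions."""
    return strain_rate * math.exp(activation_energy / (R * temp_k))

# Illustrative friction-stir values (assumed, not the paper's data):
Q = 156e3           # J/mol, hot-deformation activation energy for Al alloys
strain_rate = 10.0  # 1/s, order of magnitude in the stir zone
log_z = {t_c: math.log(zener_hollomon(strain_rate, t_c + 273.15, Q))
         for t_c in (400, 450, 500)}
```

This reproduces the qualitative behaviour used in the paper: ln Z falls as the peak temperature rises (e.g. with a larger shoulder), shifting the material toward the DRX region of the process map.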
Yang, Chao; Song, Cunjiang; Geng, Weitao; Li, Qiang; Wang, Yuanyuan; Kong, Meimei; Wang, Shufang
2012-01-01
The Environmentally Degradable Parameter (Ed K) is important for describing the biodegradability of environmentally biodegradable polymers (BDPs). In this study, the concept of Ed K was introduced. A test procedure using the ISO 14852 method, with evolved carbon dioxide as the analytical parameter, was developed, and the calculated Ed K was used as an indicator of the ultimate biodegradability of materials. Starch and polyethylene, used as reference materials, were assigned Ed K values of 100 and 0, respectively. Natural soil samples were inoculated into bioreactors, and the biodegradation rates of the reference materials and 15 commercial BDPs were then determined over a 2-week test period. Finally, a formula was derived to calculate the Ed K value of each material. The Ed K values of the tested materials correlated positively with their biodegradation rates in the simulated soil environment, and they indicated the relative biodegradation rate of each material among all those tested. Therefore, Ed K was shown to be a reliable indicator for quantitatively evaluating the potential biodegradability of BDPs in the natural environment. PMID:22675455
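One plausible reading of the two-point calibration (starch = 100, polyethylene = 0) is a linear rescaling of the measured biodegradation rates. The formula and the sample numbers below are illustrative assumptions, not the authors' published formula or data.

```python
def ed_k(sample_rate, starch_rate, pe_rate):
    """Two-point calibration of a biodegradation rate onto the Ed K scale:
    starch maps to 100, polyethylene maps to 0."""
    return 100.0 * (sample_rate - pe_rate) / (starch_rate - pe_rate)

# Illustrative 2-week CO2-evolution fractions (invented, not measured):
starch, pe = 0.60, 0.01
score = ed_k(0.25, starch, pe)  # a hypothetical commercial BDP
```

Any material degrading faster than polyethylene but slower than starch lands between 0 and 100, preserving the positive correlation with biodegradation rate described in the abstract.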
Development and validation of a habitat suitability model for ...
We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and salinity) to quantitatively describe habitat suitable for Z. japonica invasion based on ecology and physiology from the primary literature. Habitat suitability is defined with values ranging from zero to one, where one denotes areas most conducive to Z. japonica and zero denotes areas not likely to support Z. japonica growth. The model was applied to Yaquina Bay, Oregon, USA, an area that has well documented Z. japonica expansion over the last two decades. The highest suitability values for Z. japonica occurred in the mid to upper portions of the intertidal zone, with larger expanses occurring in the lower estuary. While the upper estuary did contain suitable habitat, most areas were not as large as in the lower estuary, due to inappropriate depth, a steeply sloping intertidal zone, and lower salinity. The lowest suitability values occurred below the lower intertidal zone, within the Yaquina River channel. The model was validated by comparison to a multi-year time series of Z. japonica maps, revealing a strong predictive capacity. Sensitivity analysis performed to evaluate the contribution of each parameter to the model prediction revealed that depth was the most important factor.
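A suitability index built from a few environmental parameters is commonly the product of per-parameter suitability curves, each scaled 0 to 1. The sketch below assumes trapezoidal curves and invented parameter ranges; the actual curve shapes and thresholds for Z. japonica come from the authors' literature review.

```python
def suitability(value, lo_zero, lo_one, hi_one, hi_zero):
    """Trapezoidal suitability curve: 0 outside [lo_zero, hi_zero],
    1 inside [lo_one, hi_one], linear ramps in between."""
    if value <= lo_zero or value >= hi_zero:
        return 0.0
    if lo_one <= value <= hi_one:
        return 1.0
    if value < lo_one:
        return (value - lo_zero) / (lo_one - lo_zero)
    return (hi_zero - value) / (hi_zero - hi_one)

def habitat_suitability(depth_m, slope_pct, salinity_psu):
    """Overall index as the product of per-parameter suitabilities (0..1).
    All breakpoints below are illustrative, not the published values."""
    s_depth = suitability(depth_m, -0.5, 0.0, 1.5, 2.5)       # intertidal band
    s_slope = suitability(slope_pct, 0.0, 0.1, 2.0, 5.0)      # gentle slopes
    s_sal = suitability(salinity_psu, 10.0, 20.0, 34.0, 36.0)  # marine-leaning
    return s_depth * s_slope * s_sal
```

The multiplicative form encodes the abstract's finding directly: one unsuitable parameter (e.g. upper-estuary salinity) drives the whole index toward zero regardless of the others.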
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k{sub 3}, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k{sub 3}. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages, including: (1) a unique solution, with no need to choose starting parameter values; (2) parameter estimates comparable in accuracy to those from nonlinear models; and (3) significantly reduced computation time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k{sub 3} estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k{sub 3} from noisy dynamic PET data.
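The advantage of a linearized model (unique solution, no starting values, no iterations) can be illustrated with a Patlak-style 2-parameter fit, which is simpler than the authors' 2- and 3-compartment models but shares the linear-in-parameters property. In practice a non-negative solver such as scipy.optimize.nnls would replace the plain normal equations used here.

```python
import math

def patlak_fit(t, cp, ct):
    """Linear-in-parameters fit of ct(t) = K * integral(cp) + V * cp(t).
    The normal equations give the unique least-squares solution directly."""
    # cumulative integral of cp by the trapezoidal rule
    icp, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
        icp.append(acc)
    # 2x2 normal equations, solved by Cramer's rule
    sxx = sum(x * x for x in icp)
    sxy = sum(x * y for x, y in zip(icp, cp))
    syy = sum(y * y for y in cp)
    bx = sum(x * c for x, c in zip(icp, ct))
    by = sum(y * c for y, c in zip(cp, ct))
    det = sxx * syy - sxy * sxy
    return (bx * syy - by * sxy) / det, (by * sxx - bx * sxy) / det

# Synthetic check: build a tissue curve from known K = 0.1, V = 0.05.
t = [0.1 * i for i in range(100)]
cp = [math.exp(-0.3 * x) for x in t]
icp, acc = [0.0], 0.0
for i in range(1, len(t)):
    acc += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
    icp.append(acc)
ct = [0.1 * a + 0.05 * b for a, b in zip(icp, cp)]
K, V = patlak_fit(t, cp, ct)
```

Because the model is linear in K and V, there is exactly one minimum and the solution is obtained in closed form, which is the property that makes such models fast enough for clinical use.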
Two-Player 2 × 2 Quantum Game in Spin System
NASA Astrophysics Data System (ADS)
Huang, Zhiming; Situ, Haozhen
2017-05-01
In this work, we study the payoffs of quantum Samaritan's dilemma played with the thermal entangled state of the XXZ spin model in the presence of Dzyaloshinskii-Moriya (DM) interaction. We discuss the effect of the anisotropy parameter, the strength of the DM interaction, and temperature on quantum Samaritan's dilemma. It is shown that although increasing the DM interaction and the anisotropy parameter generates entanglement, the players' payoffs are not decided by entanglement alone and depend on other game components such as strategy and payoff measurement. In general, entanglement and Alice's payoff evolve to a relatively stable value with the anisotropy parameter, and develop to a fixed value with the DM interaction strength, while Bob's payoff changes in the reverse direction. It is noted that the augment of Alice's payoff compensates for the loss of Bob's payoff. For different strategies, payoffs change differently with temperature. Our results and discussions can be analogously generalized to other 2 × 2 quantum static games in various spin models.
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, therefore providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
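The idea of switching a formerly fixed noise variance based on motion intensity can be sketched as follows. The naive DFT energy fraction is a simple stand-in for the authors' frequency analysis, and the Q values, cutoff bin, and threshold are invented for illustration.

```python
import math

def hf_energy_fraction(signal, cutoff_bin):
    """Naive DFT power spectrum; returns the fraction of (non-DC) power
    at or above cutoff_bin -- a simple proxy for motion intensity."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    power = []
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power.append(re * re + im * im)
    total = sum(power) or 1.0
    return sum(power[cutoff_bin - 1:]) / total

def adapt_process_noise(intensity, q_low=1e-4, q_high=1e-2, threshold=0.3):
    """Select the Kalman process-noise variance from motion intensity."""
    return q_high if intensity > threshold else q_low

n = 64
slow = [math.sin(2 * math.pi * 2 * i / n) for i in range(n)]   # gentle motion
fast = [math.sin(2 * math.pi * 20 * i / n) for i in range(n)]  # vigorous motion
q_slow = adapt_process_noise(hf_energy_fraction(slow, 8))
q_fast = adapt_process_noise(hf_energy_fraction(fast, 8))
```

A gentle, low-frequency movement keeps the small process-noise variance (trusting the model), while vigorous motion switches to the larger variance (trusting the measurements more), which is the adaptation the paper argues improves orientation estimates.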
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based, and gadolinium-concentration based with fixed and with variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for inter-patient comparison of perfusion parameters. • Technical variation is higher in permeability than in semiquantitative perfusion parameters.
Mejlholm, Ole; Dalgaard, Paw
2013-10-15
A new and extensive growth and growth boundary model for psychrotolerant Lactobacillus spp. was developed and validated for processed and unprocessed products of seafood and meat. The new model was developed by refitting and expanding an existing cardinal parameter model for growth and the growth boundary of lactic acid bacteria (LAB) in processed seafood (O. Mejlholm and P. Dalgaard, J. Food Prot. 70. 2485-2497, 2007). Initially, to estimate values for the maximum specific growth rate at the reference temperature of 25 °C (μref) and the theoretical minimum temperature that prevents growth of psychrotolerant LAB (T(min)), the existing LAB model was refitted to data from experiments with seafood and meat products reported not to include nitrite or any of the four organic acids evaluated in the present study. Next, dimensionless terms modelling the antimicrobial effect of nitrite, and acetic, benzoic, citric and sorbic acids on growth of Lactobacillus sakei were added to the refitted model, together with minimum inhibitory concentrations determined for the five environmental parameters. The new model including the effect of 12 environmental parameters, as well as their interactive effects, was successfully validated using 229 growth rates (μ(max) values) for psychrotolerant Lactobacillus spp. in seafood and meat products. Average bias and accuracy factor values of 1.08 and 1.27, respectively, were obtained when observed and predicted μ(max) values of psychrotolerant Lactobacillus spp. were compared. Thus, on average μ(max) values were only overestimated by 8%. The performance of the new model was equally good for seafood and meat products, and the importance of including the effect of acetic, benzoic, citric and sorbic acids and to a lesser extent nitrite in order to accurately predict growth of psychrotolerant Lactobacillus spp. was clearly demonstrated. The new model can be used to predict growth of psychrotolerant Lactobacillus spp. 
in seafood and meat products; e.g., prediction of the time to reach a critical cell concentration of bacteria is considered useful for establishing shelf life. In addition, the high number of environmental parameters included in the new model makes it flexible and suitable for product development, as the effect of substituting one combination of preservatives with another can be predicted. In general, the performance of the new model was unacceptable for other types of LAB, including Carnobacterium spp., Leuconostoc spp. and Weissella spp. © 2013.
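The gamma-concept structure underlying such cardinal parameter models (a reference growth rate multiplied by dimensionless 0-to-1 terms, one per environmental parameter) can be sketched as below. The temperature exponent, the MIC term form, and all numeric values are illustrative, and the interaction terms of the actual model are omitted.

```python
def gamma_temperature(t, t_min, t_ref=25.0):
    """Cardinal-parameter temperature term: 0 at or below Tmin, 1 at Tref."""
    if t <= t_min:
        return 0.0
    return ((t - t_min) / (t_ref - t_min)) ** 2

def gamma_mic(concentration, mic):
    """Dimensionless inhibitor term: 1 with no inhibitor, 0 at the MIC."""
    return max(0.0, 1.0 - concentration / mic)

def mu_max(mu_ref, t, t_min, inhibitors):
    """Multiplicative (gamma-concept) growth rate; interactive effects omitted.
    `inhibitors` is a list of (concentration, MIC) pairs."""
    g = gamma_temperature(t, t_min)
    for conc, mic in inhibitors:
        g *= gamma_mic(conc, mic)
    return mu_ref * g

# Illustrative: an organic acid at half its MIC halves the growth-rate term.
rate = mu_max(0.7, 10.0, -5.0, [(500.0, 1000.0)])
```

Each additional preservative contributes one more multiplicative term, which is why adding the nitrite and organic-acid terms to the refitted model extends it without restructuring it.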
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. As part of this effort, greatly improved values for the cold-case OLR for high-inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information as to the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
NASA Astrophysics Data System (ADS)
Hamada, Y.; Kitamura, M.; Yamada, Y.; Sanada, Y.; Moe, K.; Hirose, T.
2016-12-01
In-situ rock properties in and around the seismogenic zone of an accretionary prism are key parameters for understanding the development mechanisms of the prism, the spatio-temporal variation of the stress state, and so on. For the purpose of acquiring a continuous depth profile of in-situ formation strength in an accretionary prism, we propose here a new method to evaluate in-situ rock strength using drilling performance data. Drilling parameters are inevitably obtained by any drilling operation, even in non-coring intervals or in challenging environments where core recovery may be poor. The relationship between rock properties and drilling parameters has been proposed in previous research [e.g., Teale, 1964]. We adopted the relationship of Teale [1964] and developed a conversion method to estimate in-situ rock strength without depending on uncertain parameters such as weight on bit (WOB). Specifically, we first calculated the equivalent specific toughness (EST), which represents the gradient of the relationship between torque energy and penetrated volume over an arbitrary interval (in this study, five metres). The EST values were then converted into strength using the drilling-parameter/rock-strength correlation obtained by Karasawa et al. [2002]. This method was applied to eight drill holes at Site C0002 of IODP NanTroSEIZE in order to evaluate in-situ rock strength from the shallow to the deep accretionary prism. In the shallower part (0 - 300 mbsf), the calculated strength shows a sharp increase up to 20 MPa. The strength then remains approximately constant down to 1500 mbsf, without significant change even at the unconformity around 1000 mbsf (the boundary between the forearc basin and the accretionary prism). Below that depth, the strength gradually increases with depth, up to 60 MPa at 3000 mbsf, with variation between 10 and 80 MPa. Because the calculated strength spans approximately the same lithology, the increasing trend can be attributed to the rock strength itself.
This strength-depth curve corresponds reasonably well with the strength data of core and cutting samples collected from holes C0002N and C0002P [Kitamura et al., 2016 AGU]. These results demonstrate the validity of evaluating in-situ strength from drilling parameters.
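The gradient computation behind EST can be sketched as follows, assuming Teale-style rotary work per unit penetrated volume. The unit conventions and the least-squares-through-the-origin choice are assumptions for illustration, and the final conversion to strength via the Karasawa et al. correlation is omitted.

```python
import math

def equivalent_specific_toughness(rpm, torque_nm, rop_m_per_h,
                                  bit_area_m2, dt_s=1.0):
    """Slope of cumulative rotary work vs cumulative penetrated volume
    over an interval (least squares through the origin); units J/m^3 = Pa.
    Note: no weight-on-bit term is needed, only torque-derived energy."""
    e = v = 0.0
    energies, volumes = [], []
    for n, tq, u in zip(rpm, torque_nm, rop_m_per_h):
        e += 2.0 * math.pi * (n / 60.0) * tq * dt_s   # rotary work, J
        v += bit_area_m2 * (u / 3600.0) * dt_s        # cut volume, m^3
        energies.append(e)
        volumes.append(v)
    num = sum(ei * vi for ei, vi in zip(energies, volumes))
    den = sum(vi * vi for vi in volumes)
    return num / den

# Steady drilling: 60 rpm, 1000 N*m torque, 36 m/h ROP, 0.05 m^2 bit face.
est_pa = equivalent_specific_toughness([60.0] * 10, [1000.0] * 10,
                                       [36.0] * 10, bit_area_m2=0.05)
```

For steady drilling the slope reduces to the ratio of energy rate to volume rate, here about 12.6 MPa, the same order as the shallow-prism strengths quoted above.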
Use of DandD for dose assessment under NRC's radiological criteria for license termination rule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallegos, D.P.; Brown, T.J.; Davis, P.A.
The Decontamination and Decommissioning (DandD) software package has been developed by Sandia National Laboratories for the Nuclear Regulatory Commission (NRC) specifically for the purpose of providing a user-friendly analytical tool to address the dose criteria contained in NRC's Radiological Criteria for License Termination rule (10 CFR Part 20 Subpart E; NRC, 1997). Specifically, DandD embodies the NRC's screening methodology to allow licensees to convert residual radioactivity contamination levels at their site to annual dose, in a manner consistent with both 10 CFR Part 20 and the corresponding implementation guidance developed by NRC. The screening methodology employs reasonably conservative scenarios, fate and transport models, and default parameter values that have been developed to allow the NRC to quantitatively estimate the risk of releasing a site given only information about the level of contamination. Therefore, a licensee has the option of specifying only the level of contamination and running the code with the default parameter values, or, where site-specific information is available, of altering the appropriate parameter values and then calculating dose. DandD can evaluate dose for four different scenarios: residential, building occupancy, building renovation, or drinking water. The screening methodology and DandD are part of a larger decision framework that allows and encourages licensees to optimize decisions on the choice of alternative actions at their site, including collection of additional data and information. This decision framework is integrated into and documented in NRC's technical guidance for decommissioning.
New Quality Standards of Testing Idlers for Highly Effective Belt Conveyors
NASA Astrophysics Data System (ADS)
Król, Robert; Gladysiewicz, Lech; Kaszuba, Damian; Kisielewski, Waldemar
2017-12-01
The paper presents results of research and analyses of belt conveyor idlers' rotational resistance, which is one of the key factors indicating idler quality. Moreover, idlers' rotational resistance is an important component of the total resistance to motion of a belt conveyor. The evaluation of the technical condition of belt conveyor idlers is carried out in accordance with current national and international standards, which determine the measurement methodology and the acceptable values of the measured idler parameters. The requirements defined by these standards, which determine the suitability of idlers for a specific application, have maintained the same parameter values over long periods of time, despite the development of knowledge on idlers and the quality of presently manufactured idlers. The need to implement new, efficient and economically justified solutions for belt conveyor transportation systems characterised by long routes and energy efficiency is often discussed as one of the goals for belt conveyors' future. One of the basic conditions for achieving this goal is to use only carefully selected idlers with low rotational resistance under the full range of operational loads and with high durability. It is therefore necessary to develop new guidelines for evaluating the technical condition of belt conveyor idlers, to improve existing testing methods and to develop new ones. The changes should in particular concern updating the parameter values used for evaluating the technical condition of idlers, in relation to belt conveyors' operational challenges and the growing demands on their energy efficiency.
Extending unified-theory-of-reinforcement neural networks to steady-state operant behavior.
Calvin, Olivia L; McDowell, J J
2016-06-01
The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not. Copyright © 2016 Elsevier B.V. All rights reserved.
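The two candidate response functions compared above are easy to state in code. Herrnstein's hyperbola and the exponentiated hyperbola (with the post-hoc baseline term) are standard forms from the matching-law literature; the parameter values used here are arbitrary.

```python
def herrnstein(r, k, re):
    """Herrnstein's hyperbola: B = k*R / (R + Re)."""
    return k * r / (r + re)

def exponentiated_hyperbola(r, k, re, a, baseline=0.0):
    """Exponentiated hyperbola derived from the generalized matching law,
    with the post-hoc baseline rate: B = k*R^a / (R^a + Re^a) + b."""
    return k * r ** a / (r ** a + re ** a) + baseline

# With a = 1 and no baseline the two forms coincide; as R grows, B -> k.
rates = [exponentiated_hyperbola(r, 100.0, 20.0, 0.8) for r in (5, 20, 80, 320)]
```

The exponent a bends the curve relative to the plain hyperbola, and the baseline shifts it upward, which is the extra flexibility the post-hoc analyses found necessary to remove systematic residuals.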
Puranik, Ameya D; Nair, Gopinathan; Aggarwal, Rajiv; Bandyopadhyay, Abhijit; Shinto, Ajit; Zade, Anand
2013-04-01
The study aimed at developing a scoring system for scintigraphic grading of gastro-esophageal reflux (GER) on gastro-esophageal reflux scintigraphy (GERS), and at comparing clinical and scintigraphic scores pre- and post-treatment. A total of 39 cases with clinically symptomatic GER underwent 99mTc sulfur colloid GERS; scores were assigned based on the clinical and scintigraphic parameters. Post-domperidone GERS was performed after completion of treatment. Follow-up GERS was performed, and clinical and scintigraphic parameters were compared with baseline parameters. A paired t-test on pre- and post-domperidone-treatment clinical scores showed that the decline in post-treatment scores was highly significant, with P value < 0.001. The scintigraphic scoring system had a sensitivity of 93.9% in assessing treatment response to domperidone and a specificity of 83.3%, i.e., 83.3% of children with no decline in scintigraphic scores showed no clinical response to domperidone. The scintigraphic scoring system had a positive predictive value of 96.9% and a negative predictive value of 71.4%. GERS with its quantitative parameters is a good investigation for assessing the severity of reflux and for following children post-treatment.
An Analysis of Control Requirements and Control Parameters for Direct-Coupled Turbojet Engines
NASA Technical Reports Server (NTRS)
Novik, David; Otto, Edward W.
1947-01-01
Requirements of an automatic engine control, as affected by engine characteristics, have been analyzed for a direct-coupled turbojet engine. Control parameters for various conditions of engine operation are discussed. A hypothetical engine control is presented to illustrate the use of these parameters. An adjustable speed governor was found to offer a desirable method of over-all engine control. The selection of a minimum value of fuel flow was found to offer a means of preventing unstable burner operation during steady-state operation. Until satisfactory high-temperature-measuring devices are developed, air-fuel ratio is considered to be a satisfactory acceleration-control parameter for the attainment of the maximum acceleration rates consistent with safe turbine temperatures. No danger of unstable burner operation exists during acceleration if a temperature-limiting acceleration control is assumed to be effective. Deceleration was found to be accompanied by the possibility of burner blow-out even if a minimum fuel-flow control that prevents burner blow-out during steady-state operation is assumed to be effective. Burner blow-out during deceleration may be eliminated by varying the value of minimum fuel flow as a function of compressor-discharge pressure, but in no case should the fuel flow be allowed to fall below the value required for steady-state burner operation.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
Chaos control of Hastings–Powell model by combining chaotic motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in
2016-04-15
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the average of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, applied here for the first time to the HP system, by showing that losing strategies can combine to win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
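The PS idea can be demonstrated on any system that is linear in its control parameter. The sketch below uses a one-dimensional toy ODE, dx/dt = p*x - x**3, rather than the three-species HP model: switching p between 1 and 3 at every Euler step drives the state to the attractor of the averaged system (p = 2, fixed point sqrt(2)).

```python
import math

def integrate_switched(p_values, steps=20000, dt=1e-3, x0=0.5):
    """Euler-integrate dx/dt = p*x - x**3 while cycling p through
    p_values at every step; returns the final state."""
    x = x0
    for i in range(steps):
        p = p_values[i % len(p_values)]
        x += dt * (p * x - x ** 3)
    return x

# Switching p within {1, 3} (average 2) should land near the attractor
# of the averaged system, x* = sqrt(2).
x_switched = integrate_switched([1.0, 3.0])
x_averaged = integrate_switched([2.0])
```

With fast switching, the trajectory tracks the averaged dynamics up to a small ripple, which is the matching property the PS algorithm exploits for the HP system.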
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials) as an application for processing meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his work in 2008. The idea of the method is that if we generate random values of the input data, the equatorial coordinates of the meteor head in a sequence of TV frames, in accordance with their statistical distributions, we can plot the probability density distributions of all its kinematical parameters and obtain their mean values and dispersions. This also offers a theoretical possibility of refining the most important parameter, the geocentric velocity of the meteor, which has the greatest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector was calculated in two stages: first, the vector direction was calculated as the vector product of the poles of the meteor-trajectory great circles obtained from the two observational points; then the absolute value of the velocity was calculated independently from each observational point, one of them being selected, for some reason, as the final value. In the given method we propose to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect that such an approach will substantially increase the precision of the meteor velocity calculation and remove subjective inaccuracies.
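The "intersection of two distributions" has a closed form when both velocity estimates are modelled as Gaussians: the product of two normal densities is (up to normalization) a normal with precision-weighted mean and reduced variance. The station values below are invented for illustration.

```python
def combine_gaussian_estimates(m1, s1, m2, s2):
    """Product ('intersection') of two Gaussian distributions:
    precision-weighted mean; combined sigma below either input sigma."""
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mean, sigma

# Geocentric speed of one meteor from two stations (km/s, invented numbers):
v, s = combine_gaussian_estimates(59.2, 0.8, 60.1, 0.5)
```

The combined estimate is always tighter than either single-station value, which is why merging the two distributions beats the classical practice of discarding one of them.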
Conceptual Model Development for Sea Turtle Nesting Habitat: Support for USACE Navigation Projects
2015-08-01
regional values.
• Beach Width: The width of the beach (m) defines the region from the shoreline to the dune toe. Loggerhead turtles tend to prefer...primary drivers of the model parameters.
• Beach Elevation: Beach elevation (m) is measured from the shoreline to the dune toe. Elevation influences...mapping, and morphological features in combination with imagery-derived environmental parameters (i.e., dune vegetation) have not been attempted
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with the Fortran code 'Sequitor'.
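A minimal sketch of one such normalized likelihood ratio, for the Gaussian case only (the construction below is a plausible instance of the idea, not necessarily the exact formula implemented in 'Sequitor'):

```python
import numpy as np

def change_index(base, new):
    """Normalised likelihood ratio: likelihood of the new values under the
    Gaussian fitted to the base period, divided by their likelihood under
    their own fit.  Near 1 => compatible; dropping toward 0 => change."""
    def fit(x):
        return x.mean(), x.std()          # ML estimates (ddof=0)

    def loglik(x, mu, s):
        return -0.5 * np.sum(np.log(2.0 * np.pi * s**2) + (x - mu)**2 / s**2)

    mu0, s0 = fit(base)
    mu1, s1 = fit(new)
    return float(np.exp(loglik(new, mu0, s0) - loglik(new, mu1, s1)))

rng = np.random.default_rng(1)
base = rng.normal(10.0, 2.0, 200)
ci_same    = change_index(base, rng.normal(10.0, 2.0, 30))   # compatible batch
ci_shifted = change_index(base, rng.normal(14.0, 2.0, 30))   # mean shift
```

Because the denominator is the maximized likelihood for the new batch, the index is bounded above by 1, and a genuine parameter shift drives it sharply toward 0.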
NASA Technical Reports Server (NTRS)
Weisskopf, M. C.; Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.
2010-01-01
We present a progress report on the various endeavors we are undertaking at MSFC in support of the Wide Field X-Ray Telescope development. In particular we discuss assembly and alignment techniques, in-situ polishing corrections, and the results of our efforts to optimize mirror prescriptions including polynomial coefficients, relative shell displacements, detector placements and tilts. This optimization does not require a blind search through the multi-dimensional parameter space. Under the assumption that the parameters are small enough so that second order expansions are valid, we show that the performance at the detector can be expressed as a quadratic function with numerical coefficients derived from a ray trace through the underlying Wolter I optic. The optimal values for the parameters are found by solving the linear system of equations created by setting the derivatives of this function with respect to each parameter to zero.
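The closing step can be illustrated directly (the quadratic coefficients below are hypothetical stand-ins for values that would come from ray traces):

```python
import numpy as np

# Hypothetical quadratic expansion of the image-quality figure of merit:
#   f(p) = c + b.p + 0.5 * p^T A p,  with A symmetric positive definite.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
b = np.array([-1.0, 0.5, -0.2])

# Setting the gradient A p + b = 0 gives the optimum without any search.
p_opt = np.linalg.solve(A, -b)

grad = A @ p_opt + b   # vanishes at the optimum
```

This is why no blind search is needed: once the second-order coefficients are known, the optimum follows from one linear solve.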
NASA Astrophysics Data System (ADS)
Pombo, Maíra; Denadai, Márcia Regina; Turra, Alexander
2013-05-01
Knowledge of population parameters and the ability to predict their responses to environmental changes are useful tools to aid in the appropriate management and conservation of natural resources. Samples of the sciaenid fish Stellifer rastrifer were taken from August 2003 through October 2004 in shallow areas of Caraguatatuba Bight, southeastern Brazil. The results showed a consistent presence of length-frequency classes throughout the year and low values of the gonadosomatic index of this species, indicating that the area is not used for spawning or residence of adults, but rather shelters individuals in late stages of development. The results may serve as a caveat for assessments of transitional areas such as the present one, the nursery function of which is neglected compared to estuaries and mangroves. The danger of mismanaging these areas by not considering their peculiarities is emphasized by using these data as a study case for the development of some broadly used population-parameter analyses. The individuals' body growth parameters from the von Bertalanffy model were estimated based on the most common approaches, and the best values obtained from traditional quantitative selection methods proved very prone to bias. The low gonadosomatic index (GSI) estimated during the period was an important factor prompting the selection of more reliable body growth parameters (L∞ = 20.9, K = 0.37 and Z = 2.81), which were estimated by assuming the existence of spatial segregation by size. The data obtained suggest that the estimated mortality rate included a high rate of migration of older individuals to deeper areas, where we assume that they completed their development.
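The von Bertalanffy growth curve with the reported parameter values can be evaluated as follows (t0 is not reported in the abstract and is assumed to be 0 here):

```python
import math

# Von Bertalanffy growth with the parameter values reported in the abstract
# (L_inf = 20.9, K = 0.37); t0 is not reported, so it is assumed 0 here.
L_INF, K, T0 = 20.9, 0.37, 0.0

def vb_length(t):
    """Expected length at age t: L_inf * (1 - exp(-K * (t - t0)))."""
    return L_INF * (1.0 - math.exp(-K * (t - T0)))

lengths = [vb_length(t) for t in range(0, 11)]
```

Length rises monotonically from the t0 intercept and approaches the asymptotic size L∞ without reaching it.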
NASA Astrophysics Data System (ADS)
Domanskyi, Sergii; Schilling, Joshua E.; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir
2016-09-01
We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
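A minimal rate-equation sketch of the kind of cell-fate kinetics described (the compartments and rate constants below are illustrative assumptions, not the authors' model, and a simple explicit integrator is used rather than their stochastic ensemble method):

```python
# Healthy -> infected -> apoptotic or necrotic, as population fractions.
# Rate constants (per hour) are hypothetical, purely for illustration.
k_inf, k_apo, k_nec = 0.8, 0.3, 0.1

H, I, A, N = 1.0, 0.0, 0.0, 0.0
dt, t_end = 0.001, 48.0

for _ in range(int(t_end / dt)):
    dH = -k_inf * H
    dI = k_inf * H - (k_apo + k_nec) * I
    dA = k_apo * I
    dN = k_nec * I
    H += dH * dt; I += dI * dt; A += dA * dt; N += dN * dt
```

Total cell count is conserved by construction, and the apoptotic-to-necrotic split settles at the ratio of the two exit rates; real stiff systems of this kind are exactly where the paper's ensemble solver earns its keep over naive explicit stepping.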
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of fifteen 1-min pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464–.697) and intra-individual consistency (Cronbach's α: .880–.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
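A minimal PCT-style tracking loop can be sketched as follows (a toy negative-feedback model with a hypothetical gain and internal reference value, not the authors' fitted architecture):

```python
import math

# Perception = cursor-target offset, compared with an internally set
# reference REF; the output integrates GAIN * error.  Values hypothetical.
GAIN, REF = 8.0, 0.0
dt = 0.01
cursor = 0.0
errors = []

for step in range(6000):                 # 60 s of simulated tracking
    t = step * dt
    target = math.sin(0.7 * t) + 0.5 * math.sin(1.9 * t)  # pseudo-random course
    perception = cursor - target
    error = REF - perception
    cursor += GAIN * error * dt          # output acts to cancel the error
    errors.append(abs(perception - REF))

mean_err = sum(errors[1000:]) / len(errors[1000:])
```

With this gain the cursor stays close to the moving target; fitting GAIN and REF to an individual's recorded trajectory by least squares is the optimization step the study describes.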
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of its key parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM is proposed. First, we employ the ABM as a simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained on the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated with the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that it not only has good fitting and predictive accuracy but also offers favorable computational efficiency. PMID:29194393
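The implausibility-screening step of history matching can be sketched in a few lines (a one-parameter toy with an assumed linear emulator, not the paper's GAM emulator; the cutoff of 3 is the conventional choice):

```python
import numpy as np

rng = np.random.default_rng(2)

def emulator_mean(x):            # pretend emulator of the simulator output
    return 2.0 * x + 1.0

emulator_var = 0.05**2           # emulator uncertainty (assumed constant)
obs, obs_var = 3.0, 0.1**2       # observation and its error variance

candidates = rng.uniform(0.0, 2.0, 10_000)     # prior parameter space
implaus = np.abs(obs - emulator_mean(candidates)) / np.sqrt(emulator_var + obs_var)
non_implausible = candidates[implaus <= 3.0]   # values kept for the next wave
```

The retained set brackets the parameter value consistent with the observation (here near 1.0), and only this reduced space is passed to the fitting stage (PSO in the paper).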
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crommentuijn, T.; Brils, J.; Van Straalen, N.M.
1993-10-01
To understand the consequences of soil pollution on higher levels of biological organization, the chain of effects of cadmium on several interrelated responses was studied in a chronic toxicity experiment using the collembolan species Folsomia candida (Willem) in an artificial soil. The individual parameters survival, growth, and number of offspring were determined after different time intervals up to 9 weeks. The accumulation of cadmium in springtails and the population increase during the experimental period were also determined. By combining all the mentioned parameters and their development in time, a detailed picture of the action of cadmium on F. candida was obtained. In order of decreasing sensitivity the EC50 values for Von Bertalanffy growth, number of offspring, population increase, and survival were 256, > 326, 475, and 850 micrograms Cd/g dry soil, respectively. The ultimate LC50 value and also the equilibrium body burden were reached after about 20 days. Reproduction started later because of retarded growth, but was not affected directly and eventually reached the control level. The results are discussed in light of the seemingly contradictory ideas of Halbach (1984, Hydrobiologia 109, 79-96) and Meyer et al. (1987, Environ. Toxicol. Chem. 6, 115-126) about the sensitivity of individual and population parameters. It appears to be very important to know how individual parameters develop in time so that the most sensitive parameter and the consequences for higher levels of biological organization can be determined.
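For reference, an EC50 parameterizes the standard log-logistic concentration-response curve; the sketch below uses the reported growth EC50 of 256 micrograms Cd/g dry soil with a hypothetical slope, since the abstract does not report one:

```python
# Log-logistic concentration-response: EC50 from the abstract (growth
# endpoint); the slope h is a hypothetical placeholder.
EC50, h = 256.0, 2.0

def response(conc):
    """Fraction of the control-level response remaining at concentration conc."""
    return 1.0 / (1.0 + (conc / EC50) ** h)

at_ec50 = response(256.0)
```

By definition the response is halved at the EC50, which is what makes the ranking of endpoint EC50 values a direct ranking of sensitivity.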
Could CT screening for lung cancer ever be cost effective in the United Kingdom?
Whynes, David K
2008-01-01
Background The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method I develop a simple, deterministic model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings.
On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756
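The headline ratio is simple arithmetic on incremental costs and effects; the sketch below reproduces it with round numbers consistent with the abstract (the 0.014 QALY gain is a back-calculated illustration, not a figure from the paper):

```python
# Incremental cost-effectiveness ratio: extra cost per extra QALY.
def icer(cost_screen, cost_none, qaly_screen, qaly_none):
    return (cost_screen - cost_none) / (qaly_screen - qaly_none)

# ~200 GBP extra cost per person screened; ~0.014 extra QALYs would give
# the ~14,000 GBP/QALY figure (the QALY gain here is illustrative).
ratio = icer(cost_screen=200.0, cost_none=0.0,
             qaly_screen=0.014, qaly_none=0.0)
```

The sensitivity range quoted (roughly £7,000 to £20,000) corresponds directly to varying the incremental cost and QALY inputs in this ratio.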
NASA Astrophysics Data System (ADS)
Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei
2017-05-01
The perfectly matched layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency dependent. This can help to suppress converted evanescent waves from near grazing incident waves, but does not efficiently absorb low-frequency waves below the cut-off frequency. To absorb both the evanescent wave and the low-frequency wave, the double-pole CFS-PML having two poles in the coordinate stretching function was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find a significant difference compared with the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy for setting optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broad-band seismic wave simulation. We find that when the maximum-to-minimum frequency ratio is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves, or produce visible low-frequency reflection, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyse the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters can generate satisfactory results for broad-band seismic wave simulations.
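For orientation, the stretching functions at issue are commonly written as follows (notation and sign conventions vary between papers; this is the usual form, not necessarily the exact one used here):

```latex
% Single-pole CFS stretching function, with scaling \kappa, damping profile d,
% and frequency shift \alpha:
s(\omega) = \kappa + \frac{d}{\alpha + i\omega}
% Double-pole form: a product of two such factors, giving two poles, so one
% factor can target evanescent waves and the other the low frequencies:
s(\omega) = \left(\kappa_1 + \frac{d_1}{\alpha_1 + i\omega}\right)
            \left(\kappa_2 + \frac{d_2}{\alpha_2 + i\omega}\right)
```

The extra pole is what allows the absorption to remain effective both above and below the cut-off frequency set by a single α.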
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output in this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of Polypropylene (PP) was selected as the study part. Optimisation of the process parameters is performed in Design Expert software with the aim of minimising the obtained warpage value. Response Surface Methodology (RSM) is applied together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimal error value; overall, the study shows that the warpage value is improved by using RSM.
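The RSM idea reduces, in the one-factor case, to fitting a second-order model and solving for the stationary point; the sketch below uses hypothetical warpage data, not the study's Moldflow results:

```python
import numpy as np

# Hypothetical warpage responses at five melt-temperature settings.
melt_temp = np.array([200.0, 210.0, 220.0, 230.0, 240.0])   # deg C
warpage   = np.array([0.82, 0.61, 0.55, 0.63, 0.85])        # mm

# Second-order response model: warpage ~ c2*T^2 + c1*T + c0.
c2, c1, c0 = np.polyfit(melt_temp, warpage, 2)
t_opt = -c1 / (2.0 * c2)        # stationary point = minimising setting
```

With several factors the same logic applies to a full quadratic surface with interaction terms, which is where ANOVA identifies which terms matter.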
GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL
NASA Technical Reports Server (NTRS)
Johnson, D.
1994-01-01
The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric-exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values.
The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.
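The abstract does not specify the fairing scheme used between 90 km and 115 km; a cosine-weighted blend is a common choice and is assumed here purely for illustration:

```python
import math

# Smooth transition from the modified-Groves value at 90 km to the Jacchia
# value at 115 km.  The cosine weighting is an assumption, not GRAM's scheme.
Z_LO, Z_HI = 90.0, 115.0

def faired(z, groves_value, jacchia_value):
    """Blend the two model values across the transition band."""
    if z <= Z_LO:
        return groves_value
    if z >= Z_HI:
        return jacchia_value
    w = 0.5 * (1.0 - math.cos(math.pi * (z - Z_LO) / (Z_HI - Z_LO)))
    return (1.0 - w) * groves_value + w * jacchia_value

mid = faired(102.5, 180.0, 300.0)   # e.g. blending two temperature values (K)
```

Any such fairing matches each model exactly at its own boundary and varies smoothly in between, avoiding artificial jumps in the altitude profiles.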
Development and Performance Assessment of White LED Dimmer
NASA Astrophysics Data System (ADS)
Maiti, Pradip Kr.; Roy, Biswanath
2017-10-01
A microcontroller-based electronic dimmer is developed using the pulse width modulation technique. This dimmer is controllable by an infra-red remote within a distance of 4 m and can be electrically connected between an LED module and its driver. The performance of the developed LED dimmer is assessed on the basis of the variation of the photometric parameters of commercially available warm white and cool white LED luminaires used in indoor lighting applications. Four equally spaced dimming levels are considered to measure luminous efficacy, spectral power distribution, CIE 1931 chromaticity coordinates, CIE 1976 CIELUV color difference, correlated color temperature, the general color rendering index, and one specific color rendering index for a saturated red color sample. Variations of the above parameters are determined with reference to the values measured at rated voltage without the developed dimmer. Analysis of the experimentally measured data shows that the developed LED dimmer is capable of varying the light output of the WLED luminaire within a range of 25-100% without appreciable variation of its photometric and color parameters. The only exception is the luminous efficacy, which shows about 17 and 14.7% reduction for the warm white and cool white LED luminaires, respectively, at the 25% dimming level.
The Value of Information in Decision-Analytic Modeling for Malaria Vector Control in East Africa.
Kim, Dohyeong; Brown, Zachary; Anderson, Richard; Mutero, Clifford; Miranda, Marie Lynn; Wiener, Jonathan; Kramer, Randall
2017-02-01
Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare a VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by up to 5-21%, with the highest VOI associated with jointly eliminating uncertainty about the reproductive speed of malaria-transmitting mosquitoes and the initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs. © 2016 Society for Risk Analysis.
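The VOI calculation rests on a standard identity: the expected value of perfect information is the expected payoff of deciding after learning the uncertain parameter, minus the payoff of the best decision made under uncertainty. A toy two-policy sketch (hypothetical net-benefit functions, not the MDAST model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncertain entomological parameter, with net benefits of two policies
# depending on it (all functional forms and numbers are hypothetical).
theta = rng.normal(0.5, 0.2, 100_000)

nb = np.column_stack([
    40.0 + 0.0 * theta,        # policy A: insensitive to theta
    20.0 + 45.0 * theta,       # policy B: net benefit rises with theta
])

best_on_average = nb.mean(axis=0).max()   # choose now, under uncertainty
average_of_best = nb.max(axis=1).mean()   # choose after learning theta
evpi = average_of_best - best_on_average  # expected value of perfect info
```

EVPI is non-negative by construction; it is positive exactly when learning the parameter would sometimes flip the preferred policy, which is the sense in which MDAST's uncertain parameters carry value.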
Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David
2018-05-03
Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. 
This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
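The indirect response structure described (drug concentration stimulating LDL-C elimination through a Hill term) can be sketched as follows; all parameter values are hypothetical placeholders, not the fitted population estimates:

```python
# Indirect response model with stimulation of elimination:
#   dLDL/dt = KIN - KOUT * (1 + EMAX*C^H / (EC50^H + C^H)) * LDL
# All parameter values below are hypothetical placeholders.
KIN, KOUT = 10.0, 0.1          # production (mg/dL/day), elimination (1/day)
EMAX, EC50, H = 2.0, 5.0, 1.5  # maximal stimulation, potency, Hill coefficient

def stimulation(c):
    return 1.0 + EMAX * c**H / (EC50**H + c**H)

ldl = KIN / KOUT               # baseline steady state (100 mg/dL here)
dt = 0.01
for _ in range(int(60 / dt)):  # 60 days at a constant drug concentration
    c = 20.0                   # assumed steady alirocumab concentration
    ldl += (KIN - KOUT * stimulation(c) * ldl) * dt

expected_ss = KIN / (KOUT * stimulation(20.0))
```

LDL-C settles at a lower steady state set by the stimulated elimination rate; the Hill coefficient controls how steeply that reduction saturates with concentration.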
Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example
Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.
2016-02-10
The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications.
The goal of this modeling tool is to be a user-friendly modeling tool for developing fish population models useful to natural resource managers to inform their decision-making processes; however, as with all population models, caution is needed, and a full understanding of the limitations of a model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.
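The variance treatment described, with variance applied at the iteration and time-step levels, can be sketched as a two-level Monte Carlo projection (rates and standard deviations below are hypothetical):

```python
import math
import random

random.seed(4)

# Each iteration draws its own mean growth rate (iteration-level variance);
# each time step then adds temporal noise (time-step-level variance).
N_ITER, N_STEPS = 1000, 20
MU_R, SIGMA_ITER, SIGMA_STEP = 0.02, 0.05, 0.10

finals = []
for _ in range(N_ITER):
    r_i = random.gauss(MU_R, SIGMA_ITER)       # iteration-level draw
    pop = 100.0
    for _ in range(N_STEPS):
        r_t = random.gauss(r_i, SIGMA_STEP)    # time-step-level draw
        pop *= math.exp(r_t)
    finals.append(pop)

mean_final = sum(finals) / len(finals)
```

The spread of `finals` across iterations is where the tool's uncertainty in projected population size comes from; garbage parameter values in, garbage spread out, which is the caution the report raises.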
Portman, Michelle E.; Shabtay-Yanai, Ateret; Zanzuri, Asaf
2016-01-01
Developed decades ago for spatial choice problems related to zoning in the urban planning field, multicriteria analysis (MCA) has more recently been applied to environmental conflicts and presented in several documented cases for the creation of protected area management plans. Its application is considered here for the development of zoning as part of a proposed marine protected area management plan. The case study incorporates spatially-explicit conservation features while considering stakeholder preferences, expert opinion and characteristics of data quality. It involves the weighting of criteria using a modified analytical hierarchy process. Experts ranked physical attributes which include socio-economically valued physical features. The parameters used for the ranking of (physical) attributes important for socio-economic reasons are derived from the field of ecosystem services assessment. Inclusion of these feature values results in protection that emphasizes those areas closest to shore, most likely because of accessibility and familiarity parameters and because of data biases. Therefore, other spatial conservation prioritization methods should be considered to supplement the MCA and efforts should be made to improve data about ecosystem service values farther from shore. Otherwise, the MCA method allows incorporation of expert and stakeholder preferences and ecosystem services values while maintaining the advantages of simplicity and clarity. PMID:27183224
NASA Technical Reports Server (NTRS)
Morin, Cory; Monaghan, Andrew; Quattrochi, Dale; Crosson, William; Hayden, Mary; Ernst, Kacey
2015-01-01
Dengue fever is a mosquito-borne viral disease reemerging throughout much of the tropical Americas. Dengue virus transmission is explicitly influenced by climate and the environment through its primary vector, Aedes aegypti. Temperature regulates Ae. aegypti development, survival, and replication rates as well as the incubation period of the virus within the mosquito. Precipitation provides water for many of the preferred breeding habitats of the mosquito, including buckets, old tires, and other places water can collect. Although transmission regularly occurs along the border region in Mexico, dengue virus transmission in bordering Arizona has not occurred. Using NASA's TRMM (Tropical Rainfall Measuring Mission) satellite for precipitation input and Daymet for temperature and supplemental precipitation input, we modeled dengue transmission along a US-Mexico transect using a dynamic dengue transmission model that includes interacting vector ecology and epidemiological components. Model runs were performed for 5 cities in Sonora, Mexico and southern Arizona. Employing a Monte Carlo approach, we performed ensembles of several thousand model simulations in order to resolve the model uncertainty arising from using different combinations of parameter values that are not well known. For cities with reported dengue case data, the top model simulations that best reproduced dengue case numbers were retained and their parameter values were extracted for comparison. These parameter values were used to run simulations in areas where dengue virus transmission does not occur or where dengue fever case data were unavailable. Additional model runs were performed to reveal how changes in climate or parameter values could alter transmission risk along the transect. The relative influence of climate variability and model parameters on dengue virus transmission is assessed to help public health workers prepare location-specific infection prevention strategies.
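A Monte Carlo calibration of this kind can be sketched in a few lines: draw parameter sets from plausible ranges, score each simulation against observed case counts, and retain the best-fitting sets. Everything below (the toy model, the parameter names, and their ranges) is illustrative, not the authors' model:

```python
import random

def toy_transmission_model(bite_rate, survival, temperature):
    """Invented stand-in for the dynamic transmission model: four weekly
    case counts that rise with biting rate, vector survival, and warmth."""
    return [bite_rate * survival * max(0.0, temperature - 15.0) * w
            for w in (1, 2, 3, 2)]

def calibrate_ensemble(observed, temperature, n_runs=2000, keep=50, seed=1):
    """Monte Carlo sketch: draw uncertain parameter sets, score each
    simulation against observed cases, and retain the best-fitting sets."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_runs):
        params = (rng.uniform(0.1, 1.0),    # biting rate (assumed range)
                  rng.uniform(0.5, 0.99))   # vector survival (assumed range)
        sim = toy_transmission_model(*params, temperature)
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        scored.append((sse, params))
    scored.sort(key=lambda t: t[0])
    return [p for _, p in scored[:keep]]

retained = calibrate_ensemble(observed=[2.0, 4.0, 6.0, 4.0], temperature=25.0)
```

The retained parameter sets can then be compared across cities, or reused to drive simulations where case data are missing, as the abstract describes.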
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
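The nested Monte Carlo estimator that the authors seek to avoid is worth seeing concretely: an outer loop samples the parameter of interest, and an inner loop averages net benefit over the remaining parameters. The two-decision net-benefit function below is invented for illustration; for this toy problem the analytic EVPPI is 0.45:

```python
import random
import statistics

def net_benefit(decision, theta, phi):
    """Invented net-benefit: treating (decision=1) pays off when efficacy
    theta is high, with cost perturbed by a nuisance parameter phi."""
    return theta * 10.0 - 3.0 - phi if decision == 1 else 0.0

def evppi_nested(n_outer=4000, n_inner=100, seed=7):
    """Nested Monte Carlo EVPPI for theta: the outer loop samples theta,
    the inner loop averages over the remaining uncertain parameter phi."""
    rng = random.Random(seed)
    outer_max = []
    overall = {0: [], 1: []}
    for _ in range(n_outer):
        theta = rng.uniform(0.0, 1.0)       # parameter of interest
        inner = {0: [], 1: []}
        for _ in range(n_inner):
            phi = rng.gauss(0.0, 1.0)       # remaining uncertainty
            for d in (0, 1):
                nb = net_benefit(d, theta, phi)
                inner[d].append(nb)
                overall[d].append(nb)
        outer_max.append(max(statistics.fmean(inner[d]) for d in (0, 1)))
    value_perfect = statistics.fmean(outer_max)
    value_current = max(statistics.fmean(overall[d]) for d in (0, 1))
    return value_perfect - value_current

evppi = evppi_nested()
```

The n_outer * n_inner cost, and the inner-loop bias when n_inner is small, are exactly what the article's reparameterization, Taylor-series, and spline methods are designed to sidestep.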
Chase, K.J.
2011-01-01
This report documents the development of a precipitation-runoff model for the South Fork Flathead River Basin, Mont. The Precipitation-Runoff Modeling System model, developed in cooperation with the Bureau of Reclamation, can be used to simulate daily mean unregulated streamflow upstream and downstream from Hungry Horse Reservoir for water-resources planning. Two input files are required to run the model. The time-series data file contains daily precipitation data and daily minimum and maximum air-temperature data from climate stations in and near the South Fork Flathead River Basin. The parameter file contains values of parameters that describe the basin topography, the flow network, the distribution of the precipitation and temperature data, and the hydrologic characteristics of the basin soils and vegetation. A primary-parameter file was created for simulating streamflow during the study period (water years 1967-2005). The model was calibrated for water years 1991-2005 using the primary-parameter file. This calibration was further refined using snow-covered area data for water years 2001-05. The model then was tested for water years 1967-90. Calibration targets included mean monthly and daily mean unregulated streamflow upstream from Hungry Horse Reservoir, mean monthly unregulated streamflow downstream from Hungry Horse Reservoir, basin mean monthly solar radiation and potential evapotranspiration, and daily snapshots of basin snow-covered area. Simulated streamflow generally was in better agreement with observed streamflow at the upstream gage than at the downstream gage. Upstream from the reservoir, simulated mean annual streamflow was within 0.0 percent of observed mean annual streamflow for the calibration period and was about 2 percent higher than observed mean annual streamflow for the test period. 
Simulated mean April-July streamflow upstream from the reservoir was about 1 percent lower than observed streamflow for the calibration period and about 4 percent higher than observed for the test period. Downstream from the reservoir, simulated mean annual streamflow was 17 percent lower than observed streamflow for the calibration period and 12 percent lower than observed streamflow for the test period. Simulated mean April-July streamflow downstream from the reservoir was 13 percent lower than observed streamflow for the calibration period and 6 percent lower than observed streamflow for the test period. Calibrating to solar radiation, potential evapotranspiration, and snow-covered area improved the model representation of evapotranspiration, snow accumulation, and snowmelt processes. Simulated basin mean monthly solar radiation values for both the calibration and test periods were within 9 percent of observed values except during the month of December (28 percent different). Simulated basin potential evapotranspiration values for both the calibration and test periods were within 10 percent of observed values except during the months of January (100 percent different) and February (13 percent different). The larger percent errors in simulated potential evaporation occurred in the winter months when observed potential evapotranspiration values were very small; in January the observed value was 0.000 inches and in February the observed value was 0.009 inches. Simulated start of melting of the snowpack occurred at about the same time as observed start of melting. The simulated snowpack accumulated to 90-100 percent snow-covered area 1 to 3 months earlier than observed snowpack. This overestimated snowpack during the winter corresponded to underestimated streamflow during the same period. 
In addition to the primary-parameter file, four other parameter files were created: for a "recent" period (1991-2005), a historical period (1967-90), a "wet" period (1989-97), and a "dry" period (1998-2005). For each data file of projected precipitation and air temperature, a single parameter file can be used to simulate a s
Brownian motion model with stochastic parameters for asset prices
NASA Astrophysics Data System (ADS)
Ching, Soo Huei; Hin, Pooi Ah
2013-09-01
The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.
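For reference, the fixed-parameter geometric Brownian motion model of asset prices, plus a crude time-varying-parameter variant; the paper's conditional-distribution update is considerably richer than this random-walk stand-in, and the starting values below are assumptions:

```python
import math
import random

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    """One geometric Brownian motion path:
    S(t+dt) = S(t) * exp((mu - sigma**2 / 2) * dt + sigma * sqrt(dt) * Z)."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

def simulate_gbm_stochastic(s0, dt, n_steps, rng):
    """Variant in which (mu, sigma) drift between steps -- a crude
    random-walk stand-in for the paper's conditional-distribution update."""
    mu, sigma = 0.05, 0.2          # assumed starting drift and volatility
    path = [s0]
    for _ in range(n_steps):
        mu += rng.gauss(0.0, 0.01)
        sigma = max(0.05, sigma + rng.gauss(0.0, 0.005))
        path.append(simulate_gbm(path[-1], mu, sigma, dt, 1, rng)[-1])
    return path

rng = random.Random(42)
fixed_path = simulate_gbm(100.0, 0.1, 0.2, 1 / 252, 252, rng)     # one trading year
varying_path = simulate_gbm_stochastic(100.0, 1 / 252, 252, rng)
```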
Modelling biological invasions: species traits, species interactions, and habitat heterogeneity.
Cannas, Sergio A; Marco, Diana E; Páez, Sergio A
2003-05-01
In this paper we explore the integration of different factors to understand, predict and control ecological invasions, through a general cellular automaton model developed especially for this purpose. The model includes life history traits of several species in a modular structure of multiple interacting cellular automata. We performed simulations using field values corresponding to the exotic Gleditsia triacanthos and native co-dominant trees in a montane area. Presence of a G. triacanthos juvenile bank was a determinant condition for invasion success. The main parameters influencing invasion velocity were mean seed dispersal distance and minimum reproductive age. Seed production had a small influence on the invasion velocity. Velocities predicted by the model agreed well with estimations from field data. Predicted population density values matched field values closely. The modular structure of the model, the explicit interaction between the invader and the native species, and the simplicity of parameters and transition rules are novel features of the model.
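The role of minimum reproductive age and dispersal in setting the front speed can be illustrated with a one-dimensional cellular-automaton sketch: cells hold the invader's age, trees past a minimum reproductive age seed their neighbours, and establishment succeeds with some probability. All parameter values here are invented, not the field-calibrated ones:

```python
import random

def simulate_invasion(width=200, years=60, min_repro_age=5,
                      p_establish=0.6, seed=3):
    """1-D cellular-automaton sketch of an invasion front. Cells hold the
    invader's age (0 = empty); trees at or past the minimum reproductive
    age seed adjacent cells, which establish with probability p_establish.
    Returns the mean front speed in cells per year."""
    rng = random.Random(seed)
    age = [0] * width
    age[0] = min_repro_age                           # founder at the left edge
    for _ in range(years):
        age = [a + 1 if a > 0 else 0 for a in age]   # everyone ages one year
        adults = [i for i, a in enumerate(age) if a >= min_repro_age]
        for i in adults:
            for j in (i - 1, i + 1):                 # nearest-neighbour dispersal
                if 0 <= j < width and age[j] == 0 and rng.random() < p_establish:
                    age[j] = 1                       # new recruit
    front = max(i for i, a in enumerate(age) if a > 0)
    return front / years

velocity = simulate_invasion()
```

Raising min_repro_age lengthens the generation time and slows the front, which is the qualitative effect the abstract reports; longer-range dispersal kernels would speed it up.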
An expert system for prediction of aquatic toxicity of contaminants
Hickey, James P.; Aldridge, Andrew J.; Passino, Dora R. May; Frank, Anthony M.; Hushon, Judith M.
1990-01-01
The National Fisheries Research Center-Great Lakes has developed an interactive computer program in muLISP that runs on an IBM-compatible microcomputer and uses a linear solvation energy relationship (LSER) to predict acute toxicity to four representative aquatic species from the detailed structure of an organic molecule. Using the SMILES formalism for a chemical structure, the expert system identifies all structural components and uses a knowledge base of rules based on an LSER to generate four structure-related parameter values. A separate module then relates these values to toxicity. The system is designed for rapid screening of potential chemical hazards before laboratory or field investigations are conducted and can be operated by users with little toxicological background. This is the first expert system based on LSER, relying on the first comprehensive compilation of rules and values for the estimation of LSER parameters.
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a fungus known as one of the elemental components in drugs used to prevent cancers. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, many experiments focusing on single factors were conducted and a large amount of experimental data was generated. In this work, we use the data collected from experiments for regression analysis, and then a mathematical model for predicting Phellinus production is achieved. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of parameters involved in culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized values of these parameters are in accordance with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
A New National MODIS-Derived Phenology Data Set Every 16 Days, 2002 through 2006
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Spruce, J.; Gasser, G.; Hoffman, F. M.; Lee, D.
2008-12-01
A new national phenology data set has been developed, comprising a series of seamless 231 m national maps, every 16 days from 2001 through 2006. The data set was developed jointly by the Eastern Forest Environmental Threat Assessment Center (EFETAC) of the USDA Forest Service, and contractors of the NASA Stennis Space Center. The data are available now for dissemination and use. The first half of the National Phenology Data Set is the cumulative area under the NDVI curve since Jan 1, which increases monotonically every 16 days until the end of the year. These cumulative data values "latch" in the event of clouds or snow, remaining at the value from when the cell was last seen. The second half is a set of diagnostic parameters fit to the annual NDVI function. The spring minimum, the 20% rise, the 80% rise, the leaf-on maximum, the 80% fall, the 20% fall, and the trailing fall minimum are determined for each map cell. For each parameter, we produce both a national map of the NDVI value and a map of the day-of-year when that NDVI value was reached. Length of growing season, as the difference between the spring and fall 20% DOYs, and date of middle of growing season can be mapped as well. The new dataset has permitted the development of a set of national phenological ecoregions, and has also proven useful for mapping Gypsy Moth defoliation, simultaneously delineating the aftermath of three Gulf Coast hurricanes, and quantifying suburban/ex-urban development surrounding metro Atlanta.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
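The recursive least squares step described above, relating longitudinal force to slip ratio at each tire, can be sketched for the scalar case y = θx with a forgetting factor; the tire-stiffness value and noise level below are illustrative, not from the dissertation:

```python
import random

class RecursiveLeastSquares:
    """Scalar recursive least squares for y = theta * x, with forgetting
    factor lam so the estimator can track slowly varying parameters."""
    def __init__(self, theta0=0.0, p0=1000.0, lam=0.99):
        self.theta, self.p, self.lam = theta0, p0, lam

    def update(self, x, y):
        k = self.p * x / (self.lam + x * self.p * x)   # gain
        self.theta += k * (y - self.theta * x)         # correct by innovation
        self.p = (self.p - k * x * self.p) / self.lam  # covariance update
        return self.theta

# Hypothetical data: normalized longitudinal force = stiffness * slip ratio,
# with a true stiffness of 20 and additive measurement noise.
rng = random.Random(0)
rls = RecursiveLeastSquares()
for _ in range(500):
    slip = rng.uniform(0.01, 0.1)
    force = 20.0 * slip + rng.gauss(0.0, 0.05)
    estimate = rls.update(slip, force)
```

The estimated slope plays the role of the friction-related stiffness from which the tire-road friction coefficient is inferred; the forgetting factor lets the estimate follow a change in road surface.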
Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training
NASA Astrophysics Data System (ADS)
Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek
2016-07-01
This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.
Gupta, Jasmine; Nunes, Cletus; Vyas, Shyam; Jonnalagadda, Sriramakamal
2011-03-10
The objectives of this study were (i) to develop a computational model based on the molecular dynamics technique to predict the miscibility of indomethacin in carriers (polyethylene oxide, glucose, and sucrose) and (ii) to experimentally verify the in silico predictions by characterizing the drug-carrier mixtures using thermoanalytical techniques. Molecular dynamics (MD) simulations were performed using the COMPASS force field, and the cohesive energy density and the solubility parameters were determined for the model compounds. The magnitude of the difference in the solubility parameters of drug and carrier is indicative of their miscibility. The MD simulations predicted indomethacin to be miscible with polyethylene oxide, borderline miscible with sucrose, and immiscible with glucose. The solubility parameter values obtained using the MD simulations were in reasonable agreement with those calculated using group contribution methods. Differential scanning calorimetry showed melting point depression of polyethylene oxide with increasing levels of indomethacin accompanied by peak broadening, confirming miscibility. In contrast, thermal analysis of blends of indomethacin with sucrose and glucose verified general immiscibility. The findings demonstrate that molecular modeling is a powerful technique for determining the solubility parameters and predicting miscibility of pharmaceutical compounds. © 2011 American Chemical Society
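The miscibility screen reduces to simple arithmetic once the cohesive energy density (CED) is in hand: the Hildebrand solubility parameter is δ = √CED, and a common rule of thumb (the Greenhalgh-style criterion) classes a drug-carrier pair by the gap |Δδ|. A sketch, with hypothetical CED values rather than the simulated COMPASS results:

```python
import math

def solubility_parameter(ced_j_per_cm3):
    """Hildebrand solubility parameter: delta = sqrt(CED).
    With CED in J/cm^3, delta comes out in MPa**0.5 (1 J/cm^3 = 1 MPa)."""
    return math.sqrt(ced_j_per_cm3)

def miscibility_class(delta_a, delta_b):
    """Rule-of-thumb classification on the solubility-parameter gap:
    < 7 MPa**0.5 suggests miscibility, > 10 suggests immiscibility."""
    gap = abs(delta_a - delta_b)
    if gap < 7.0:
        return "likely miscible"
    if gap <= 10.0:
        return "borderline"
    return "likely immiscible"

# Hypothetical CED values (J/cm^3):
delta_drug = solubility_parameter(484.0)     # 22.0 MPa**0.5
delta_carrier = solubility_parameter(400.0)  # 20.0 MPa**0.5
verdict = miscibility_class(delta_drug, delta_carrier)
```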
The influence of cooling parameters on the speed of continuous steel casting
NASA Astrophysics Data System (ADS)
Tirian, G. O.; Gheorghiu, C. A.; Hepuţ, T.; Chioncel, C. P.
2018-01-01
This paper analyzes the influence of cooling parameters on continuous casting speed. In the research carried out, we aimed to establish correlation equations between the parameters characterizing the continuous casting process: the temperature of the steel at the entrance to the crystallizer, the superheating of the steel, and the flow of the cooling water in the crystallizer and in different zones of the secondary cooling. In parallel with these parameters, the values of the casting speed were also recorded. The research was made for the casting of round ϕ270 mm semi-finished steel products. The steel was developed in an electric EBT furnace with a capacity of 100 t, treated in L.F. (Ladle-Furnace) and VD (Vacuum-Degassing), and poured in a 5-wire continuous casting plant. The obtained data were processed in MATLAB using three types of correlation equations. The results are presented in both analytical and graphical form, each correlation being analyzed from the technological point of view, indicating the optimal values for the independent parameters monitored. In the analysis we present a comparison between the results obtained with the three types of equations for each correlation.
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups
Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.
2016-01-01
Purpose: This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method: Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results: Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions: The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214
Tempel, Zachary J; Gandhoke, Gurpreet S; Bolinger, Bryan D; Khattar, Nicolas K; Parry, Philip V; Chang, Yue-Fang; Okonkwo, David O; Kanter, Adam S
2017-06-01
Annual incidence of symptomatic adjacent level disease (ALD) following lumbar fusion surgery ranges from 0.6% to 3.9% per year. Sagittal malalignment may contribute to the development of ALD. To describe the relationship between pelvic incidence-lumbar lordosis (PI-LL) mismatch and the development of symptomatic ALD requiring revision surgery following single-level transforaminal lumbar interbody fusion for degenerative lumbar spondylosis and/or low-grade spondylolisthesis. All patients who underwent a single-level transforaminal lumbar interbody fusion at either L4/5 or L5/S1 between July 2006 and December 2012 were analyzed for pre- and postoperative spinopelvic parameters. Using univariate and logistic regression analysis, we compared the spinopelvic parameters of those patients who required revision surgery against those patients who did not develop symptomatic ALD. We calculated the predictive value of PI-LL mismatch. One hundred fifty-nine patients met the inclusion criteria. The results noted that, for a 1° increase in PI-LL mismatch (preop and postop), the odds of developing ALD requiring surgery increased by 1.3- and 1.4-fold, respectively, which were statistically significant increases. Based on our analysis, a PI-LL mismatch of >11° had a positive predictive value of 75% for the development of symptomatic ALD requiring revision surgery. A high PI-LL mismatch is strongly associated with the development of symptomatic ALD requiring revision lumbar spine surgery. The development of ALD may represent a global disease process as opposed to a focal condition. Spine surgeons may wish to consider assessment of spinopelvic parameters in the evaluation of degenerative lumbar spine pathology. Copyright © 2017 by the Congress of Neurological Surgeons
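Per-degree odds ratios from a logistic regression compound multiplicatively, so the odds change for any given increase in mismatch follows directly:

```python
def odds_multiplier(odds_ratio_per_degree, delta_degrees):
    """Multiplicative change in the odds of revision surgery for a given
    increase in PI-LL mismatch, from a per-degree logistic odds ratio."""
    return odds_ratio_per_degree ** delta_degrees

# Using the reported postoperative odds ratio of 1.4 per degree:
# a 5-degree increase in mismatch multiplies the odds by 1.4**5.
five_degree_multiplier = odds_multiplier(1.4, 5)
```

This illustrates why even modest mismatches accumulate quickly toward the >11° threshold the authors identify.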
Application of Artificial Neural Network to Optical Fluid Analyzer
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Nishida, Katsuhiko
1994-04-01
A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
Investigation of Parametric Influence on the Properties of Al6061-SiCp Composite
NASA Astrophysics Data System (ADS)
Adebisi, A. A.; Maleque, M. A.; Bello, K. A.
2017-03-01
The influence of process parameters in stir casting plays a major role in the development of aluminium reinforced silicon carbide particle (Al-SiCp) composites. This study aims to investigate the influence of process parameters on the wear and density properties of Al-SiCp composite produced using the stir casting technique. Experimental data are generated based on a four-factor, five-level central composite design of response surface methodology. Analysis of variance is utilized to confirm the adequacy and validity of the developed models considering the significant model terms. Optimization of the process parameters adequately predicts the Al-SiCp composite properties, with stirring speed as the most influential factor. The aim of the optimization process is to minimize wear and maximize density. The multiple objective optimization (MOO) achieved an optimal value of 14 wt% reinforcement fraction (RF), 460 rpm stirring speed (SS), 820 °C processing temperature (PTemp) and 150 s processing time (PT). At the optimum parametric combination, wear mass loss achieved a minimum of 1 x 10^-3 g and density a maximum value of 2.780 g/cm3, with a confidence and desirability level of 95.5%.
NASA Astrophysics Data System (ADS)
Gomez, Jamie; Nelson, Ruben; Kalu, Egwu E.; Weatherspoon, Mark H.; Zheng, Jim P.
2011-05-01
An equivalent circuit model (ECM) of a high-power Li-ion battery that accounts for both temperature and state of charge (SOC) effects known to influence battery performance is presented. Electrochemical impedance measurements of a commercial high-power Li-ion battery obtained in the temperature range 20 to 50 °C at various SOC values were used to develop a simple ECM, which was used in combination with a non-linear least squares fitting procedure that used thirteen parameters for the analysis of the Li-ion cell. The experimental results show that the solution and charge transfer resistances decreased with increasing cell operating temperature and decreasing SOC. On the other hand, the Warburg admittance increased with increasing temperature and decreasing SOC. The developed model correlations, which are capable of being used in process control algorithms, are presented for the observed impedance behavior with respect to temperature and SOC effects. The predicted model parameters for the impedance elements Rs, Rct and Y0 show low variance of 5% when compared to the experimental data and therefore indicate good statistical agreement of the correlation model with the actual experimental values.
Hematologic values of the endangered San Joaquin kit fox, Vulpes macrotis mutica
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCue, P.M.; O'Farrell, T.P.
1986-01-01
Between 1981 and 1982 a total of 102 blood samples was collected from 91 San Joaquin kit foxes, Vulpes macrotis mutica, on the US Department of Energy's Naval Petroleum Reserve No. 1 (Elk Hills), in western Kern County, California. The goal of the study was to establish normal blood parameters for this endangered species and to determine whether changes in them could be used to assess the possible effects of petroleum developments on foxes. Adult foxes had the following average hematological characteristics: RBC, 8.4 x 10^6 cells/µl; Hb, 14.9 g/dl; PCV, 46.9%; MCV, 56.4 fl; MCH, 18.2 pg; MCHC, 32.0 g/dl; and WBC, 6900/µl. None of the parameters differed significantly between the sexes. RBC, Hb, PCV, MCV, and MCHC varied as a function of age for puppies between three and six months of age. The highest values of MCV and MCH were obtained in summer, 1982, and the highest value of MCHC was obtained in winter, 1981-1982. These were the only parameters that appeared to change with season. None of the blood parameters appeared to be affected by petroleum developments. Hematological data for kit foxes, coyotes, and wolves confirmed a previously published observation that within mammalian families RBC is inversely correlated with body weight, and that MCV is directly correlated with body weight. It was speculated that it was an adaptive advantage for kit foxes, having a high weight-specific metabolic rate, to have evolved a high RBC and low MCV, allowing increased oxygen transport and exchange, while PCV was maintained relatively constant, avoiding hemoconcentration and increased blood viscosity. 33 refs., 1 fig., 6 tabs.
NASA Astrophysics Data System (ADS)
Hassani, B.; Atkinson, G. M.
2015-12-01
One of the most important issues in developing accurate ground-motion prediction equations (GMPEs) is the effective use of limited regional site information in developing a site effects model. In modern empirical GMPE models, site effects are usually characterized by simplified parameters that describe the overall near-surface effects on input ground-motion shaking. The most common site effects parameter is the time-averaged shear-wave velocity in the upper 30 m (VS30), which has been used in the Next Generation Attenuation-West (NGA-West) and NGA-East GMPEs, and is widely used in building code applications. For the NGA-East GMPE database, only 6% of the stations have measured VS30 values, while the rest have proxy-based VS30 values. Proxy-based VS30 values are derived from a weighted average of different proxies' estimates, such as topographic slope and surface geology proxies. For the proxy-based approaches, the uncertainty in the estimation of VS30 is significantly higher (~0.25 log10 units) than that for stations with measured VS30 (0.04 log10 units); this translates into error in site amplification and hence increased ground-motion variability. We introduce a new VS30 proxy as a function of the site fundamental frequency (fpeak) using the NGA-East database, and show that fpeak is a particularly effective proxy for sites in central and eastern North America. We first use horizontal-to-vertical spectral ratios (H/V) of 5%-damped pseudo spectral acceleration (PSA) to find the fpeak values for the recording stations. We develop an fpeak-based VS30 proxy by correlating the measured VS30 values with the corresponding fpeak values. The uncertainty of the VS30 estimate using the fpeak-based model is much lower (0.14 log10 units) than that for the proxy-based methods used in the NGA-East database (0.25 log10 units).
The results of this study can be used to recalculate the VS30 values more accurately for stations with known fpeak values (23% of the stations), and potentially reduce the overall variability of the developed NGA-East GMPE models.
Comparison of in situ uranium KD values with a laboratory determined surface complexation model
Curtis, G.P.; Fox, P.; Kohler, M.; Davis, J.A.
2004-01-01
Reactive solute transport simulations in groundwater require a large number of parameters to describe hydrologic and chemical reaction processes. Appropriate methods for determining chemical reaction parameters required for reactive solute transport simulations are still under investigation. This work compares U(VI) distribution coefficients (i.e., KD values) measured under field conditions with KD values calculated from a surface complexation model developed in the laboratory. Field studies were conducted in an alluvial aquifer at a former U mill tailings site near the town of Naturita, CO, USA, by suspending approximately 10 g samples of Naturita aquifer background sediments (NABS) in 17 wells (5.1 cm diameter) for periods of 3 to 15 months. Adsorbed U(VI) on these samples was determined by extraction with a pH 9.45 NaHCO3/Na2CO3 solution. In wells where the chemical conditions in groundwater were nearly constant, adsorbed U concentrations for samples taken after 3 months of exposure to groundwater were indistinguishable from samples taken after 15 months. Measured in situ KD values calculated from the measurements of adsorbed and dissolved U(VI) ranged from 0.50 to 10.6 mL/g, and the KD values decreased with increasing groundwater alkalinity, consistent with increased formation of soluble U(VI)-carbonate complexes at higher alkalinities. The in situ KD values were compared with KD values predicted from a surface complexation model (SCM) developed under laboratory conditions in a separate study. Good agreement between the predicted and measured in situ KD values was observed. The demonstration that the laboratory-derived SCM can predict U(VI) adsorption in the field provides a critical independent test of a submodel used in a reactive transport model. © 2004 Elsevier Ltd. All rights reserved.
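The in situ KD value described above is simply the ratio of adsorbed to dissolved concentration. A minimal sketch under that assumption (function name and unit choices are illustrative, not the authors' code):

```python
def kd_value(adsorbed_ug_per_g, dissolved_ug_per_ml):
    """Distribution coefficient KD (mL/g): ratio of adsorbed U(VI)
    (ug per g sediment) to dissolved U(VI) (ug per mL groundwater)."""
    return adsorbed_ug_per_g / dissolved_ug_per_ml

# Example: 2.0 ug/g adsorbed against 0.4 ug/mL dissolved gives KD = 5.0 mL/g,
# which falls inside the 0.50-10.6 mL/g range reported in the abstract.
```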
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres is proposed in this paper. The proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate its capability and reliability. The results show that the parameter values estimated by the proposed method are close to the assumed true parameter values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from the real field data.
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
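The reduction of the multi-criterion problem to a single criterion described above is a weighted sum of the three criteria. A minimal sketch of that aggregation (function name and default weights are assumptions, not taken from the paper):

```python
def combined_objective(conversion_sse, thermo_inconsistency,
                       entropy_inconsistency, weights=(1.0, 1.0, 1.0)):
    """Single-criterion objective formed by summing weighted values of the
    three criteria: conversion fit (sum of squared deviations),
    thermodynamic consistency, and consistency with entropy production.
    Smaller values indicate a better parameter set."""
    criteria = (conversion_sse, thermo_inconsistency, entropy_inconsistency)
    return sum(w * c for w, c in zip(weights, criteria))
```

In a parameter-estimation loop, candidate kinetic parameter sets would be ranked by this combined value rather than by the conversion criterion alone.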
NASA Technical Reports Server (NTRS)
Lovrich, T. N.; Schwartz, S. H.
1975-01-01
The dimensionless parameters associated with the thermal stratification and pressure history of a heated container of liquid and its vapor were examined. The Modified Grashof number, the Fourier number, and an Interface number were parameterized using a single test liquid, Freon 113. Cylindrical test tanks with spherical dome end caps were built. Blanket heaters covered the tanks and thermocouples monitored the temperatures of the liquid, the ullage, the tank walls, and the foam insulation encapsulating the tank. A centrifuge was used for the 6 inch tank to preserve the same scaling parameter values between it and the larger tanks. Tests were conducted over a range of Gr* values and the degree of scaling was checked by comparing the dimensionless pressures and temperatures for each scaled pair of tests. Results indicate that the bulk liquid temperature, the surface temperature of the liquid, and the tank pressure can be scaled with the three dimensionless parameters. Some deviation was, however, found in the detailed temperature profiles between the scaled pairs of tests.
Kinematic analysis of crank-cam mechanism of process equipment
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu
2018-03-01
This article discusses how to define the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allowed the definition of kinematic parameters of the mechanism, including crank displacements, angular velocities and acceleration, as well as driven link (rocker arm) angular speeds and acceleration. All calculations were performed using the Mathcad mathematical package. The results of the calculations are reported as numerical values.
Logistic regression for circular data
NASA Astrophysics Data System (ADS)
Al-Daffaie, Kadhem; Khan, Shahjahan
2017-05-01
This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
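A common way to realize the linear-circular approach is to embed the circular predictor through its cosine and sine, then fit an ordinary logistic model by Newton-Raphson, as the abstract describes. A sketch under that assumption (the authors' exact parameterization may differ):

```python
import numpy as np

def fit_circular_logistic(theta, y, iters=25):
    """Logistic regression with a circular predictor theta (radians),
    using the linear-circular embedding cos(theta), sin(theta).
    Parameters are estimated by maximum likelihood via Newton-Raphson."""
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    beta = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        grad = X.T @ (y - p)                  # score vector
        W = p * (1.0 - p)                     # IRLS weights
        hess = X.T @ (X * W[:, None])         # observed information matrix
        beta += np.linalg.solve(hess, grad)   # Newton-Raphson update
    return beta
```

At convergence the score vector is (numerically) zero, which is a convenient check that the maximum likelihood estimate has been reached.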
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. 
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
Carbon dioxide stripping in aquaculture -- part III: model verification
Colt, John; Watten, Barnaby; Pfeiffer, Tim
2012-01-01
Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas-phase enrichment remains to be determined. The one-parameter linear-regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared with the ASCE or fixed-C*CO2 assumptions.
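For reference, the 2-point method estimates KLa from two measured concentrations by assuming a first-order approach to the equilibrium concentration C*. A minimal sketch under that standard assumption (variable names are illustrative):

```python
import math

def kla_two_point(c_star, c1, t1, c2, t2):
    """Two-point estimate of the overall mass-transfer coefficient KLa
    (units 1/time), assuming c(t) decays exponentially toward the
    equilibrium concentration c_star: c(t) = c_star + (c0 - c_star)*exp(-KLa*t).
    For stripping, c1 > c2 > c_star."""
    return math.log((c1 - c_star) / (c2 - c_star)) / (t2 - t1)
```

Because only two samples are used, the estimate inherits any measurement error in c1 and c2 directly, which is consistent with the abstract's observation that the fit degrades at high KLaCO2.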
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models
NASA Technical Reports Server (NTRS)
Parke, F. I.
1981-01-01
Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and for effectively communicating information about the flow to others. The state-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.
Kurstjens, Rlm; de Wolf, Maf; Kleijnen, J; de Graaf, R; Wittens, Cha
2017-09-01
Objective: The aim of this study was to investigate the predictive value of haemodynamic parameters on the success of stenting or bypass surgery in patients with non-thrombotic or post-thrombotic deep venous obstruction. Methods: EMBASE, MEDLINE and trial registries were searched up to 5 February 2016. Studies needed to investigate stenting or bypass surgery in patients with post-thrombotic obstruction or stenting for non-thrombotic iliac vein compression. Haemodynamic data needed to be available with prognostic analysis for success of treatment. Two authors independently selected studies and extracted data, with risk of bias assessment using the Quality in Prognosis Studies tool. Results: Two studies using stenting and two using bypass surgery were included. Three investigated plethysmography, though results varied and confounding was not properly taken into account. Dorsal foot vein pressure and venous refill times appeared to be of influence in one study, though confounding by deep vein incompetence was likely. Another investigated femoral-central pressure gradients without finding statistical significance, though the sample size was small without details on statistical methodology. Reduced femoral inflow was found to be a predictor for stent stenosis or occlusion in one study, though patients also received additional surgery to improve stent inflow. Data on the prediction of haemodynamic parameters for stenting of non-thrombotic iliac vein compression were not available. Conclusions: Data on the predictive value of haemodynamic parameters for success of treatment in deep venous obstructive disease are scant and of poor quality. Plethysmography does not seem to be of value in predicting the outcome of stenting or bypass surgery in post-thrombotic disease. The relevance of pressure-related parameters is unclear. Reduced flow into the common femoral vein seems to be predictive for in-stent stenosis or occlusion.
Further research into the predictive effect of haemodynamic parameters is warranted and the possibility of developing new techniques that evaluate various haemodynamic aspects should be explored.
Chen, I-chun; Ma, Hwong-wen
2013-02-01
Brownfield redevelopment involves numerous uncertain financial risks associated with market demand and land value. To reduce the uncertainty of the specific impact of land value and social costs, this study develops small-scale risk maps to determine the relationship between population risk (PR) and damaged land value (DLV) to facilitate flexible land reutilisation plans. This study used the spatial variability of exposure parameters in each village to develop the contaminated site-specific risk maps. In view of the combination of risk and cost, the risk level that most affected land use was mainly 1.00×10⁻⁶ to 1.00×10⁻⁵ in this study area. Village 2 showed the potential for cost-effective conversion with contaminated land development. If the risk of the remediation target was set at 5.00×10⁻⁶, the DLV could be reduced by NT$15,005 million for the land developer. The land developer will consider the net benefit by quantifying the trade-off between the changes of land value and the cost of human health. In this study, small-scale risk maps can illuminate the economic incentive potential for contaminated site redevelopment through the adjustment of land value damage and human health risk. Copyright © 2012 Elsevier Ltd. All rights reserved.
An adaptive technique for a redundant-sensor navigation system.
NASA Technical Reports Server (NTRS)
Chien, T.-T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system on the basis of Wald's sequential analysis is developed using the concept of information value and information feedback. The developed system is easily implemented, and demonstrates a performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters such that the ambiguous identification system can be reduced to a set of disjoint simple hypotheses tests. By application of a technique of decoupled bias estimation in the compensation system the adaptive system can be operated without any complicated reorganization.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question that deserves study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, showing that ASMO is highly efficient for optimizing WRF model parameters.
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. The malaria diagnosis method using a microscope requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to be able to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance according to changes in two parameters, we determined the parameter values that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used a malaria-infected area and a threshold value used in binarization. The parameter values with the best classification performance were determined by fixing the cell threshold value at 128 and selecting the malaria threshold value (72) corresponding to the lowest error rate for detecting plasmodium-infected red blood cells.
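The two preprocessing steps named above (PCA grayscale conversion, then thresholding) can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the interpretation of the 128/72 thresholds as "darker than" cutoffs, and the helper names, are assumptions.

```python
import numpy as np

def pca_grayscale(rgb):
    """Project RGB pixels onto the first principal component to obtain a
    stain-robust grayscale image, rescaled to the range 0-255."""
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # 3x3 channel covariance
    w, v = np.linalg.eigh(cov)
    pc1 = v[:, np.argmax(w)]                    # leading eigenvector
    gray = centered @ pc1
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12) * 255
    return gray.reshape(rgb.shape[:2])

def infected_fraction(gray, cell_thresh=128, parasite_thresh=72):
    """Fraction of cell pixels whose intensity falls below the parasite
    threshold. Threshold values (128, 72) are taken from the abstract;
    the 'darker means stained' convention is an assumption."""
    cell = gray < cell_thresh          # darker pixels taken as cell body
    parasite = gray < parasite_thresh  # darkest pixels taken as parasite
    return parasite.sum() / max(cell.sum(), 1)
```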
Investigations of subcritical crack propagation of the Empress 2 all-ceramic system.
Mitov, Gergo; Lohbauer, Ulrich; Rabbo, Mohammad Abed; Petschelt, Anselm; Pospiech, Peter
2008-02-01
The mechanical properties and slow crack propagation of the all-ceramic system Empress 2 (Ivoclar Vivadent, Schaan, Liechtenstein), with its framework compound Empress 2 and the veneering compounds Empress 2 and Eris, were examined. For all materials, the fracture strength, Weibull parameters and elastic moduli were experimentally determined in a four-point bending test. For the components of the Empress 2 system, the fracture toughness KIC was determined, and the crack propagation parameters n and A were determined by a dynamic fatigue method. Using these data, life data analysis was performed and lifetime diagrams were produced. The development of strength under static fatigue conditions was calculated for a period of 5 years. The newly developed veneering ceramic Eris showed a higher fracture strength (σ0 = 66.1 MPa) at a failure probability of PF = 63.2%, and higher crack growth parameters (n = 12.9), compared to the veneering ceramic Empress 2 (σ0 = 60.3 MPa). For the Empress 2 veneer the crack propagation parameter n could only be estimated (n = 9.5). This is reflected in the prognosis of long-term resistance presented in the SPT diagrams. For all materials investigated, the Weibull parameter m values (Empress 2 framework m = 4.6; Empress 2 veneer m = 7.9; Eris m = 6.9) were much lower than the minimum demanded by the literature (m = 15). The initial fracture strength value alone is not sufficient to characterize the mechanical resistance of ceramic materials, since their stressability is time-dependent. Knowledge of the crack propagation parameters n and A is of great importance when preclinically predicting the clinical suitability of dental ceramic materials. The use of SPT diagrams for lifetime calculation of ceramic materials is a valuable method for comparing different ceramics.
MAFsnp: A Multi-Sample Accurate and Flexible SNP Caller Using Next-Generation Sequencing Data
Hu, Jiyuan; Li, Tengfei; Xiu, Zidi; Zhang, Hong
2015-01-01
Most existing statistical methods developed for calling single nucleotide polymorphisms (SNPs) using next-generation sequencing (NGS) data are based on Bayesian frameworks, and there does not exist any SNP caller that produces p-values for calling SNPs in a frequentist framework. To fill in this gap, we develop a new method, MAFsnp, a Multiple-sample based Accurate and Flexible algorithm for calling SNPs with NGS data. MAFsnp is based on an estimated likelihood ratio test (eLRT) statistic. In practical situations, the involved parameter is very close to the boundary of the parameter space, so the standard large-sample theory is not suitable for evaluating the finite-sample distribution of the eLRT statistic. Observing that the distribution of the test statistic is a mixture of zero and a continuous part, we propose to model the test statistic with a novel two-parameter mixture distribution. Once the parameters in the mixture distribution are estimated, p-values can easily be calculated for detecting SNPs, and the multiple-testing corrected p-values can be used to control the false discovery rate (FDR) at any pre-specified level. With simulated data, MAFsnp is shown to have much better control of FDR than the existing SNP callers. Through application to two real datasets, MAFsnp is also shown to outperform the existing SNP callers in terms of calling accuracy. An R package “MAFsnp” implementing the new SNP caller is freely available at http://homepage.fudan.edu.cn/zhangh/softwares/. PMID:26309201
Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J
2016-03-01
Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four different locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the parameters to prediction were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly prediction, error percentages (i.e., MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, with the use of readily available data (i.e., latitude and longitude of the location) as inputs.
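The four fit statistics used in the evaluation above are standard; a minimal sketch of their computation (illustrative helper, not the authors' code):

```python
import numpy as np

def fit_metrics(obs, pred):
    """Correlation coefficient (r), Mean Absolute Bias Error (MABE),
    Root Mean Square Error (RMSE), and coefficient of determination (R2)
    between observed and predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]
    mabe = np.mean(np.abs(pred - obs))
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return {"r": r, "MABE": mabe, "RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}
```

Note that r measures linear association only, while R2 penalizes systematic bias, which is why a prediction can score r close to 1 yet have R2 noticeably below 1.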
Manfredini, A F; Malagoni, A M; Litmanen, H; Zhukovskaja, L; Jeannier, P; Dal Follo, D; Felisatti, M; Besseberg, A; Geistlinger, M; Bayer, P; Carrabre, J E
2011-03-01
Substances and methods used to increase blood oxygen transport and physical performance can be detected in the blood, but the screening of athletes to be tested remains a critical issue for the International Federations. This project, AR.I.E.T.T.A., aimed to develop software capable of analysing athletes' hematological and performance profiles to detect abnormal patterns. One hundred eighty athletes belonging to the International Biathlon Union gave written informed consent to have their hematological data, previously collected according to anti-doping rules, used to develop the AR.I.E.T.T.A. software. The software was developed with the following sections: 1) log-in; 2) data entry, where data are loaded, stored and grouped; 3) analysis, where data are analysed, validated scores are calculated, and parameters are simultaneously displayed as statistics, tables and graphs, and individual or subpopulation profiles; 4) screening, where an immediate evaluation of the risk score of the present sample and/or the athlete under study is obtained. The sample risk score, or AR.I.E.T.T.A. score, is calculated by a simple computational system combining different parameters (absolute values and intra-individual variations) considered concurrently. The AR.I.E.T.T.A. score is obtained as the sum of the deviation units derived from each parameter, considering the shift of the present value from the reference values, based on the number of standard deviations. AR.I.E.T.T.A. enables a quick evaluation of blood results, assisting surveillance programs and allowing the International Federations to perform timely targeted testing of athletes. Future studies aiming to validate the AR.I.E.T.T.A. score and improve its diagnostic accuracy will improve the system.
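The score construction described above (a sum of per-parameter deviations measured in reference standard deviations) can be sketched as follows. The function name and the example reference values are illustrative assumptions, not the published scoring rules:

```python
def deviation_score(values, ref_means, ref_sds):
    """Sum of deviation units: for each hematological parameter, the number
    of reference standard deviations the observed value lies from the
    reference mean. Larger totals flag samples for targeted testing."""
    return sum(abs(v - m) / s for v, m, s in zip(values, ref_means, ref_sds))

# Hypothetical example: Hb one SD above reference, reticulocytes two SDs
# above reference, giving a total of 3 deviation units.
score = deviation_score([16.0, 50.0], [15.0, 45.0], [1.0, 2.5])
```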
An innovative approach for determination of air quality health index.
Gorai, Amit Kumar; Kanchan; Upadhyay, Abhishek; Tuluri, Francis; Goyal, Pramila; Tchounwou, Paul B
2015-11-15
Fuzzy-analytical hierarchical process (F-AHP) can be extended to determine a fuzzy air quality health index (FAQHI) for deducing the health risk associated with local air pollution levels and subjective parameters. The present work aims at determining the FAQHI by considering five air pollutant parameters (SO2, NO2, O3, CO, and PM10) and three subjective parameters (population sensitivity, population density and location sensitivity). Each of the individual pollutants has varying impacts. Hence the combined health effects associated with the pollutants were estimated by aggregating the pollutants with different weights. Global weights for each evaluation alternative were determined using the fuzzy-AHP method. The developed model was applied to determine the FAQHI in Howrah City, India, from daily observed concentrations of air pollutants over the three-year period between 2009 and 2011. The FAQHI values obtained through this method in Howrah City range from 1 to 3. Since the permissible value of the FAQHI (as calculated for the NAAQS) for residential areas is 1.78, higher index values are of public health concern to the exposed individuals. During the period of study, the observed FAQHI values were found to be higher than 1.78 on most days in the months of January to March and October to December. However, the index values were below the recommended limit during the rest of the months. In conclusion, the FAQHI in Howrah City was above the permissible limit in winter months and within acceptable values in summer and rainy months. Diurnal variations of the FAQHI showed a similar trend during the three-year period of assessment. Copyright © 2015 Elsevier B.V. All rights reserved.
Rotor design for maneuver performance
NASA Technical Reports Server (NTRS)
Berry, John D.; Schrage, Daniel
1986-01-01
A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. Because of the inherent simplicity of the rotor performance model used, this method quickly identifies parameter values which result in minimum time. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.
Comparison of Development Test and Evaluation and Overall Program Estimate at Completion
2011-03-01
of the overall model and parameter. In addition to the Shapiro-Wilk test and the Cook's Distance overlay plot, we used the Breusch-Pagan test to... Transformed Model. Finally, we evaluated our log-transformed model using the Breusch-Pagan test. The results return a value of .51, thus confirming our... COMPARISON OF DEVELOPMENT TEST AND EVALUATION AND OVERALL
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, variability in the moisture of the incoming raw materials can impact both processability and drug product quality attributes. A statistical approach is developed that uses individual raw-material historical lots as the basis for calculating tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is model-independent: it uses available data to estimate parameters of interest that describe the population of blend moisture content values, without requiring knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require tabulated tolerance factors, which facilitates implementation in any spreadsheet program, such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
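The paper's own interval construction is not reproduced here; as a loosely related illustration of tolerance bounds that likewise need no tabulated factors, Wilks' distribution-free order-statistic result gives the confidence that the observed minimum-to-maximum range of n historical lots covers a proportion p of the lot population:

```python
def coverage_confidence(n, p):
    """Confidence that [min, max] of n i.i.d. observations covers at least
    a proportion p of the population (Wilks' order-statistic result)."""
    return 1 - n * p ** (n - 1) + (n - 1) * p ** n

# How many historical lots are needed so that the observed moisture range
# covers 95% of the lot population with at least 90% confidence?
n = 2
while coverage_confidence(n, 0.95) < 0.90:
    n += 1
```

The loop terminates at the classical sample-size answer for this (coverage, confidence) pair, with no table lookup required.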
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
We propose in this work an efficient way to evaluate measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into a standard uncertainty value at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the development procedure was sound. In addition, a statistical comparison between the standard uncertainties after the development stage and those of the validation step indicates that the estimated uncertainties are equivalent. The results clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins, and its suitability for routine use. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
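A Plackett-Burman robustness screen estimates each factor's main effect from very few runs. A self-contained sketch using the standard 8-run design and synthetic responses; the responses and the choice of "active" factors are hypothetical, not the paper's data:

```python
def pb8_design():
    """Standard 8-run Plackett-Burman design: cyclic shifts of the
    generator row, plus a final all-minus row (columns = up to 7 factors)."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(7)]
    rows.append([-1] * 7)
    return rows

def main_effects(design, y, n_factors):
    """Effect of factor j = mean(y at level +1) - mean(y at level -1)."""
    effects = []
    for j in range(n_factors):
        hi = [yi for row, yi in zip(design, y) if row[j] == 1]
        lo = [yi for row, yi in zip(design, y) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

design = pb8_design()
# Synthetic responses in which only factors 0 and 3 truly matter
y = [10 + 3 * row[0] - 2 * row[3] for row in design]
effects = main_effects(design, y, 6)  # six factors, as in the study
```

Because the design columns are orthogonal, each estimated effect is unbiased by the other factors; translating effects into standard uncertainties, as the paper does, is a further step omitted here.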
Development of a Kinetic Assay for Late Endosome Movement.
Esner, Milan; Meyenhofer, Felix; Kuhn, Michael; Thomas, Melissa; Kalaidzidis, Yannis; Bickle, Marc
2014-08-01
Automated imaging screens are performed mostly on fixed and stained samples to simplify the workflow and increase throughput. Some processes, such as the movement of cells and organelles or measuring membrane integrity and potential, can be measured only in living cells. Developing such assays to screen large compound or RNAi collections is challenging in many respects. Here, we develop a live-cell high-content assay for tracking endocytic organelles in medium throughput. We evaluate the added value of measuring kinetic parameters compared with measuring static parameters alone. We screened 2000 compounds in U-2 OS cells expressing Lamp1-GFP to label late endosomes. All hits have phenotypes in both static and kinetic parameters. However, we show that the kinetic parameters enable better discrimination of the mechanisms of action. Most of the compounds cause a decrease in endosome motility, but we identify several compounds that increase endosomal motility. In summary, we show that kinetic data help to better discriminate phenotypes and thereby obtain more subtle phenotypic clustering. © 2014 Society for Laboratory Automation and Screening.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analyses are of limited accuracy and reference value because their mathematical models are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. The sensitivity function time-history curves of seventeen parameters are then obtained, based on the state-vector time-history curves of the step-response characteristics. The maximum displacement-variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analyzed.
The sensitivity index values of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are then verified experimentally on a test platform of the hydraulic drive unit; the experimental research shows that the sensitivity analysis results obtained through simulation agree closely with the test results. This research reveals the sensitivity characteristics of each parameter of the hydraulic drive unit; the main and secondary performance-affecting parameters are identified under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
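The two sensitivity indexes named above can be computed directly from sampled step responses. A minimal sketch; the exact normalization used in the paper is an assumption here, and the displacement samples are invented:

```python
def sensitivity_indexes(y_base, y_pert):
    """Assumed definitions: the maximum displacement-variation percentage
    (relative to the peak baseline displacement) and the sum of absolute
    displacement variations over the sampling time."""
    dev = [abs(b - p) for b, p in zip(y_base, y_pert)]
    peak = max(abs(v) for v in y_base)
    max_pct = 100.0 * max(dev) / peak
    sum_abs = sum(dev)
    return max_pct, sum_abs

# Invented 2 mm step responses before/after perturbing one parameter
y_base = [0.0, 1.2, 1.9, 2.1, 2.0, 2.0]
y_pert = [0.0, 1.1, 1.8, 2.2, 2.05, 2.0]
mp, sa = sensitivity_indexes(y_base, y_pert)
```

Repeating this for each of the seventeen parameters, under each working condition, yields the histogram comparison described in the abstract.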
Perceiving while producing: Modeling the dynamics of phonological planning
Roon, Kevin D.; Gafos, Adamantios I.
2016-01-01
We offer a dynamical model of phonological planning that provides a formal instantiation of how the speech production and perception systems interact during online processing. The model is developed on the basis of evidence from an experimental task that requires concurrent use of both systems, the so-called response-distractor task in which speakers hear distractor syllables while they are preparing to produce required responses. The model formalizes how ongoing response planning is affected by perception and accounts for a range of results reported across previous studies. It does so by explicitly addressing the setting of parameter values in representations. The key unit of the model is that of the dynamic field, a distribution of activation over the range of values associated with each representational parameter. The setting of parameter values takes place by the attainment of a stable distribution of activation over the entire field, stable in the sense that it persists even after the response cue in the above experiments has been removed. This and other properties of representations that have been taken as axiomatic in previous work are derived by the dynamics of the proposed model. PMID:27440947
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
Talpur, M Younis; Kara, Huseyin; Sherazi, S T H; Ayyildiz, H Filiz; Topkafa, Mustafa; Arslan, Fatma Nur; Naz, Saba; Durmaz, Fatih; Sirajuddin
2014-11-01
Single bounce attenuated total reflectance (SB-ATR) Fourier transform infrared (FTIR) spectroscopy in conjunction with chemometrics was used for accurate determination of free fatty acid (FFA), peroxide value (PV), iodine value (IV), conjugated diene (CD) and conjugated triene (CT) of cottonseed oil (CSO) during potato chips frying. Partial least squares (PLS), stepwise multiple linear regression (SMLR), principal component regression (PCR) and simple Beer's law (SBL) were applied to develop the calibrations for simultaneous evaluation of the five stated parameters of cottonseed oil (CSO) during frying of French frozen potato chips at 170°C. Good regression coefficients (R(2)) were achieved for FFA, PV, IV, CD and CT, with values of >0.992 by PLS, SMLR, PCR, and SBL. Root mean square error of prediction (RMSEP) was found to be less than 1.95% for all determinations. Results of the study indicated that SB-ATR FTIR in combination with multivariate chemometrics could be used for accurate and simultaneous determination of different parameters during the frying process without using any toxic organic solvent. Copyright © 2014 Elsevier B.V. All rights reserved.
Compression for an effective management of telemetry data
NASA Technical Reports Server (NTRS)
Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.
1993-01-01
A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1) and an efficient system must allow users fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performance is well beyond the requirements and proves that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (a compression ratio of 1/13) and the access time for a typical query has been reduced from 975 seconds to 14 seconds.
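Run-length encoding, one of the methods mentioned, pays off exactly when a parameter is stable between decommutated samples. A minimal sketch (not the paper's implementation):

```python
def rle_encode(samples):
    """Run-length encode a sequence of telemetry values as (value, count)
    pairs; effective when a parameter holds steady between changes."""
    out = []
    for v in samples:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    """Lossless inverse: expand each (value, count) pair."""
    return [v for v, n in pairs for _ in range(n)]

raw = [20, 20, 20, 20, 21, 21, 20, 20, 20]   # e.g. a temperature reading
packed = rle_encode(raw)
assert rle_decode(packed) == raw             # lossless round trip
```

The stability analysis at decommutation time decides, per parameter, whether this scheme or a substitution/difference-based one will compress best.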
Parameter interdependence and uncertainty induced by lumping in a hydrologic model
NASA Astrophysics Data System (ADS)
Gallagher, Mark R.; Doherty, John
2007-05-01
Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potentials based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the category with maximum estimation probability, and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into how parameter uncertainty propagates when observation data are limited. To examine the developed model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potentials were compared with observed nitrate-N exceeding 0.5 mg/L, which indicates anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation processes.
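The two parameter-classification methods named above differ only in how the kriged category probabilities are turned into a rating. A sketch with hypothetical depth-to-water categories and DRASTIC-style ratings:

```python
def classify(prob_by_category, rating_by_category):
    """Two ways to characterize a DRASTIC parameter from indicator-kriging
    probabilities: (1) the rating of the most probable category, and
    (2) the probability-weighted expected rating."""
    max_cat = max(prob_by_category, key=prob_by_category.get)
    max_prob_rating = rating_by_category[max_cat]
    expected_rating = sum(p * rating_by_category[c]
                          for c, p in prob_by_category.items())
    return max_prob_rating, expected_rating

# Hypothetical depth-to-water categories with DRASTIC-style ratings
probs = {"0-1.5m": 0.2, "1.5-4.6m": 0.5, "4.6-9.1m": 0.3}
ratings = {"0-1.5m": 10, "1.5-4.6m": 9, "4.6-9.1m": 7}
mp, ev = classify(probs, ratings)
```

The expected-value rating blends all categories, so it retains the estimation uncertainty that the maximum-probability rating discards.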
NASA Astrophysics Data System (ADS)
Chen, Lin; Abbey, Craig K.; Boone, John M.
2013-03-01
Previous research has demonstrated that a parameter extracted from a power-function fit to the anatomical noise power spectrum, β, may be predictive of breast mass lesion detectability in x-ray based medical images of the breast. In this investigation, the value of β was compared with a number of other more widely used parameters, in order to determine the relationship between β and these other parameters. This study made use of breast CT data sets, acquired on two breast CT systems developed in our laboratory. A total of 185 breast data sets in 183 women were used, and only the unaffected breast was used (where no lesion was suspected). The anatomical noise power spectrum, computed from two-dimensional regions of interest (ROIs), was fit to a power function (NPS(f) = α·f^(−β)), and the exponent parameter (β) was determined using log/log linear regression. Breast density for each of the volume data sets was characterized in previous work. The breast CT data sets analyzed in this study were part of a previous study which evaluated receiver operating characteristic (ROC) curve performance using simulated spherical lesions and a pre-whitened matched filter computer observer. This ROC information was used to compute the detectability index as well as the sensitivity at 95% specificity. The fractal dimension was computed from the same ROIs which were used for the assessment of β. The value of β was compared to breast density, detectability index, sensitivity, and fractal dimension, and the slope of each of these relationships was tested for statistically significant difference from zero. A statistically significant non-zero slope was considered a positive association in this investigation. All comparisons between β and breast density, detectability index, sensitivity at 95% specificity, and fractal dimension demonstrated statistically significant associations, with p < 0.001 in all cases.
The value of β was also found to be associated with patient age and breast diameter, parameters both related to breast density. In all associations between other parameters, lower values of β were associated with increased breast cancer detection performance. Specifically, lower values of β were associated with lower breast density, higher detectability index, higher sensitivity, and lower fractal dimension values. While causality was not and probably cannot be demonstrated, the strong, statistically significant association between the β metric and the other more widely used parameters suggest that β may be considered as a surrogate measure for breast cancer detection performance. These findings are specific to breast parenchymal patterns and mass lesions only.
Effects of molecular and particle scatterings on the model parameter for remote-sensing reflectance.
Lee, ZhongPing; Carder, Kendall L; Du, KePing
2004-09-01
For optically deep waters, remote-sensing reflectance (r(rs)) is traditionally expressed as the ratio of the backscattering coefficient (b(b)) to the sum of absorption and backscattering coefficients (a + b(b)), multiplied by a model parameter (g, or the so-called f'/Q). Parameter g is further expressed as a function of b(b)/(a + b(b)) (or b(b)/a) to account for its variation due to multiple scattering. With such an approach, the same g value will be derived for different a and b(b) values that provide the same ratio. Because g is partially a measure of the angular distribution of upwelling light, and the angular distribution from molecular scattering is quite different from that of particle scattering, g values are expected to vary with different scattering distributions even if the b(b)/a ratios are the same. In this study, after numerically demonstrating the effects of molecular and particle scatterings on the values of g, an innovative r(rs) model is developed. This new model expresses r(rs) as two separate terms: one governed by the phase function of molecular scattering and one governed by the phase function of particle scattering, with a model parameter introduced for each term. In this way the phase-function effects from molecular and particle scatterings are explicitly separated and accounted for. This new model provides an analytical tool to understand and quantify the phase-function effects on r(rs), and a platform to calculate r(rs) spectra quickly and accurately, as required for remote-sensing applications.
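The structure of the two-term model can be illustrated as below. The g values here are placeholder constants only; in the paper, each term's model parameter is itself tied to the corresponding phase function:

```python
def rrs_two_term(a, bb_w, bb_p, g_w=0.113, g_p=0.197):
    """Sketch: separate molecular (water) and particle backscattering
    contributions to remote-sensing reflectance. g_w and g_p are
    illustrative constants, not the paper's fitted parameters."""
    bb = bb_w + bb_p
    return g_w * bb_w / (a + bb) + g_p * bb_p / (a + bb)

# The same total b_b/a ratio can arise from different molecular/particle
# splits, and under this model the reflectances then differ:
r_particle_rich = rrs_two_term(a=0.5, bb_w=0.002, bb_p=0.010)
r_molecule_rich = rrs_two_term(a=0.5, bb_w=0.010, bb_p=0.002)
```

This is exactly the behavior a single-g formulation cannot capture, since it would assign both cases the same reflectance.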
Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q
2016-07-26
To quantify the relationship and to demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms where FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal and distal to and at the aneurysm ostium as well as inside the aneurysm were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies between both structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results by either including all (n=22) or only non-redundant parameters (n=2, 3 and 4). Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Verification of the H2O Linelists with Theoretically Developed Tools
NASA Technical Reports Server (NTRS)
Ma, Qiancheng; Tipping, R.; Lavrentieva, N. N.; Dudaryonok, A. S.
2013-01-01
Two basic rules (i.e., the pair identity and smooth variation rules), resulting from the properties of the energy levels and wave functions of H2O states, govern how the spectroscopic parameters vary among the H2O lines within individually defined groups of lines. With these rules, for lines involving high-j states in the same groups, variations of all their spectroscopic parameters (i.e., the transition frequency, intensity, pressure-broadened half-width, pressure-induced shift, and temperature exponent) can be well monitored. Thus, the rules can serve as simple and effective tools to screen the H2O spectroscopic data listed in the HITRAN database and verify their accuracy. By checking violations of the rules occurring among the data within the individual groups, possible errors can be picked up, and possible missing lines whose intensities are above the linelist threshold can be identified. We have used these rules to check the accuracies of the spectroscopic parameters and the completeness of the linelists for several important H2O vibrational bands. Based on our results, the line frequencies in HITRAN 2008 are consistent. For the line intensity, we have found a substantial number of lines whose intensity values are questionable. With respect to the other parameters, many mistakes have been found. These claims are consistent with the well-known fact that values of these parameters in HITRAN carry larger uncertainties. Furthermore, supplements to the linelist, consisting of assignments and positions for the missing lines, can be developed from the screening results.
Gorbenko, M V; Popova, T N; Shul'gin, K K; Popov, S S; Agarkov, A A
2014-01-01
The influence of melaxen and valdoxan on biochemiluminescence parameters, aconitate hydratase activity and citrate level in rat heart and liver during the development of experimental hyperthyroidism has been investigated. Administration of these substances promoted a decrease in the biochemiluminescence parameters, which had been increased in rat tissues in response to the development of oxidative stress under hyperthyroidism. Aconitate hydratase activity and citrate concentration in rat liver and heart, which rise under these pathological conditions, changed toward control values after administration of the drugs correcting melatonin level. The results indicate a positive effect of valdoxan and melaxen on the oxidative status of the organism during the development of experimental hyperthyroidism, which is associated with the antioxidant action of melatonin.
Development of process parameters for 22 nm PMOS using 2-D analytical modeling
NASA Astrophysics Data System (ADS)
Maheran, A. H. Afifah; Menon, P. S.; Ahmad, I.; Shaari, S.; Faizah, Z. A. Noor
2015-04-01
Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and the integration of novel materials are necessary to sustain the performance trend. CMOS variability has become a very important concern in scaled technologies due to the limits of process control and to statistical variability related to the fundamental discreteness of charge and of materials. Minimizing transistor variation through technology optimization while ensuring robust product functionality and performance is the major issue. In this article, the study of process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a PMOS planar transistor at 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity material (high-k) and a metal gate is utilized as the gate structure, the materials being titanium dioxide (TiO2) and tungsten silicide (WSix). The L9 Taguchi orthogonal array is then used to analyze the device simulation, where the signal-to-noise ratio (SNR) results of the Smaller-the-Better (STB) scheme are studied through the percentage influence of the process parameters. The goal is a minimum ILEAK, where the maximum ILEAK value predicted by the International Technology Roadmap for Semiconductors (ITRS) 2011 should not exceed 100 nA/µm. Final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The resulting combination of process parameters gives an ILEAK mean value of 3.96821 nA/µm, which is far below the predicted limit.
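The smaller-the-better SNR used in the Taguchi analysis is straightforward to compute; the leakage-current replicates below are invented, not taken from the L9 simulations:

```python
from math import log10

def snr_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio:
    SNR = -10 * log10(mean(y_i^2)). A larger SNR means a smaller response."""
    return -10.0 * log10(sum(v * v for v in y) / len(y))

# Hypothetical leakage-current replicates (nA/um) for two L9 rows
row_a = [4.1, 3.9, 4.0]
row_b = [60.0, 75.0, 68.0]
snr_a = snr_smaller_the_better(row_a)
snr_b = snr_smaller_the_better(row_b)
# row_a (lower leakage) has the higher SNR and is preferred
```

Ranking the factor-level averages of such SNRs across the nine L9 rows is what yields the percentage-contribution analysis reported in the abstract.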
Development of process parameters for 22 nm PMOS using 2-D analytical modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheran, A. H. Afifah; Menon, P. S.; Shaari, S.
2015-04-24
Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and the integration of novel materials are necessary to sustain the performance trend. CMOS variability has become a very important concern in scaled technologies due to the limits of process control and to statistical variability related to the fundamental discreteness of charge and of materials. Minimizing transistor variation through technology optimization while ensuring robust product functionality and performance is the major issue. In this article, the study of process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a PMOS planar transistor at 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity material (high-k) and a metal gate is utilized as the gate structure, the materials being titanium dioxide (TiO2) and tungsten silicide (WSix). The L9 Taguchi orthogonal array is then used to analyze the device simulation, where the signal-to-noise ratio (SNR) results of the Smaller-the-Better (STB) scheme are studied through the percentage influence of the process parameters. The goal is a minimum ILEAK, where the maximum ILEAK value predicted by the International Technology Roadmap for Semiconductors (ITRS) 2011 should not exceed 100 nA/µm. Final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The resulting combination of process parameters gives an ILEAK mean value of 3.96821 nA/µm, which is far below the predicted limit.
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). 
Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
Instrument for the measurement and determination of chemical pulse column parameters
Marchant, Norman J.; Morgan, John P.
1990-01-01
An instrument for monitoring and measuring pneumatic driving-force pulse parameters applied to chemical-separation pulse columns obtains real-time pulse frequency and root-mean-square amplitude values, calculates column inch values, and compares these values against preset limits to alert column operators to variations of pulse column operational parameters beyond desired limits.
Indices of Neonatal Prematurity as Discriminators of Development in Middle Childhood
ERIC Educational Resources Information Center
Taub, Harvey B.; And Others
1977-01-01
The comparative value of various parameters of neonatal prematurity for differentiating intellective, scholastic, and social functioning in middle childhood was assessed for a sample of 38 prematurely born and 26 maturely born subjects aged 7 to 9.5 years. (Author/JMB)
Determining "small parameters" for quasi-steady state
NASA Astrophysics Data System (ADS)
Goeke, Alexandra; Walcher, Sebastian; Zerz, Eva
2015-08-01
For a parameter-dependent system of ordinary differential equations we present a systematic approach to the determination of parameter values near which singular perturbation scenarios (in the sense of Tikhonov and Fenichel) arise. We call these special values Tikhonov-Fenichel parameter values. The principal application we intend is to equations that describe chemical reactions, in the context of quasi-steady state (or partial equilibrium) settings. Such equations have rational (or even polynomial) right-hand side. We determine the structure of the set of Tikhonov-Fenichel parameter values as a semi-algebraic set, and present an algorithmic approach to their explicit determination, using Groebner bases. Examples and applications (which include the irreversible and reversible Michaelis-Menten systems) illustrate that the approach is rather easy to implement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Liu, S; Kalet, A
Purpose: The purpose of this work was to investigate the ability of a machine-learning based probabilistic approach to detect radiotherapy treatment plan anomalies given initial disease class information. Methods: In total we obtained 1112 unique treatment plans with five plan parameters and disease information from a Mosaiq treatment management system database for use in the study. The plan parameters include prescription dose, fractions, fields, modality and techniques. The disease information includes disease site, and T, M and N disease stages. A Bayesian network method was employed to model the probabilistic relationships between tumor disease information, plan parameters and an anomaly flag. A Bayesian learning method with a Dirichlet prior was used to learn the joint probabilities between dependent variables in error-free plan data and data with artificially induced anomalies. In the study, we randomly sampled data with anomalies in a specified anomaly space. We tested the approach with three groups of plan anomalies: improper concurrence of values of all five plan parameters, improper concurrence of values of any two out of five parameters, and all single plan parameter value anomalies. In total, 16 types of plan anomalies were covered by the study. For each type, we trained an individual Bayesian network. Results: We found that the true positive rate (recall) and positive predictive value (precision) to detect concurrence anomalies of five plan parameters in new patient cases were 94.45±0.26% and 93.76±0.39%, respectively. To detect the other 15 types of plan anomalies, the average recall and precision were 93.61±2.57% and 93.78±3.54%, respectively. The computation time to detect each type of plan anomaly in a new plan is ∼0.08 seconds. Conclusion: The proposed method for treatment plan anomaly detection was found effective in the initial tests.
The results suggest that this type of model could be applied to develop plan anomaly detection tools to assist manual and automated plan checks. The senior author received research grants from ViewRay Inc. and Varian Medical System.
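The Dirichlet-smoothed Bayesian learning step described in this abstract can be illustrated with a deliberately simplified sketch: plain conditional counting over one parent variable rather than a full Bayesian network. All names, the smoothing constant alpha, and the category count k are hypothetical choices for the example, not values from the study.

```python
from collections import Counter

def train_counts(plans):
    # plans: iterable of (disease_site, plan_params) pairs, plan_params hashable
    counts, site_totals = Counter(), Counter()
    for site, params in plans:
        counts[(site, params)] += 1
        site_totals[site] += 1
    return counts, site_totals

def anomaly_score(site, params, counts, site_totals, alpha=1.0, k=10):
    # Dirichlet-smoothed estimate of P(params | site); k is the assumed
    # number of distinct parameter combinations (a hypothetical constant)
    p = (counts[(site, params)] + alpha) / (site_totals[site] + alpha * k)
    return 1.0 - p   # higher score = more unusual for this disease site
```

A combination never seen for a given disease site then scores close to 1, which is the intuition behind flagging improper concurrences of plan parameter values.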
Viking Orbiter 1975 articulation control subsystem design analysis
NASA Technical Reports Server (NTRS)
Horiuchi, H. H.; Vallas, L. J.
1973-01-01
The articulation control subsystem, developed for the Viking Orbiter 1975 spacecraft, is a digital, multiplexed, closed-loop servo system used to control the pointing and positioning of the science scan platform and the high-gain communication antenna, and to position the solar-energy controller louver blades for the thermal control of the propellant tanks. The development, design, and analysis of the subsystem is preliminary. The subsystem consists of a block-redundant control electronics multiplexed among eight control actuators. Each electronics block is capable of operating either individually or simultaneously with the second block. This provides the subsystem the capability of simultaneous two-actuator control or a single actuator control with the second block in a stand-by redundant mode. The result of the preliminary design and analysis indicates that the subsystem will perform satisfactorily in the Viking Orbiter 1975 mission. Some of the parameter values used, particularly those in the subsystem dynamics and the error estimates, are preliminary and the results will be updated as more accurate parameter values become available.
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides the basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
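The GA component of such a location scheme can be sketched in isolation. This is not the modified HYPO-71 algorithm: it assumes a homogeneous medium with straight rays and a constant velocity instead of two-point ray tracing, and every numerical choice (search bounds, population size, mutation schedule) is invented for the example.

```python
import math
import random

def travel_time(src, sta, v=6.0):
    # straight-ray travel time in a homogeneous medium (a simplification;
    # the paper's method uses two-point ray tracing in a layered model)
    return math.dist(src[:3], sta) / v

def rms_residual(src, stations, obs):
    # src = (x, y, z, origin_time); obs are observed arrival times
    res = [travel_time(src, s) + src[3] - t for s, t in zip(stations, obs)]
    return math.sqrt(sum(r * r for r in res) / len(res))

def ga_locate(stations, obs, gens=200, pop_size=60, seed=0):
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(-50, 50), rng.uniform(-50, 50),
                rng.uniform(0, 30), rng.uniform(-5, 5)]
    pop = [rand_ind() for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=lambda ind: rms_residual(ind, stations, obs))
        elite = pop[: pop_size // 4]              # truncation selection
        sigma = max(0.02, 0.5 * 0.98 ** g)        # annealed mutation scale
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(u + w) / 2 + rng.gauss(0, sigma) for u, w in zip(a, b)]
            child[2] = max(0.0, child[2])         # depth stays non-negative
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: rms_residual(ind, stations, obs))
```

On noise-free synthetic arrivals the search converges to a near-zero travel-time residual without any initial guess, which mirrors the abstract's point that the GA removes the dependence on starting models.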
NASA Astrophysics Data System (ADS)
Kumar, Pradeep; Dutta, B. K.; Chattopadhyay, J.
2017-04-01
Miniaturized specimens are used to determine mechanical properties of materials, such as yield stress, ultimate stress, fracture toughness etc. Use of such specimens is essential whenever a limited quantity of material is available for testing, as with aged/irradiated materials. The miniaturized small punch test (SPT) is a technique widely used to determine changes in the mechanical properties of materials. Various empirical correlations are proposed in the literature to determine the value of fracture toughness (JIC) using this technique. Biaxial fracture strain is determined using SPT tests, and this parameter is then used to determine JIC using available empirical correlations. The correlations between JIC and biaxial fracture strain quoted in the literature are based on experimental data acquired for a large number of materials. There are a number of such correlations available in the literature, which are generally not in agreement with each other. In the present work, an attempt has been made to determine the correlation between biaxial fracture strain (εqf) and crack initiation toughness (Ji) numerically. About one hundred materials were digitally generated by varying yield stress, ultimate stress, hardening coefficient and Gurson parameters. Each generated material set was then used to analyze an SPT specimen and a standard TPB specimen; analysis of the SPT specimen yielded the biaxial fracture strain (εqf), and analysis of the TPB specimen yielded the value of Ji. A graph was then plotted between these two parameters for all the digitally generated materials, and the best-fit straight line determines the correlation. It has also been observed that it is possible to have variation in Ji for the same value of biaxial fracture strain (εqf) within a limit; such variation in the value of Ji has also been ascertained using the graph. Experimental SPT data acquired earlier for three materials were then used to get Ji with the newly developed correlation.
A reasonable comparison of the calculated Ji with the values quoted in the literature confirmed the usefulness of the correlation.
Kaiser, W; Faber, T S; Findeis, M
1996-01-01
The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules. Hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program automatically develops these rule sets. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is defined as: disease = lim1,l < p1 ≤ lim1,u and lim2,l < p2 ≤ lim2,u and … and limn,l < pn ≤ limn,u. When defining the rule types, only the parameters (p1 ... pn) that are known as clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and ST-segment; duration [ms] of Q wave; frontal angle [degrees]) were used. This allowed for submitting the learned rule sets to an independent investigator for medical verification. It also allowed the creation of explanatory texts with the rules. These advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set and the following results were obtained: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings for MI are better than 98%; for LVH, better than 90%.
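A rule set of this general type reduces to interval tests joined by Boolean logic. The sketch below shows the mechanism only; the parameter names and limits are invented for illustration and are not clinical criteria.

```python
# Each rule maps a parameter name to an interval (lower, upper),
# interpreted as: lower < value <= upper.
def rule_fires(rule, params):
    return all(lo < params[p] <= hi for p, (lo, hi) in rule.items())

def detect(rule_set, params):
    # a disease is flagged if at least one of its rules is satisfied
    return any(rule_fires(rule, params) for rule in rule_set)

# a made-up two-rule set in the spirit of the paper (NOT clinical criteria)
anterior_mi_rules = [
    {"Q_dur_ms": (30.0, 200.0), "R_amp_mV": (-1.0, 0.3)},
    {"ST_amp_mV": (0.2, 5.0)},
]
```

Because each rule is a readable conjunction of interval tests on named clinical parameters, the learned set can be handed to an independent investigator for verification, which is the advantage over neural network weights that the abstract emphasizes.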
Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver
NASA Astrophysics Data System (ADS)
Kang, Ling; Zhou, Liwei
2018-02-01
The Muskingum model is an effective flood routing technology in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its estimated outflows were closer to the observed outflows than those of the other models.
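For orientation, the classic fixed-parameter nonlinear Muskingum model that such variable-parameter versions generalize can be sketched as a routing loop over the storage relation S = K[xI + (1-x)O]^m. The parameter values and the simple explicit time stepping below are illustrative choices, not the NVPNLMM itself.

```python
def muskingum_route(inflow, K, x, m, dt=1.0):
    # Nonlinear Muskingum storage: S = K * (x*I + (1 - x)*O)**m.
    # K, x, m are held fixed here; the paper's NVPNLMM lets them vary.
    S = K * inflow[0] ** m            # initial storage from steady state I = O
    outflow = []
    for I in inflow:
        # invert the storage relation for the outflow at this step
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        O = max(O, 0.0)
        outflow.append(O)
        S += dt * (I - O)             # continuity: dS/dt = I - O
    return outflow
```

Routing a flood pulse through this relation attenuates the peak, which is the behavior the evaluation criteria in such studies compare against observed outflows.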
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
Quantifying and predicting Drosophila larvae crawling phenotypes
NASA Astrophysics Data System (ADS)
Günther, Maximilian N.; Nettesheim, Guilherme; Shubeita, George T.
2016-06-01
The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly’s power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer’s disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design.
AAA gunner model based on observer theory. [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
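The identification step, least squares curve fitting driven by Gauss-Newton updates, can be sketched generically. The exponential model below is a toy stand-in chosen so the 2x2 normal equations can be solved in closed form; it is not the gunner model, and all starting values are invented for the example.

```python
import math

def gauss_newton(t, y, a0, b0, iters=30):
    # least-squares fit of y ~ a*exp(b*t) by Gauss-Newton iteration
    a, b = a0, b0
    for _ in range(iters):
        r = [yi - a * math.exp(b * ti) for ti, yi in zip(t, y)]
        # Jacobian of the residuals with respect to (a, b)
        J = [(-math.exp(b * ti), -a * ti * math.exp(b * ti)) for ti in t]
        # normal equations (J^T J) * delta = -J^T r, solved in closed form (2x2)
        g11 = sum(j0 * j0 for j0, _ in J)
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J)
        c1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        c2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        a += (c1 * g22 - c2 * g12) / det
        b += (g11 * c2 - g12 * c1) / det
    return a, b
```

With more parameters the same loop applies, with the normal equations solved by a general linear solver rather than the hand-expanded 2x2 form.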
NASA Astrophysics Data System (ADS)
Rohdjeß, H.; Albers, D.; Bisplinghoff, J.; Bollmann, R.; Büßer, K.; Diehl, O.; Dohrmann, F.; Engelhardt, H.-P.; Eversheim, P. D.; Gasthuber, M.; Greiff, J.; Groß, A.; Groß-Hardt, R.; Hinterberger, F.; Igelbrink, M.; Langkau, R.; Maier, R.; Mosel, F.; Müller, M.; Münstermann, M.; Prasuhn, D.; von Rossen, P.; Scheid, H.; Schirm, N.; Schwandt, F.; Scobel, W.; Trelle, H. J.; Wellinghausen, A.; Wiedmann, W.; Woller, K.; Ziegler, R.
2006-01-01
The EDDA-detector at the cooler-synchrotron COSY/Jülich has been operated with an internal CH2 fiber target to measure proton-proton elastic scattering differential cross-sections. For data analysis, knowledge of beam parameters, like position, width and angle, is indispensable. We have developed a method to obtain these values with high precision from the azimuthal and polar angles of the ejectiles only, by exploiting the coplanarity of the two final-state protons with the beam and the kinematic correlation. The formalism is described and results for beam parameters obtained during beam acceleration are given.
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
Successful size assessment of the Ciwidey-Patuha geothermal field required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for this field. Therefore, this paper applies an experimental design approach to geothermal numerical simulation of Ciwidey-Patuha to generate probabilistic resource assessment results. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, He; Lv, Hongliang; Guo, Hui, E-mail: hguan@stu.xidian.edu.cn
2015-11-21
Impact ionization affects the radio-frequency (RF) behavior of high-electron-mobility transistors (HEMTs), which have narrow-bandgap semiconductor channels, and this necessitates complex parameter extraction procedures for HEMT modeling. In this paper, an enhanced small-signal equivalent circuit model is developed to investigate the impact ionization, and an improved method is presented in detail for direct extraction of intrinsic parameters using two-step measurements in low-frequency and high-frequency regimes. The practicability of the enhanced model and the proposed direct parameter extraction method are verified by comparing the simulated S-parameters with published experimental data from an InAs/AlSb HEMT operating over a wide frequency range. The results demonstrate that the enhanced model with optimal intrinsic parameter values that were obtained by the direct extraction approach can effectively characterize the effects of impact ionization on the RF performance of HEMTs.
Classical spin glass system in external field with taking into account relaxation effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorkyan, A. S., E-mail: g_ashot@sci.am; Abajyan, H. G.
2013-08-15
We study the statistical properties of disordered spin systems under the influence of an external field, taking relaxation effects into account. The system is described by a spatial 1D Heisenberg spin-glass Hamiltonian. In addition, we suppose that interactions occur between nearest-neighboring spins and are random. Exact solutions defining the angular configuration of the spin at each node were obtained from the equations for the stationary points of the Hamiltonian and the corresponding conditions for a local energy minimum. On the basis of these recurrent solutions, an effective parallel algorithm is developed for simulating stable spin chains of arbitrary length. It is shown that, by means of an independent order of N^2 numerical simulations (where N is the number of spins in each chain), it is possible to generate an ensemble of spin chains that is completely ergodic, which is equivalent to full self-averaging of the spin chains' vector polarization. Distributions of different parameters (energy, average polarization by coordinates, and spin-spin interaction constant) of the unperturbed system are calculated. In particular, it is proved analytically and shown numerically that, for the Heisenberg nearest-neighbor Hamiltonian model, the distribution of spin-spin interaction constants, as opposed to the widely used Gauss-Edwards-Anderson distribution, satisfies a Levy alpha-stable distribution law. This distribution is a nonanalytic function and does not have a variance. We have also studied in detail the critical properties of an ensemble depending on the external field parameters (amplitude and frequency) and have shown that even at weak external fields the spin-glass system is strongly frustrated. It is shown that the frustrations have fractal behavior: they are self-similar and do not disappear as the area is rescaled to smaller sizes.
Numerical computation shows that the average polarization of the spin glass along different coordinates can take values that lead to catastrophes in the Clausius-Mossotti equation for the dielectric constant. In other words, for some values of the external field parameters, a critical phenomenon occurs in the system that is impossible to describe with the real-valued Heisenberg spin-glass Hamiltonian. To solve this problem, a complex-valued disordered Hamiltonian is used. Physically, this type of extension of the Hamiltonian makes it possible to account for relaxation effects occurring in the system under the influence of an external field. On the basis of the developed approach, an effective parallel algorithm is constructed for simulating the statistical parameters of a spin-glass system under the influence of an external field.
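The heavy-tailed, variance-free character of a Levy alpha-stable law can be checked empirically with the standard Chambers-Mallows-Stuck sampler. This is a generic textbook construction for the symmetric case, not code from the paper; the sample sizes and tail cutoff are arbitrary choices for the demonstration.

```python
import math
import random

def stable_sample(alpha, rng):
    # Chambers-Mallows-Stuck construction, symmetric (beta = 0) case
    U = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * U) / math.cos(U) ** (1.0 / alpha)
            * (math.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))

rng = random.Random(42)
n = 20000
heavy = [stable_sample(1.5, rng) for _ in range(n)]   # alpha < 2: heavy tails
gauss = [stable_sample(2.0, rng) for _ in range(n)]   # alpha = 2: Gaussian case

def tail_fraction(xs, cutoff=10.0):
    # empirical frequency of samples far out in the tail
    return sum(abs(x) > cutoff for x in xs) / len(xs)
```

For alpha = 1.5 a substantial fraction of samples lies beyond any fixed cutoff (the tail decays like a power law, so the variance diverges), while the alpha = 2 case behaves like a Gaussian with essentially no mass that far out.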
Development of Methods for the Determination of pKa Values
Reijenga, Jetse; van Hoof, Arno; van Loon, Antonie; Teunissen, Bram
2013-01-01
The acid dissociation constant (pKa) is among the most frequently used physicochemical parameters, and its determination is of interest to a wide range of research fields. We present a brief introduction on the conceptual development of pKa as a physical parameter and its relationship to the concept of the pH of a solution. This is followed by a general summary of the historical development and current state of the techniques of pKa determination and an attempt to develop insight into future developments. Fourteen methods of determining the acid dissociation constant are placed in context and are critically evaluated to make a fair comparison and to determine their applications in modern chemistry. Additionally, we have studied these techniques in light of present trends in science and technology and attempt to determine how these trends might affect future developments in the field. PMID:23997574
Parameters of Transportation of Tailings of Metals Lixiviating
NASA Astrophysics Data System (ADS)
Golik, Vladimir; Dmitrak, Yury
2017-11-01
The article shows that the change in the situation in the metals market with a steady increase in production volumes is intensified against the tendency of the transition of mining production from underground mining to underground mining for a certain group of ores. The possibility of a non-waste metals extraction from not only standard, but also from substandard raw materials, is currently provided only by technology with the lixiviating of metals from developing ores. The regular dependences of the magnitude of hydraulic resistances on the hydro-mixture velocity and its density are determined. The correct values of the experimental data convergence with the calculated values of these parameters are obtained. It is shown that the optimization of the transportation parameters of lixiviating tailings allows reducing the level of chemically dangerous pollution of the environment by leachate products. The direction of obtaining the ecological and technological effect from the use of simultaneously environmental and resource-saving technology for the extraction of the disclosed metals is indicated.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view during behavior imitated from observation of the behavior demonstrated by the caregiver, based on minimizing the estimation error of the reward during the behavior. From this view, this paper shows a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
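The TD-error-driven update that the paper reuses for view estimation is, in its simplest tabular form, the standard TD(0) rule. This is a generic sketch of that rule, not the authors' implementation, and the learning rate and discount are arbitrary example values.

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    # one TD(0) step: the state-value table V is nudged by the TD error,
    # the same error signal the paper uses to update view-estimation parameters
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error
```

On a two-state chain (0 -> 1 -> terminal, with reward 1 on the final step) repeated updates drive V[1] toward 1 and V[0] toward gamma * V[1] = 0.9, and the TD error shrinks toward zero, which is the convergence signal the method relies on.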
NASA Astrophysics Data System (ADS)
Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.
2008-05-01
A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of control output values to predict a yield, thereby vindicating the applicability of linguistic fuzzy logic system in siting productive water boreholes in a basement complex.
The review of dynamic monitoring technology for crop growth
NASA Astrophysics Data System (ADS)
Zhang, Hong-wei; Chen, Huai-liang; Zou, Chun-hui; Yu, Wei-dong
2010-10-01
In this paper, crop growth monitoring methods are described in detail. The crop growth models, including the Netherlands Wageningen model system, the United States GOSSYM and CERES models, the Australian APSIM model and the CCSODS model system in China, are introduced with a focus on their mechanistic theories, applications, etc. The methods and applications of remote sensing monitoring based on leaf area index (LAI) and biomass, proposed by different scholars at home and abroad, are highly stressed in the paper. Monitoring methods that couple remote sensing with crop growth models are discussed at length, including the "forcing" method, which uses remotely retrieved state parameters as inputs to the crop growth model to enhance the accuracy of its dynamic simulation, and the "assimilation" method, which reduces the difference between remotely retrieved values and the values simulated by the crop growth model in order to estimate initial or parameter values and thus increase simulation accuracy. Finally, development trends for monitoring methods are proposed based on the advantages and shortcomings of previous studies; the combination of remote sensing (with moderate-resolution data from FY-3A, MODIS, etc.), crop growth models, "3S" systems and in situ observation will be the main approach for refining dynamic monitoring and quantitative assessment techniques for crop growth in the future.
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and non-stationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC) which utilizes the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach and has the advantage of simplicity, speed of calculation and convergence over conventional MCMC. NEVA also offers the confidence interval and uncertainty bounds of estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate analysis of extremes in geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen
Robertson, Dale M.; Imberger, Jorg
1994-01-01
Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu
We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
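The rate-equation approach described above can be sketched with a deliberately simplified linear chain (all rate constants below are hypothetical, and the actual model and its stochastic ensemble solver are far more elaborate): healthy cells become infected, and infected cells are removed by apoptosis or necrosis.

```python
def simulate_infection(n0=1.0, k_inf=0.5, k_ap=0.3, k_nec=0.1,
                       t_end=24.0, dt=1e-3):
    """Toy rate-equation model of infection progression in cell culture.
    Healthy cells (h) become infected (i) at rate k_inf; infected cells
    die by apoptosis (a, rate k_ap) or necrosis (d, rate k_nec).
    Integrated with forward Euler; returns the final fractions."""
    h, i, a, d = n0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        dh = -k_inf * h
        di = k_inf * h - (k_ap + k_nec) * i
        da = k_ap * i
        dd = k_nec * i
        h += dh * dt
        i += di * dt
        a += da * dt
        d += dd * dt
        t += dt
    return h, i, a, d
```

Because the chain is linear, total cell count is conserved exactly, which makes a convenient sanity check; a genuinely stiff system of the kind the paper treats would call for an implicit or stochastic solver instead of forward Euler.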
Lu, Dalei; Cai, Xuemei; Yan, Fabao; Sun, Xuli; Wang, Xin; Lu, Weiping
2014-05-01
Waxy maize is grown in South China, where high temperatures frequently prevail. The effect of high-temperature stress on grain development of waxy maize is not known. High temperature decreased the grain fresh weight and volume, and lowered the grain dry weight and water content after 22 days after pollination (DAP). Plants exposed to high temperature had low starch content, and high protein and soluble sugar contents at maturity. Starch iodine binding capacity and granule size were increased by heat stress at all grain-filling stages. The former parameter decreased, while the latter parameter increased gradually with grain development. High temperature increased the peak and breakdown viscosity before 30 DAP, but the value decreased at maturity. Pasting and gelatinization temperatures at different stages were increased by heat stress and gradually decreased with grain development under both high-temperature and control conditions. Gelatinization enthalpy increased initially but decreased after peaking at 22 DAP under both control and heat stress conditions. High temperature decreased gelatinization enthalpy after 10 DAP. Retrogradation percentage value increased with high temperature throughout grain development. High temperature after pollination changes the dynamics of grain filling of waxy maize, which may underlie the observed changes in its pasting and thermal properties. © 2013 Society of Chemical Industry.
1982-11-01
Program control parameters for the turning-region boundary-value-problem algorithm include: ALPHA, the maximum value of Qq in the present coding; BETA and BLOSS, parameters available for either system description or program control (these parameters are currently unused, so they are set equal to zero); and IGUESS, a parameter that controls the initial choices of first-shoot values along y = 0. For IGUESS = 1, discretized versions of P(r, 0), T(r, 0), and u(r, 0) must ...
NASA Astrophysics Data System (ADS)
Abramson, A.; Lazarovitch, N.; Adar, E.
2013-12-01
Groundwater is often the most or only feasible drinking water source in remote, low-resource areas. Yet the economics of its development have not been systematically outlined. We applied CBARWI (Cost-Benefit Analysis for Remote Water Improvements), a recently developed Decision Support System, to investigate the economic, physical and management factors related to the costs and benefits of non-networked groundwater supply in remote areas. Synthetic profiles of community water services (n = 17,962), defined across 14 parameters' values and ranges relevant to remote areas, were input into the decision framework, and the parameter effects on economic outcomes were investigated through regression analysis (Table 1). Several approaches were included for financing the improvements, after Abramson et al. (2011): willingness-to-pay (WTP), willingness-to-borrow (WTB) and willingness-to-work (WTW) in community irrigation ('water-for-work'). We found that low-cost groundwater development approaches are almost 7 times more cost-effective than conventional boreholes fitted with handpumps. The costs of electric, submersible borehole pumps are comparable only when providing expanded water supplies, and off-grid communities pay significantly more for such expansions. In our model, new source construction is less cost-effective than improvement of existing wells, but necessary for expanding access to isolated households. The financing approach significantly impacts the feasibility of demand-driven cost recovery; in our investigation, benefit exceeds cost in 16, 32 and 48% of water service configurations financed by WTP, WTB and WTW, respectively. Regressions of total cost (R2 = 0.723) and net benefit under WTW (R2 = 0.829), along with analysis of output distributions, indicate that parameters determining the profitability of irrigation are different from those determining costs and other measures of net benefit.
These findings suggest that the cost-benefit outcomes associated with groundwater-based water supply improvements vary considerably with many parameters. Thus, a wide variety of factors should be included to inform water development strategies. Reference: Abramson, A., et al. (2011), Willingness to pay, borrow and work for water service improvements in developing countries, Water Resour. Res., 47. Table 1: Descriptions, investigated values and regression coefficients of parameters included in our analysis. Rank of standardized β indicates relative importance; regression dependent variables are in ($ household-1) y-1. * Parameters relevant to the water-for-work program only. † p < .0001. ‡ p < .05.
Newly developed vaginal atrophy symptoms II and vaginal pH: a better correlation in vaginal atrophy?
Tuntiviriyapun, P; Panyakhamlerd, K; Triratanachat, S; Chatsuwan, T; Chaikittisilpa, S; Jaisamrarn, U; Taechakraichana, N
2015-04-01
The primary objective of this study was to evaluate the correlation among symptoms, signs, and the number of lactobacilli in postmenopausal vaginal atrophy. The secondary objective was to develop a new parameter to improve the correlation. A cross-sectional descriptive study. Naturally postmenopausal women aged 45-70 years with at least one clinical symptom of vaginal atrophy of moderate to severe intensity were included in this study. All of the objective parameters (vaginal atrophy score, vaginal pH, the number of lactobacilli, vaginal maturation index, and vaginal maturation value) were evaluated and correlated with vaginal atrophy symptoms. A new parameter of vaginal atrophy, vaginal atrophy symptoms II, was developed and consists of the two most bothersome symptoms (vaginal dryness and dyspareunia). Vaginal atrophy symptoms II was analyzed for correlation with the objective parameters. A total of 132 naturally postmenopausal women were recruited for analysis. Vaginal pH was the only objective parameter found to have a weak correlation with vaginal atrophy symptoms (r = 0.273, p = 0.002). The newly developed vaginal atrophy symptoms II parameter showed moderate correlation with vaginal pH (r = 0.356, p < 0.001) and a weak correlation with the vaginal atrophy score (r = 0.230, p < 0.001). History of sexual intercourse within 3 months was associated with a better correlation between vaginal atrophy symptoms and the objective parameters. Vaginal pH was significantly correlated with vaginal atrophy symptoms. The newly developed vaginal atrophy symptoms II was associated with a better correlation. The vaginal atrophy symptoms II and vaginal pH may be better tools for clinical evaluation and future study of the vaginal ecosystem.
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
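The two-stage least squares idea reduces, in the single-regressor single-instrument case, to the classical instrumental-variable ratio estimator. The following is only a minimal scalar sketch; the paper itself estimates full parameter matrices, which this does not attempt:

```python
def iv_estimate(x, y, z):
    """Instrumental-variable estimate of beta in y = beta * x + error,
    where x may be correlated with the error and z is an instrument
    (correlated with x, uncorrelated with the error).
    Equivalent to two-stage least squares in the scalar case:
    beta = cov(z, y) / cov(z, x)."""
    n = len(x)
    mx, my, mz = sum(x) / n, sum(y) / n, sum(z) / n
    szy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    szx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return szy / szx
```

Three-stage least squares additionally exploits cross-equation error correlations, which is where the larger standard errors reported above come from.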
Curry, B. Brandon
1999-01-01
Continental ostracode occurrences reflect salinity, solute composition, temperature, flow conditions, and other environmental properties of the water they inhabit. Their occurrences also reflect the variability of many of these environmental parameters. Environmental tolerance indices (ETIs) offer a new way to express the nature of an ostracode's environment. As defined herein, ETIs range in value from zero to one, and may be calculated for continuous and binary variables. For continuous variables such as salinity, the ETI is the ratio of the range of values of salinity tolerated by an ostracode to the total range of salinity values from a representative database. In this investigation, the database of continuous variables consists of information from 341 sites located throughout the United States. Binary ETIs indicate whether an environmental variable such as flowing water affects ostracode presence or absence. The binary database consists of information from 784 sites primarily from Illinois, USA. ETIs were developed in this investigation to interpret paleohydrological changes implied by fossil ostracode successions. ETI profiles may be cast in terms of a weighted average, or on presence/absence. The profiles express ostracode tolerance of environmental parameters such as salinity or currents. Tolerance of a wide range of values is taken to indicate shallow water because shallow environments are conducive to thermal variability, short-term water residence, and the development of currents from wind-driven waves.
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs was investigated through simulations and estimations by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
Subjective ranking of concert halls substantiated through orthogonal objective parameters.
Cerdá, Salvador; Giménez, Alicia; Cibrián, Rosa; Girón, Sara; Zamarreño, Teófilo
2015-02-01
This paper studies the global subjective assessment, obtained from mean values of the results of surveys addressed to members of the audience of live concerts in Spanish auditoriums, through the mean values of the three orthogonal objective parameters (Tmid, IACCE3, and LEV), expressed in just noticeable differences (JNDs), regarding the best-valued hall. Results show that a linear combination of the relative variations of orthogonal parameters can largely explain the overall perceived quality of the sample. However, the mean values of certain orthogonal parameters are not representative, which shows that an alternative approach to the problem is necessary. Various possibilities are proposed.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of the optimum values, and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems; isolation systems composed of inherently nonlinear isolation elements are the subject of a future study. Optimum isolation system parameters have previously been investigated in parametric studies; however, obtaining the best performance of a seismic isolation system requires a true optimization that takes the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
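A generic harmony search loop of the kind used for such parameter optimization can be sketched as follows. A toy objective stands in for the Matlab/Simulink time-history analysis, and all algorithm settings below are illustrative, not the study's values:

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=3000, seed=1):
    """Minimal harmony search minimizing obj over box bounds.
    Keeps a harmony memory of hms candidate solutions; improvises a
    new harmony per iteration by memory consideration (prob. hmcr),
    pitch adjustment (prob. par, bandwidth bw as a fraction of the
    range), or random selection; replaces the worst member if better."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [obj(h) for h in mem]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = mem[rng.randrange(hms)][j]        # memory consideration
                if rng.random() < par:
                    v += rng.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
            else:
                v = rng.uniform(lo, hi)               # random selection
            new.append(min(max(v, lo), hi))
        c = obj(new)
        worst = max(range(hms), key=lambda k: cost[k])
        if c < cost[worst]:
            mem[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda k: cost[k])
    return mem[best], cost[best]
```

In the study's setting, obj would run a time-history simulation of the isolated structure and return peak acceleration, with a penalty when the displacement limit is exceeded.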
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared with the reductions found from site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters, and how knowledge of parameter values is constrained by observations.
VIP: A knowledge-based design aid for the engineering of space systems
NASA Technical Reports Server (NTRS)
Lewis, Steven M.; Bellman, Kirstie L.
1990-01-01
The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.
Longitudinal control of aircraft dynamics based on optimization of PID parameters
NASA Astrophysics Data System (ADS)
Deepa, S. N.; Sudha, G.
2016-03-01
In recent years, many flight control systems and industrial applications have employed PID controllers to improve dynamic behavior. In this paper, a PID controller is developed to improve the stability and performance of a general aviation aircraft system. Designing the optimum PID controller parameters for a pitch control aircraft is important in expanding the flight safety envelope. A mathematical model is developed to describe the longitudinal pitch control of an aircraft, and the PID controller is designed based on this dynamic model. Different tuning methods, namely the Ziegler-Nichols (ZN) method, the modified Ziegler-Nichols method, Tyreus-Luyben tuning, and Astrom-Hagglund tuning, are employed. The time-domain specifications of the different tuning methods are compared to obtain the optimum parameter values. The results show that the PID controller tuned by Ziegler-Nichols gives better stability and performance for aircraft pitch control dynamics in all conditions. Future work should address obtaining optimum PID controller parameters using artificial intelligence techniques.
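The classic closed-loop Ziegler-Nichols rule referenced above maps the ultimate gain Ku and ultimate period Tu (found by pushing a proportional-only loop to sustained oscillation) to PID gains. A minimal sketch, with illustrative test values rather than the aircraft model's actual gains:

```python
def ziegler_nichols_pid(ku, tu):
    """Closed-loop Ziegler-Nichols PID tuning.
    ku: ultimate gain at which a P-only loop sustains oscillation.
    tu: period of that oscillation.
    Returns (Kp, Ki, Kd) for the parallel PID form
    u(t) = Kp*e + Ki*integral(e) + Kd*de/dt."""
    kp = 0.6 * ku
    ti = tu / 2.0       # integral time
    td = tu / 8.0       # derivative time
    return kp, kp / ti, kp * td
```

The modified ZN, Tyreus-Luyben, and Astrom-Hagglund methods compared in the paper differ mainly in these multipliers (Tyreus-Luyben, for instance, is more conservative to reduce overshoot).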
HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.
Schroeder, Mark J.; Perreault, Bill; Ewert, Daniel L.; Koenig, Steven C.
2004-07-01
A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ascii or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework toward which Good Laboratory Practice (GLP) compliance can be obtained. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
Real Time Correction of Aircraft Flight Configuration
NASA Technical Reports Server (NTRS)
Schipper, John F. (Inventor)
2009-01-01
Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter, having a value FP(t = t_p) at a time t = t_p, is likely to be able to recover to a reference flight parameter value FP(t'; ref), lying in a band of reference flight parameter values FP(t'; ref; CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS), so as to integrate the definition and measurement of spatial features with the calculation of their parameter values, presents considerable advantages. This paper presents the modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler). Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical of drought conditions.
Models for estimating photosynthesis parameters from in situ production profiles
NASA Astrophysics Data System (ADS)
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. 
The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work we argue that the choice of the primary production model should reflect the available data and these models should be data driven regarding parameter estimation.
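One common photosynthesis-irradiance function of the kind discussed (the exponential, saturating form) can be sketched as follows. The paper evaluates a whole suite of such functions; this is only one member, with illustrative parameter values in the test:

```python
import math

def pi_curve(irradiance, alpha, pm):
    """Exponential (saturating) photosynthesis-irradiance function:
    P(I) = pm * (1 - exp(-alpha * I / pm)).
    alpha: initial slope (carbon fixed per unit light at low light).
    pm: assimilation number (light-saturated production rate)."""
    return pm * (1.0 - math.exp(-alpha * irradiance / pm))
```

At low irradiance the curve is linear with slope alpha; at high irradiance it saturates at pm, which is exactly the two-parameter structure whose recovery from in situ production profiles the paper analyses.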
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
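The Lotka-Volterra competition dynamics underlying such a language competition model can be sketched numerically. All rates and competition coefficients below are illustrative, not values estimated by the proposed principles:

```python
def compete(x0, y0, r=(0.03, 0.02),
            a=((1e-4, 2e-4), (1e-4, 1e-4)),
            years=200.0, dt=0.1):
    """Lotka-Volterra competition sketch for two language populations.
    r[i]: intrinsic growth rate of language i's speaker population.
    a[i][j]: competitive impact of language j's speakers on language i.
    Forward-Euler integration; returns the final populations."""
    x, y = x0, y0
    for _ in range(int(years / dt)):
        dx = x * (r[0] - a[0][0] * x - a[0][1] * y)
        dy = y * (r[1] - a[1][0] * x - a[1][1] * y)
        x = max(x + dx * dt, 0.0)
        y = max(y + dy * dt, 0.0)
    return x, y
```

With these toy coefficients the interior equilibrium is unstable, so one language excludes the other over time, the kind of trajectory the estimated impact and inheritance-rate parameters are meant to reproduce from census and survey data.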
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
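The flight system estimated five deviation parameters with an extended Kalman filter coupled to a compact propulsion model. As a much-reduced stand-in (not the actual F-15 estimator), a scalar Kalman filter tracking a single constant "component deviation parameter" from noisy measurements illustrates the predict/update cycle:

```python
import numpy as np

# Scalar Kalman filter tracking a constant parameter modeled as a slow
# random walk. true_dev, the noise levels, and the sample count are all
# illustrative, not flight-test values.
rng = np.random.default_rng(0)
true_dev = 2.5
z = true_dev + 0.5 * rng.standard_normal(200)   # simulated sensor samples

x_hat, P = 0.0, 10.0        # initial estimate and error variance
Q, R = 1e-5, 0.25           # process / measurement noise variances
for zk in z:
    P += Q                  # time update (random-walk parameter model)
    K = P / (P + R)         # Kalman gain
    x_hat += K * (zk - x_hat)   # measurement update with the innovation
    P *= (1.0 - K)
```

The full estimator extends this to a nonlinear vector case by linearizing the propulsion model at each step.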
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
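One linear update step of the kind the method describes can be sketched as follows: a sensitivity matrix S relates small changes dp in the selected physical parameters to changes dy in the system-matrix residuals, and dp is recovered through the SVD pseudo-inverse. S and dp_true below are synthetic; in the paper, S comes from the airframe finite element model.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((20, 4))           # 20 residuals, 4 physical parameters
dp_true = np.array([0.10, -0.05, 0.20, 0.0])
dy = S @ dp_true                           # "measured" residual change

# Least-squares parameter update via singular value decomposition
U, s, Vt = np.linalg.svd(S, full_matrices=False)
dp_est = Vt.T @ ((U.T @ dy) / s)
```

In practice the singular values also reveal which parameter combinations are poorly determined, which is why SVD is preferred over a direct normal-equations solve.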
Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.
2015-01-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
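The three-compartment structure described above (alveolar deposition with competing clearance and interstitialization, then slow sequestration into lymph nodes) can be sketched as a simple ODE system. The rate constants below are placeholders, not the values calibrated to the miner data sets:

```python
def particle_retention(D, kT, kI, kL, dt=0.1, steps=1000):
    """Forward-Euler integration of a three-compartment retention model:
    deposition rate D into the alveolar burden A, clearance (kT) competing
    with translocation to the interstitium I (kI), and slow irreversible
    sequestration into lymph nodes L (kL)."""
    A = I = L = cleared = 0.0
    for _ in range(steps):
        dA = D - (kT + kI) * A
        dI = kI * A - kL * I
        dL = kL * I
        cleared += kT * A * dt
        A += dA * dt
        I += dI * dt
        L += dL * dt
    return A, I, L, cleared

A, I, L, cleared = particle_retention(D=1.0, kT=0.05, kI=0.01, kL=0.001)
```

Every deposited unit ends up in exactly one of the four pools, so total mass is conserved at each step; the Bayesian analysis then places distributions over the rate constants rather than point estimates.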
Nilsson, Ingemar; Polla, Magnus O
2012-10-01
Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
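A binned scoring and composite ranking scheme of the general kind described can be sketched as below. The bin edges, parameter names, and compounds are invented for illustration; the MC4R project used 11 project-specific parameters and cut-offs.

```python
import bisect

def binned_score(value, edges):
    """Score one parameter by the bin its value falls in
    (higher bin index = more desirable here, by assumption)."""
    return bisect.bisect_left(edges, value)

def composite_rank(compounds, edge_table):
    """Sum per-parameter bin scores and rank compounds by the total."""
    totals = {
        name: sum(binned_score(v, edge_table[p]) for p, v in params.items())
        for name, params in compounds.items()
    }
    return sorted(totals, key=totals.get, reverse=True), totals

# Hypothetical two-parameter example
edges = {"potency": [10.0, 100.0], "solubility": [1.0, 10.0]}
compounds = {
    "A": {"potency": 150.0, "solubility": 20.0},
    "B": {"potency": 5.0, "solubility": 0.5},
}
ranking, totals = composite_rank(compounds, edges)
```

Because the ranking is recomputed from the score table, new experimental data re-ranks compounds immediately, which is the behavior the abstract highlights.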
Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj
2018-01-01
Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of breast mass. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. To apply nuclear morphometry on cytological aspirates of breast cancer and evaluate its correlation with cytomorphological grading with derivation of suitable cutoff values between various grades. Descriptive cross-sectional hospital-based study. This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou-stained FNAC slides by NIS-Elements Advanced Research software (Ver 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity and sum brightness were found to be different between the three grades. Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low and high-risk groups.
Volumetric flow rate in simulations of microfluidic devices
NASA Astrophysics Data System (ADS)
Kovalčíková, Kristína; Slavík, Martin; Bachratá, Katarína; Bachratý, Hynek; Bohiniková, Alžbeta
2018-06-01
In this work, we examine the volumetric flow rate of microfluidic devices. The volumetric flow rate is a parameter which is necessary to correctly set up a simulation of a real device and to check the conformity of a simulation and a laboratory experiment [1]. Instead of defining the volumetric rate at the beginning as a simulation parameter, a parameter of external force is set. The proposed hypothesis is that for a fixed set of other parameters (topology, viscosity of the liquid, …) the volumetric flow rate is linearly dependent on external force in typical ranges of fluid velocity used in our simulations. To confirm this linearity hypothesis and to find numerical limits of this approach, we test several values of the external force parameter. The tests are designed for three different topologies of the simulation box and for various haematocrits. The topologies of the microfluidic devices are inspired by existing laboratory experiments [3-6]. The linear relationship between the external force and the volumetric flow rate is verified over orders of magnitude similar to the values obtained from laboratory experiments. Supported by the Slovak Research and Development Agency under the contract No. APVV-15-0751 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the contract No. VEGA 1/0643/17.
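The linearity hypothesis implies a one-parameter calibration: fit Q = a·f on a few simulation samples, then invert to choose the force that reproduces a target flow rate. The force/flow-rate numbers below are illustrative, not values from the cited simulations:

```python
import numpy as np

# Synthetic (force, flow-rate) samples consistent with a slope of ~2
f = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
Q = np.array([1.01, 2.02, 2.97, 4.05, 5.00])

a = float(f @ Q / (f @ f))      # least-squares slope through the origin
f_for_target = 3.0 / a          # external force needed for a target rate of 3.0
```

Checking that the fitted slope holds across the tested force range is exactly the verification the abstract describes.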
Chen, Jiajia; Pitchai, Krishnamoorthy; Birla, Sohan; Negahban, Mehrdad; Jones, David; Subbiah, Jeyamkondan
2014-10-01
A 3-dimensional finite-element model coupling electromagnetics and heat and mass transfer was developed to understand the interactions between the microwaves and fresh mashed potato in a 500 mL tray. The model was validated by performing heating of mashed potato from 25 °C on a rotating turntable in a microwave oven, rated at 1200 W, for 3 min. The simulated spatial temperature profiles on the top and bottom layer of the mashed potato showed similar hot and cold spots when compared to the thermal images acquired by an infrared camera. Transient temperature profiles at 6 locations collected by fiber-optic sensors showed good agreement with predicted results, with the root mean square error ranging from 1.6 to 11.7 °C. The predicted total moisture loss matched well with the observed result. Several input parameters, such as the evaporation rate constant, the intrinsic permeability of water and gas, and the diffusion coefficient of water and gas, are not readily available for mashed potato, and they cannot be easily measured experimentally. Reported values for raw potato were used as baseline values. A sensitivity analysis of these input parameters on the temperature profiles and the total moisture loss was evaluated by changing the baseline values to their 10% and 1000%. The sensitivity analysis showed that the gas diffusion coefficient, intrinsic water permeability, and the evaporation rate constant greatly influenced the predicted temperature and total moisture loss, while the intrinsic gas permeability and the water diffusion coefficient had little influence. This model can be used by the food product developers to understand microwave heating of food products spatially and temporally. This tool will allow food product developers to design food package systems that would heat more uniformly in various microwave ovens. 
The sensitivity analysis of this study will help us determine the most significant parameters that need to be measured accurately for reliable model prediction. © 2014 Institute of Food Technologists®
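The one-at-a-time scan used in the study (rescaling each baseline parameter to 10% and 1000% and observing the output change) can be sketched generically. The `model` below is a toy function, not the coupled electromagnetics / heat-and-mass-transfer solver, and the parameter names are placeholders:

```python
def sensitivity_scan(model, baseline, factors=(0.1, 10.0)):
    """One-at-a-time sensitivity: rescale each input parameter to 10% and
    1000% of its baseline value and record the change in model output."""
    ref = model(baseline)
    deltas = {}
    for name, value in baseline.items():
        for fac in factors:
            perturbed = dict(baseline, **{name: value * fac})
            deltas[(name, fac)] = model(perturbed) - ref
    return deltas

# Toy model: output depends strongly on k_evap and not at all on k_gas,
# mimicking the influential vs. non-influential split the study found.
toy = lambda p: 2.0 * p["k_evap"] + 0.0 * p["k_gas"]
deltas = sensitivity_scan(toy, {"k_evap": 1.0, "k_gas": 1.0})
```

Parameters whose deltas stay near zero across both factors, like the gas permeability and water diffusivity in the study, need not be measured precisely.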
Tripathi, Dharmendra; Yadav, Ashu; Bég, O Anwar
2017-01-01
Analytical solutions are developed for the electro-kinetic flow of a viscoelastic biological liquid in a finite length cylindrical capillary geometry under peristaltic waves. The Jefferys' non-Newtonian constitutive model is employed to characterize rheological properties of the fluid. The unsteady conservation equations for mass and momentum with electro-kinetic and Darcian porous medium drag force terms are reduced to a system of steady linearized conservation equations in an axisymmetric coordinate system. The long wavelength, creeping (low Reynolds number) and Debye-Hückel linearization approximations are utilized. The resulting boundary value problem is shown to be controlled by a number of parameters including the electro-osmotic parameter, Helmholtz-Smoluchowski velocity (maximum electro-osmotic velocity), and Jefferys' first parameter (ratio of relaxation and retardation time), wave amplitude. The influence of these parameters and also time on axial velocity, pressure difference, maximum volumetric flow rate and streamline distributions (for elucidating trapping phenomena) is visualized graphically and interpreted in detail. Pressure difference magnitudes are enhanced consistently with both increasing electro-osmotic parameter and Helmholtz-Smoluchowski velocity, whereas they are only elevated with increasing Jefferys' first parameter for positive volumetric flow rates. Maximum time averaged flow rate is enhanced with increasing electro-osmotic parameter, Helmholtz-Smoluchowski velocity and Jefferys' first parameter. Axial flow is accelerated in the core (plug) region of the conduit with greater values of electro-osmotic parameter and Helmholtz-Smoluchowski velocity whereas it is significantly decelerated with increasing Jefferys' first parameter. The simulations find applications in electro-osmotic (EO) transport processes in capillary physiology and also bio-inspired EO pump devices in chemical and aerospace engineering. Copyright © 2016 Elsevier Inc. 
All rights reserved.
Sadeghi-Naini, Ali; Vorauer, Eric; Chin, Lee; Falou, Omar; Tran, William T; Wright, Frances C; Gandhi, Sonal; Yaffe, Martin J; Czarnota, Gregory J
2015-11-01
Changes in textural characteristics of diffuse optical spectroscopic (DOS) functional images, accompanied by alterations in their mean values, are demonstrated here for the first time as early surrogates of ultimate treatment response in locally advanced breast cancer (LABC) patients receiving neoadjuvant chemotherapy (NAC). NAC, as a standard component of treatment for LABC patients, induces measurable heterogeneous changes in tumor metabolism which were evaluated using DOS-based metabolic maps. This study characterizes the inhomogeneous nature of response development, by determining alterations in textural properties of DOS images apparent at early stages of therapy, followed later by gross changes in mean values of these functional metabolic maps. Twelve LABC patients undergoing NAC were scanned before and at four times after treatment initiation, and tomographic DOS images were reconstructed at each time. Ultimate responses of patients were determined clinically and pathologically, based on a reduction in tumor size and assessment of residual tumor cellularity. The mean-value parameters and textural features were extracted from volumetric DOS images for several functional and metabolic parameters prior to the treatment initiation. Changes in these DOS-based biomarkers were also monitored over the course of treatment. The measured biomarkers were applied to differentiate patient responses noninvasively and compared to clinical and pathologic responses. Responding and nonresponding patients demonstrated different changes in DOS-based textural and mean-value parameters during chemotherapy. Whereas none of the biomarkers measured prior to the start of therapy demonstrated a significant difference between the two patient populations, statistically significant differences were observed at week one after treatment initiation using the relative change in contrast/homogeneity of seven functional maps (0.001
Scalable Online Network Modeling and Simulation
2005-08-01
Szymanski, Boleslaw; Kalyanaraman, Shivkumar; Sikdar, Biplab; Carothers, Christopher
...performance for a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature interactions
Spacecraft utility and the development of confidence intervals for criticality of anomalies
NASA Technical Reports Server (NTRS)
Williams, R. E.
1980-01-01
The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
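For concreteness, one widely used NTCP formulation is the Lyman-Kutcher-Burman (LKB) model; the abstract does not name the three models tested, so LKB may or may not be among them, and the parameter values below are illustrative only:

```python
from math import erf, sqrt

def lkb_ntcp(dvh, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH given as
    (dose, volume-fraction) pairs:
        gEUD = (sum_i v_i * d_i**(1/n)) ** n
        NTCP = Phi((gEUD - TD50) / (m * TD50))
    TD50, m, n are the fitted model parameters whose values shift when
    the dose calculation algorithm changes."""
    geud = sum(v * d ** (1.0 / n) for d, v in dvh) ** n
    t = (geud - TD50) / (m * TD50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

p_at_td50 = lkb_ntcp([(30.0, 1.0)], TD50=30.0, m=0.37, n=1.0)  # uniform dose = TD50
p_higher = lkb_ntcp([(40.0, 1.0)], TD50=30.0, m=0.37, n=1.0)
```

Because the DVH entering gEUD depends on the dose algorithm (pencil beam vs. collapsed cone), the same published (TD50, m, n) triple yields different NTCP values, which is exactly why the paper refits the parameters per algorithm.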
Sensitivity of NTCP parameter values against a change of dose calculation algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-15
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. 
Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
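The local sensitivity analysis mentioned can be sketched as central finite differences of the model output with respect to each parameter. The objective below is a toy function (with invented parameter names), not one of the thermodynamic transcription models:

```python
def local_sensitivity(f, params, eps=1e-6):
    """Central-difference local sensitivities df/dp_i around a point in
    parameter space. Parameters with near-zero sensitivity are the ones
    the text warns cannot be reliably estimated."""
    sens = {}
    for k, v in params.items():
        hi = dict(params, **{k: v + eps})
        lo = dict(params, **{k: v - eps})
        sens[k] = (f(hi) - f(lo)) / (2.0 * eps)
    return sens

# Toy objective with a "cooperativity" and an "efficiency" parameter
toy = lambda p: p["coop"] ** 2 + 3.0 * p["eff"]
sens = local_sensitivity(toy, {"coop": 2.0, "eff": 1.0})
```

Global methods extend this by sampling the whole parameter space instead of differentiating at a single point.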
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. 
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, and ET and runoff; and thereby identify a highly important source of DGVM uncertainty
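Simulated annealing itself is a generic global search: propose a random perturbation, always accept improvements, and accept worsening moves with probability exp(-dC/T) while the temperature T cools. The sketch below uses a toy bowl-shaped cost, not the BIOMAP spin-up/transient objective, and all tuning constants are illustrative:

```python
import math, random

def anneal(cost, x0, step=0.5, T0=1.0, cooling=0.999, iters=5000, seed=0):
    """Generic simulated annealing over a real-valued parameter vector."""
    rng = random.Random(seed)
    x = list(x0)
    c = cost(x)
    best_x, best_c = list(x), c
    T = T0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        cc = cost(cand)
        # Accept improvements outright; accept uphill moves with prob exp(-dC/T)
        if cc < c or rng.random() < math.exp(-(cc - c) / max(T, 1e-12)):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = list(x), c
        T *= cooling
    return best_x, best_c

# Toy 2-parameter cost with minimum at (3, 3)
best_x, best_c = anneal(lambda v: sum((vi - 3.0) ** 2 for vi in v), [0.0, 0.0])
```

Running such a search from several starting points is one way to expose the multiple acceptable parameter sets the abstract anticipates.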
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to an in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial-guess / adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, adjust the values of recharges and parameters and repeat the iterative procedures until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. 
This demonstrates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
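The EOF decomposition underlying the method is a singular value decomposition of the (centered) hydrograph matrix: rows are time steps, columns are wells, the right singular vectors are the spatial EOF patterns and the scaled left singular vectors are the expansion coefficients. The data below is a synthetic rank-1 seasonal signal plus noise, standing in for real storage hydrographs:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 120)
# Rank-1 seasonal signal across 8 hypothetical wells, plus small noise
data = np.outer(np.sin(2 * np.pi * t), rng.standard_normal(8))
data += 0.01 * rng.standard_normal(data.shape)

anom = data - data.mean(axis=0)            # remove temporal means
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                  # spatial patterns (EOFs)
amps = U * s                               # expansion coefficients over time
var_frac = s ** 2 / np.sum(s ** 2)         # variance explained per mode
```

Truncating to the few leading modes gives the low-dimensional "best EOF combination" in which the correction vectors are sought.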
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. 
This is a crucial step in developing multi-scale models which explain multi-scale data.
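The Monte Carlo approach to practical identifiability can be sketched on a toy model: repeatedly perturb synthetic data at increasing noise levels, refit, and check that the average relative errors (AREs) of the estimates stay modest and track the noise. The exponential-decay observable below is only a stand-in for the within-host viremia model:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b):
    """Toy observable (exponential decay); hypothetical, not the RVFV model."""
    return a * np.exp(-b * t)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 40)
true = np.array([2.0, 0.7])
ares = []                                  # average relative errors (%) per noise level
for sigma in (0.01, 0.05, 0.10):           # increasing measurement noise
    errs = []
    for _ in range(50):                    # Monte Carlo replicates
        y = decay(t, *true) + rng.normal(0.0, sigma, t.size)
        est, _ = curve_fit(decay, t, y, p0=(1.0, 1.0))
        errs.append(100.0 * np.abs((est - true) / true))
    ares.append(np.mean(errs, axis=0))
```

Parameters whose ARE blows up, or fails to shrink as noise decreases, are flagged as practically non-identifiable, which is the criterion applied to the epidemiological parameters in the paper.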
NASA Astrophysics Data System (ADS)
Kong, Xianyu; Liu, Yanfang; Jian, Huimin; Su, Rongguo; Yao, Qingzhen; Shi, Xiaoyong
2017-10-01
To realize potential cost savings in coastal monitoring programs and provide timely advice for marine management, there is an urgent need for efficient evaluation tools based on easily measured variables for the rapid and timely assessment of estuarine and offshore eutrophication. In this study, using parallel factor analysis (PARAFAC), principal component analysis (PCA), and discriminant function analysis (DFA) with the trophic index (TRIX) for reference, we developed an approach for rapidly assessing the eutrophication status of coastal waters using easy-to-measure parameters, including chromophoric dissolved organic matter (CDOM), fluorescence excitation-emission matrices, CDOM UV-Vis absorbance, and other water-quality parameters (turbidity, chlorophyll a, and dissolved oxygen). First, we decomposed CDOM excitation-emission matrices (EEMs) by PARAFAC to identify three components. Then, we applied PCA to simplify the complexity of the relationships between the water-quality parameters. Finally, we used the PCA score values as independent variables in DFA to develop a eutrophication assessment model. The developed model yielded classification accuracy rates of 97.1%, 80.5%, 90.3%, and 89.1% for good, moderate, and poor water qualities, and for the overall data sets, respectively. Our results suggest that these easy-to-measure parameters could be used to develop a simple approach for rapid in-situ assessment and monitoring of the eutrophication of estuarine and offshore areas.
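The PCA-then-discriminant chain of the assessment model can be sketched with SVD-based PCA and a nearest-centroid rule standing in for DFA. The two-class synthetic matrix below stands in for the CDOM/EEM-derived water-quality variables (the study used more classes and PARAFAC components):

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (30, 6)),     # hypothetical "good" waters
               rng.normal(3.0, 1.0, (30, 6))])    # hypothetical "poor" waters
y = np.array([0] * 30 + [1] * 30)

Xc = X - X.mean(axis=0)                    # center, then PCA via SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                     # leading two PC scores (DFA inputs)

# Minimal stand-in for DFA: classify by nearest class centroid in PC space
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])
d2 = ((scores[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = float((d2.argmin(axis=1) == y).mean())
```

The appeal of the pipeline is that only easy-to-measure inputs are needed at prediction time, once the projection and discriminant are trained.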
Gryko, Anna; Głowińska-Olszewska, Barbara; Płudowska, Katarzyna; Smithson, W Henry; Owłasiuk, Anna; Żelazowska-Rutkowska, Beata; Wojtkielewicz, Katarzyna; Milewski, Robert; Chlabicz, Sławomir
2017-01-01
In recent years, alterations in carbohydrate metabolism, including insulin resistance, have been considered risk factors in the development of hypertension and its complications at a young age. Hypertension is associated with significant cardiovascular morbidity and mortality. The onset of pathology responsible for the development of hypertension, as well as levels of biomarkers specific for early stages of atherosclerosis, are poorly understood. To compare a group of children whose parents have a history of hypertension (study group) with a group of children with normotensive parents (reference group), with consideration of typical risk factors for atherosclerosis, parameters of lipid and carbohydrate metabolism, anthropometric data and new biomarkers of early cardiovascular disease (hsCRP, adiponectin, sICAM-1). The study population consists of 84 children. Of these, 40 children (mean age 13.6±2.7 years) had a parental history of hypertension, and 44 aged 13.1±3.7 years were children of normotensive parents. Anthropometric measurements were taken, and measurements of blood pressure, lipid profile, glucose and insulin levels were carried out. The insulin resistance index (HOMA IR) was calculated. Levels of hsCRP, soluble cell adhesion molecules (sICAM) and adiponectin were measured. There were no statistically significant differences in anthropometric parameters (body mass, SDS BMI, skin folds) between groups. Values of systolic blood pressure were statistically significantly higher in the study group (Me 108 vs. 100 mmHg, p=0.031), as were glycaemia (Me 80 vs. 67 mg/dl, p<0.001) and insulinaemia levels (Me 8.89 vs. 5.34 µIU/ml, p=0.024). Higher, statistically significant values of HOMA IR were found in the study group (children of hypertensive parents) (Me 1.68 vs. 0.80 mmol/l × mU/l, p=0.007). Lower adiponectin levels (Me 13959.45 vs. 16822 ng/ml, p=0.020) were found in children with a family history of hypertension. 
No significant differences were found in the levels of sICAM, hsCRP, and parameters of lipid metabolism. Family history of hypertension is correlated with higher values of systolic blood pressure and higher values of parameters for carbohydrate metabolism in children. Hypertension in parents is a risk factor for cardiovascular disease in their children. © Polish Society for Pediatric Endocrinology and Diabetology.
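The HOMA-IR index used above has a standard closed form for conventional units: fasting glucose (mg/dl) × fasting insulin (µIU/ml) / 405. Applying it to the reported study-group medians gives ≈1.76, close to but not equal to the reported median HOMA-IR of 1.68, as expected, since the median of per-subject indices is not the index of the medians:

```python
def homa_ir(glucose_mg_dl, insulin_uIU_ml):
    """Standard HOMA-IR formula for conventional units; the 405
    denominator corresponds to glucose in mg/dl (use 22.5 for mmol/l)."""
    return glucose_mg_dl * insulin_uIU_ml / 405.0

study_group = homa_ir(80.0, 8.89)      # medians reported for the study group
reference_group = homa_ir(67.0, 5.34)  # medians for children of normotensives
```

The ordering of the two groups nonetheless matches the reported comparison (study group more insulin resistant).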
NASA Technical Reports Server (NTRS)
Palosz, B.; Grzanka, E.; Gierlotka, S.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H.-P.; Palosz, W.; Curreri, Peter A. (Technical Monitor)
2002-01-01
The applicability of standard methods of elaboration of powder diffraction data for determination of the structure of nano-size crystallites is analysed. Based on our theoretical calculations of powder diffraction data, we show that the assumption of an infinite crystal lattice is not justified for nanocrystals smaller than 20 nm in size. Application of conventional tools developed for the elaboration of powder diffraction data, like the Rietveld method, may lead to erroneous interpretation of the experimental results. An alternate evaluation of diffraction data of nanoparticles, based on the so-called 'apparent lattice parameter' (alp), is introduced. We assume a model of a nanocrystal having a grain core with a well-defined crystal structure, surrounded by a surface shell with an atomic structure similar to that of the core but under strain (compressive or tensile). The two structural components, the core and the shell, form essentially a composite crystal with interfering, inseparable diffraction properties. Because the structure of such a nanocrystal is not uniform, it defies the basic definition of an unambiguous crystallographic phase. Consequently, the set of lattice parameters used for characterization of simple crystal phases is insufficient for a proper description of the complex structure of nanocrystals. We developed a method of evaluation of powder diffraction data of nanocrystals which refers to the core-shell model and is based on the 'apparent lattice parameter' methodology. For a given diffraction pattern, the alp values are calculated for every individual Bragg reflection. For nanocrystals, the alp values depend on the diffraction vector Q. By modeling different atomic structures of nanocrystals and theoretically calculating the corresponding diffraction patterns using the Debye functions, we showed that alp-Q plots have characteristic shapes which can be used for evaluation of the atomic structure of the core-shell system. 
We show that, using a simple model of a nanocrystal with spherical shape and centro-symmetric strain at the surface shell, we obtain theoretical alp-Q values which match very well the alp-Q plots determined experimentally for SiC, GaN, and diamond nanopowders. The theoretical models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the surface shell. According to our calculations, the part of the diffraction pattern measured at relatively low diffraction vectors Q (below 10 Å⁻¹) provides information on the surface strain, while determination of the lattice parameters in the grain core requires measurements at large Q-values (above 15-20 Å⁻¹).
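The Debye-function calculations mentioned above sum interference terms over all atom pairs of a finite cluster, so no infinite-lattice assumption is needed. A minimal sketch with unit scattering factors and a hypothetical eight-atom cubic cluster (not the paper's core-shell models):

```python
import numpy as np

def debye_intensity(positions: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Powder diffraction intensity of a finite cluster via the Debye equation,
    I(Q) = sum_ij sin(Q r_ij) / (Q r_ij), with unit scattering factors (toy model)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))          # pairwise distance matrix
    intensity = np.zeros_like(q)
    for k, qv in enumerate(q):
        x = qv * r
        # np.sinc(x/pi) = sin(x)/x; it also handles the i == j (r = 0) terms, where the limit is 1
        intensity[k] = np.sinc(x / np.pi).sum()
    return intensity

# Tiny 2x2x2 cubic cluster with a hypothetical lattice constant a = 4.0 Angstrom
a = 4.0
pts = a * np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float)
q = np.linspace(0.5, 10.0, 200)               # diffraction vector magnitude, 1/Angstrom
pattern = debye_intensity(pts, q)
```

In the Q -> 0 limit the intensity approaches N² (here 64), a convenient sanity check on such calculations.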
Waszak, Małgorzata; Cieślik, Krystyna; Pietryga, Marek; Lewandowski, Jacek; Chuchracki, Marek; Nowak-Markwitz, Ewa; Bręborowicz, Grzegorz
2016-01-01
The aim of the study was to determine if, and to what extent, structural and functional changes of the secundines influence biometric parameters of neonates from dichorionic twin pregnancies. The study included neonates from dichorionic, diamniotic twin pregnancies, along with their secundines. Based on histopathological examination of the secundines, the mass and dimensions of the placenta, length and condition of the umbilical cord, chorionicity, focal lesions, and microscopic placental abnormalities were determined for 445 pairs of twins. Morphological development of examined twins was characterized on the basis of their six somatic traits, while birth status of the newborns was assessed based on their Apgar scores. Statistical analysis included Student t-tests, Snedecor's F-tests, post-hoc tests, non-parametric chi-squared Pearson's tests, and determination of Spearman coefficients of rank correlation. The lowest values of analyzed somatic traits were observed in twins who had placentas with velamentous or marginal cord insertion. Inflammatory lesions in the placenta and placental abruption turned out to have the greatest impact of all analyzed abnormalities of the secundines. Inflammatory lesions in the placenta were associated with lower values of biometric parameters and a greater likelihood of preterm birth. Neonates with a history of placental abruption were characterized by significantly lower birth weight and smaller chest circumference. Morphological changes in the secundines have a limited impact on biometric parameters of neonates from dichorionic twin pregnancies. In turn, functional changes exert a significant effect and more often contribute to impaired fetal development.
NASA Astrophysics Data System (ADS)
Muravev, Dmitri; Rakhmangulov, Aleksandr
2016-11-01
Currently, container shipping development is directly associated with an increase in warehouse areas for container storage. One of the most successful types of container terminal is an intermodal terminal called a dry port. The main pollution sources arising during the organization of intermodal transport are considered. A system of dry port parameters, recommended for the evaluation of different scenarios of seaport infrastructure development at the stage of strategic planning, is proposed in this paper. The authors have developed a method for determining the optimal values of the main dry port parameters by simulation modeling in the AnyLogic software. The dependencies obtained from the modeling experiments confirm that the selected dry port parameters are adequate for evaluating throughput and handling capacity scenarios at existing seaports at the strategic planning stage, and that a rationally located dry port can improve the ecological situation in a port city.
Halstead, Judith A; Kliman, Sabrina; Berheide, Catherine White; Chaucer, Alexander; Cock-Esteb, Alicea
2014-06-01
The relationships among land use patterns, geology, soil, and major solute concentrations in stream water for eight tributaries of the Kayaderosseras Creek watershed in Saratoga County, NY, were investigated using Pearson correlation coefficients and multivariate regression analysis. Sub-watersheds corresponding to each sampling site were delineated, and land use patterns were determined for each of the eight sub-watersheds using GIS. Four land use categories (urban development, agriculture, forests, and wetlands) constituted more than 99% of the land in the sub-watersheds. Eleven water chemistry parameters were highly and positively correlated with each other and with urban development. Multivariate regression models indicated urban development was the most powerful predictor for the same eleven parameters (conductivity, TN, TP, NO3(-), Cl(-), HCO3(-), SO4(2-), Na(+), K(+), Ca(2+), and Mg(2+)). Adjusted R(2) values, ranging from 19 to 91%, indicated that these models explained an average of 64% of the variance in these 11 parameters across the samples, and 70% when Mg(2+) was omitted. The more common R(2), ranging from 29 to 92%, averaged 68% for these 11 parameters and 72% when Mg(2+) was omitted. Water quality improved most with forest coverage in stream watersheds. The strong associations between water quality variables and urban development indicated that an urban source for these 11 water quality parameters at all eight sampling sites was likely, suggesting that urban stream syndrome can be detected even on a relatively small scale in a lightly developed area. Possible urban sources of Ca(2+) and HCO3(-) are suggested.
NASA Astrophysics Data System (ADS)
Norris, J. Q.
2016-12-01
Published 60 years ago, the Gutenberg-Richter law provides a universal frequency-magnitude distribution for natural and induced seismicity. The GR law is a two-parameter power law, with the b-value specifying the relative frequency of small and large events. For large catalogs of natural seismicity, the observed b-values are near one, while fracking-associated seismicity has observed b-values near two, indicating relatively fewer large events. We have developed a computationally inexpensive percolation model for fracking that allows us to generate large catalogs of fracking-associated seismicity. Using these catalogs, we show that different power-law fitting procedures produce different b-values for the same data set. This shows that care must be taken when determining and comparing b-values for fracking-associated seismicity.
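One widely used fitting procedure is the Aki/Utsu maximum-likelihood estimator; a sketch on a synthetic GR catalog follows (the estimator choice is illustrative, not necessarily one of those compared in the paper):

```python
import numpy as np

def b_value_ml(mags: np.ndarray, m_c: float) -> float:
    """Aki/Utsu maximum-likelihood b-value for magnitudes at or above the
    completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic Gutenberg-Richter catalog with a known b-value: above m_c,
# magnitudes are exponentially distributed with scale log10(e) / b.
rng = np.random.default_rng(0)
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=50_000)
b_est = b_value_ml(mags, m_c)   # close to b_true for a catalog this large
```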
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. 
It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
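For reference, the standard NRCS event equations that define the CN being back-calculated can be sketched as follows, together with a simple bisection-based back-calculation from a (P, Q) pair; this is a simplification of the HSPF-based procedures described above, not a reproduction of them.

```python
def scs_runoff(p_in: float, cn: float, ia_ratio: float = 0.2) -> float:
    """NRCS (SCS) direct runoff Q (inches) for storm depth P (inches) and Curve
    Number CN: S = 1000/CN - 10; Ia = ia_ratio * S;
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

def cn_from_runoff(p_in: float, q_in: float, ia_ratio: float = 0.2) -> float:
    """Back-calculate CN from an observed (P, Q) pair by bisection on CN
    (runoff is monotone increasing in CN for fixed P)."""
    lo, hi = 1.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if scs_runoff(p_in, mid, ia_ratio) < q_in:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The ia_ratio argument mirrors the initial abstraction ratio whose influence the study examined; the conventional default is 0.2.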
Development and Training of a Neural Controller for Hind Leg Walking in a Dog Robot
Hunt, Alexander; Szczecinski, Nicholas; Quinn, Roger
2017-01-01
Animals dynamically adapt to varying terrain and small perturbations with remarkable ease. These adaptations arise from complex interactions between the environment and biomechanical and neural components of the animal's body and nervous system. Research into mammalian locomotion has resulted in several neural and neuro-mechanical models, some of which have been tested in simulation, but few “synthetic nervous systems” have been implemented in physical hardware models of animal systems. One reason is that the implementation into a physical system is not straightforward. For example, it is difficult to make robotic actuators and sensors that model those in the animal. Therefore, even if the sensorimotor circuits were known in great detail, those parameters would not be applicable and new parameter values must be found for the network in the robotic model of the animal. This manuscript demonstrates an automatic method for setting parameter values in a synthetic nervous system composed of non-spiking leaky integrator neuron models. This method works by first using a model of the system to determine required motor neuron activations to produce stable walking. Parameters in the neural system are then tuned systematically such that it produces similar activations to the desired pattern determined using expected sensory feedback. We demonstrate that the developed method successfully produces adaptive locomotion in the rear legs of a dog-like robot actuated by artificial muscles. Furthermore, the results support the validity of current models of mammalian locomotion. This research will serve as a basis for testing more complex locomotion controllers and for testing specific sensory pathways and biomechanical designs. Additionally, the developed method can be used to automatically adapt the neural controller for different mechanical designs such that it could be used to control different robotic systems. PMID:28420977
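A non-spiking leaky integrator neuron of the kind named above can be sketched as a first-order filter of its input; the time constant, step size, and input used here are hypothetical, not the tuned parameters of the paper's synthetic nervous system.

```python
import numpy as np

def simulate_leaky_integrator(i_ext, tau=0.05, dt=0.001, u0=0.0):
    """Euler-integrate a non-spiking leaky integrator: tau * dU/dt = -U + I(t).
    i_ext is a sequence of input samples; returns the state U over time."""
    u = np.empty(len(i_ext))
    state = u0
    for k, i_k in enumerate(i_ext):
        state += dt / tau * (-state + i_k)   # forward-Euler update
        u[k] = state
    return u

# Step input: U relaxes toward the input value with time constant tau,
# the graded (non-spiking) response these controllers are built from.
u = simulate_leaky_integrator(np.ones(1000))
```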
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for answering this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
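One common CTP deconvolution scheme is truncated-SVD regularization of the tissue curve by the arterial input function; a hedged sketch on noiseless synthetic curves follows. The AIF shape, residue function, and truncation threshold are all illustrative assumptions, not the paper's cascaded-systems model.

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, lam=0.05):
    """Truncated-SVD deconvolution of a tissue enhancement curve by the arterial
    input function (AIF). Builds the lower-triangular convolution matrix A with
    A @ r = tissue, zeroes singular values below lam * s_max (the regularization
    step whose strength affects quantification accuracy), and returns the
    flow-scaled residue function CBF * R(t)."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1] * dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Noiseless synthetic check: a known exponential residue function is recovered
# (with noisy data a larger lam would be needed, at the cost of bias).
dt = 1.0
t = np.arange(40) * dt
aif = np.exp(-t / 3.0)               # hypothetical AIF shape
r_true = 0.6 * np.exp(-t / 8.0)      # hypothetical CBF * R(t)
tissue = np.convolve(aif, r_true)[:40] * dt
r_est = tsvd_deconvolve(aif, tissue, dt, lam=1e-6)
```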
NASA Technical Reports Server (NTRS)
Scalzo, F.
1983-01-01
Sensor redundancy management (SRM) requires a system which will detect failures and reconfigure avionics accordingly. A probability density function to determine false alarm rates was generated using an algorithmic approach. Microcomputer software was developed which prints out tables of values for the cumulative probability of being in the domain of failure; system reliability; and false alarm probability, given a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4
NASA Astrophysics Data System (ADS)
Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.
2013-12-01
Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. 
Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
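Latin hypercube sampling of the kind used in this study can be sketched as follows; the parameter names and ranges below are hypothetical stand-ins, not the expert-provided CCSM4 uncertainty ranges.

```python
import numpy as np

def latin_hypercube(n_samples: int, bounds: dict, rng=None) -> dict:
    """Latin hypercube sample: each parameter's range is split into n_samples
    equal strata, one point is drawn per stratum, and the strata of different
    parameters are paired via independent random permutations."""
    rng = np.random.default_rng(rng)
    out = {}
    for name, (lo, hi) in bounds.items():
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        out[name] = lo + (hi - lo) * rng.permutation(strata)
    return out

# 1,039 samples over two illustrative (hypothetical) parameter ranges
samples = latin_hypercube(
    1039, {"rhminl": (0.80, 0.99), "dust_emis_fact": (0.2, 2.0)}, rng=0
)
```

Because every 1/n-wide stratum of every parameter contains exactly one point, the projections onto each axis, and in practice the pairwise 2-D projections, cover the space far more evenly than plain random sampling at the same n.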
The estimation of parameter compaction values for pavement subgrade stabilized with lime
NASA Astrophysics Data System (ADS)
Lubis, A. S.; Muis, Z. A.; Simbolon, C. A.
2018-02-01
The type of soil material, field control, maintenance, and the availability of funds are several factors that must be considered in compaction of the pavement subgrade. Determining the compaction parameters in the laboratory requires considerable material, time, and funds, as well as reliable laboratory operators. If soil classification values could be used to estimate the compaction parameters of a subgrade material, it would save time, energy, materials, and cost in the execution of this work; it would also serve as a cross-check of the work done by technicians in the laboratory. The study aims to estimate the compaction parameter values, i.e. the maximum dry unit weight (γdmax) and optimum water content (Wopt), of a soil subgrade stabilized with lime. The laboratory soil mechanics tests comprised determination of the index properties (fines content and Liquid Limit/LL) and the Standard Compaction Test. Thirty soil samples with a Plasticity Index (PI) > 10% were prepared with an additional 3% lime. Using the Goswami equation, the compaction parameter values can be estimated as γdmax = -0.1686 log G + 1.8434 and Wopt = 2.9178 log G + 17.086. The validation calculation showed a significant positive correlation between the laboratory compaction parameter values and the estimated ones, with a 95% confidence interval, indicating a strong relationship.
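The estimation equations quoted above can be applied directly; a minimal sketch, assuming G is the soil group index and the logarithm is base 10 as in the Goswami formulation (units follow the source regressions; the example G value is hypothetical):

```python
import math

def estimate_compaction(group_index: float):
    """Estimate maximum dry unit weight and optimum water content (%) from the
    soil group index G via the regressions reported in the abstract:
        gamma_dmax = -0.1686 * log10(G) + 1.8434
        w_opt      =  2.9178 * log10(G) + 17.086
    Units follow the source regressions."""
    log_g = math.log10(group_index)
    gamma_dmax = -0.1686 * log_g + 1.8434
    w_opt = 2.9178 * log_g + 17.086
    return gamma_dmax, w_opt

gamma, w = estimate_compaction(10.0)  # hypothetical group index
```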
Reference values of clinical chemistry and hematology parameters in rhesus monkeys (Macaca mulatta).
Chen, Younan; Qin, Shengfang; Ding, Yang; Wei, Lingling; Zhang, Jie; Li, Hongxia; Bu, Hong; Lu, Yanrong; Cheng, Jingqiu
2009-01-01
Rhesus monkey models are valuable to the studies of human biology. Reference values for clinical chemistry and hematology parameters of rhesus monkeys are required for proper data interpretation. Whole blood was collected from 36 healthy Chinese rhesus monkeys (Macaca mulatta) of either sex, 3 to 5 yr old. Routine chemistry and hematology parameters, and some special coagulation parameters including thromboelastograph and activities of coagulation factors were tested. We presented here the baseline values of clinical chemistry and hematology parameters in normal Chinese rhesus monkeys. These data may provide valuable information for veterinarians and investigators using rhesus monkeys in experimental studies.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.
2013-12-01
We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength, and multiple-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have come into common use (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiative transfer calculation has been employed for nonlinear remote sensing problems in place of look-up-table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel and multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method is a combination of the MAP method (maximum a posteriori method; Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer calculation code, Rstar (Nakajima and Tanaka, 1986, 1988), solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine mode, sea salt, and dust particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area. Then we successively apply the retrieval method to all the sub-domains in the target area. 
We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain consisting of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 nm, the atmospheric condition of the US standard atmosphere, and several aerosol and ground surface conditions. The results of the experiment showed that the AOTs of fine mode and dust particles, the soot fraction, and the ground surface albedo at 674 nm are retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for the case of a dark surface, and within 0.06, 0.03, 0.04 and 0.10, respectively, for the case of a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, M.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A maximum likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. The rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
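For a linear measurement model with Gaussian noise, maximum-likelihood identification reduces to least squares; a toy sketch follows (the regressors and parameter values are hypothetical, and the paper's algorithm additionally tracks time-varying parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear load-cell model: response = H @ theta + noise, where the
# columns of H hold applied-load and inertial-load regressors and theta holds
# the model parameters to be identified.
n_samples, n_params = 10, 3
theta_true = np.array([2.0, -0.5, 1.2])
H = rng.normal(size=(n_samples, n_params))
y = H @ theta_true + rng.normal(0, 0.01, n_samples)

# With Gaussian measurement noise, the maximum-likelihood estimate of theta
# is the least-squares solution.
theta_ml, *_ = np.linalg.lstsq(H, y, rcond=None)
```

With ten informative samples and low noise the estimate is already close to the true parameters, loosely mirroring the 10-sample convergence reported above.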
Ramzan, M; Ullah, Naeem; Chung, Jae Dong; Lu, Dianchen; Farooq, Umer
2017-10-10
A mathematical model has been developed to examine magnetohydrodynamic micropolar nanofluid flow with buoyancy effects. The flow analysis is carried out in the presence of nonlinear thermal radiation and dual stratification. The impact of a binary chemical reaction with Arrhenius activation energy is also considered. Suitable transformations are employed to convert the nonlinear partial differential equations into highly nonlinear ordinary differential equations. The resulting nonlinear system of differential equations is solved with the differential equation solver in Maple, which uses a fourth-fifth order Runge-Kutta technique (RK45). To validate the obtained results, a comparison with a preceding article is also made. Evaluations are presented graphically for numerous prominent parameters versus the velocity, microrotation component, temperature, and concentration distributions. Tabulated numerical calculations of the Nusselt and Sherwood numbers, with well-argued discussions, are also presented. Our findings illustrate that the angular velocity component declines for opposing buoyancy forces and is enhanced for aiding buoyancy forces as the micropolar parameter is varied. It is also found that the concentration profile increases for higher values of the chemical reaction parameter, whereas it diminishes for growing values of the solutal stratification parameter.
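The RK45 solver named above is an adaptive Runge-Kutta scheme; a minimal fixed-step classical RK4 stand-in, applied to a toy nonlinear ODE with a known solution (y' = -y², y(0) = 1, so y(t) = 1/(1+t)), illustrates the stepping idea without reproducing the paper's boundary-value problem:

```python
def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta with fixed step size. RK45 variants
    (as in Maple's solver) add an embedded fifth-order stage for adaptive step
    control, but the staged-slope idea is the same."""
    h = (t1 - t0) / n_steps
    t, y = t0, list(y0)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Toy nonlinear ODE: y' = -y^2, y(0) = 1, exact solution y(1) = 0.5
y_end = rk4(lambda t, y: [-y[0] ** 2], [1.0], 0.0, 1.0, 100)
```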
Clinical tooth preparations and associated measuring methods: a systematic review.
Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J
2015-03-01
The geometries of tooth preparations are important features that aid in the retention and resistance of cemented complete crowns. The clinically relevant values and the methods used to measure them are not clear. The purpose of this systematic review was to retrieve, organize, and critically appraise studies measuring clinical tooth preparation parameters, specifically the methodology used to measure the preparation geometry. A database search was performed in Scopus, PubMed, and ScienceDirect, with an additional hand search, on December 5, 2013. The articles were screened with inclusion and exclusion criteria, and information regarding the total occlusal convergence (TOC) angle, margin design, and associated measuring methods was extracted. The values and associated measuring methods were tabulated. A total of 1006 publications were initially retrieved. After removing duplicates and filtering by exclusion and inclusion criteria, 983 articles were excluded. Twenty-three articles reported clinical tooth preparation values: twenty reported the TOC, 4 reported margin designs, 4 reported margin angles, and 3 reported the abutment height of preparations. A variety of methods were used to measure these parameters. TOC values appear to be the most important preparation parameter. Recommended TOC values have increased over the past 4 decades from an unachievable 2- to 5-degree taper to a more realistic 10 to 22 degrees. Recommended values are more likely to be achieved under experimental conditions if crown preparations are performed outside of the mouth. We recommend that a standardized measurement method based on cross sections of crown preparations, together with standardized reporting, be developed for future studies analyzing preparation geometry. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Holler, P J; Wess, G
2014-01-01
E-point-to-septal-separation (EPSS) and the sphericity index (SI) are echocardiographic parameters that are recommended in the ESVC-DCM guidelines. However, SI cutoff values to diagnose dilated cardiomyopathy (DCM) have never been evaluated. To establish reference ranges, calculate cutoff values, and assess the clinical value of SI and EPSS to diagnose DCM in Doberman Pinschers. One hundred seventy-nine client-owned Doberman Pinschers. Three groups were formed in this prospective longitudinal study according to established Holter and echocardiographic criteria using the Simpson method of disks (SMOD): a control group (97 dogs), DCM with echocardiographic changes (75 dogs), and a "last normal" group (n = 7), which included dogs that developed DCM within 1.5 years but were still normal at this time point. In a substudy, dogs with early DCM, based upon SMOD values above the reference range but still-normal M-mode measurements, were selected to evaluate whether EPSS or SI were abnormal using the established cutoff values. ROC curve analysis determined <1.65 for the SI (sensitivity 86.8%; specificity 87.6%) and >6.5 mm for EPSS (sensitivity 100%; specificity 99.0%) as optimal cutoff values to diagnose DCM. Both parameters were significantly different between the control group and the DCM group (P < 0.001), but were not abnormal in the "last normal" group. In the substudy, EPSS was abnormal in 13/13 dogs and SI in 2/13 dogs. E-point-to-septal-separation is a valuable additional parameter for the diagnosis of DCM, which can enhance the diagnostic capabilities of M-mode and which performs similarly well to SMOD. Copyright © 2013 by the American College of Veterinary Internal Medicine.
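ROC-derived cutoffs like the EPSS > 6.5 mm rule above are commonly chosen by maximizing Youden's J (sensitivity + specificity - 1); a sketch on toy, perfectly separated data (not the study's measurements, and the study's exact criterion is not stated in the abstract):

```python
import numpy as np

def optimal_cutoff(values, labels, higher_is_disease=True):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    a standard criterion for deriving a threshold from an ROC curve."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, bool)
    best_j, best_c = -1.0, None
    for c in np.unique(values):
        pred = values > c if higher_is_disease else values < c
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy EPSS-like values (mm) for unaffected (0) and affected (1) subjects
vals = [4.0, 5.0, 6.0, 6.5, 7.0, 8.0, 9.0, 10.0]
labs = [0, 0, 0, 0, 1, 1, 1, 1]
cutoff, j = optimal_cutoff(vals, labs)
```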
Shao, Yuan; Ramachandran, Sandhya; Arnold, Susan; Ramachandran, Gurumurthy
2017-03-01
The use of the turbulent eddy diffusion model and its variants in exposure assessment is limited by the lack of knowledge regarding the isotropic eddy diffusion coefficient, D_T. However, some studies have suggested a possible relationship between D_T and the air changes per hour (ACH) through a room. The main goal of this study was to accurately estimate D_T for a range of ACH values by minimizing the difference between the concentrations measured and those predicted by the eddy diffusion model. We constructed an experimental chamber with a spatial concentration gradient away from the contaminant source, and conducted 27 three-hour experiments using toluene and acetone under different air flow conditions (0.43-2.89 ACH). An eddy diffusion model accounting for the chamber boundary, general ventilation, and advection was developed. A mathematical expression for the slope based on the geometrical parameters of the ventilation system was also derived. There is a strong linear relationship between D_T and ACH, providing a surrogate parameter for estimating D_T in real-life settings. For the first time, a mathematical expression for the relationship between D_T and ACH has been derived that also corrects for non-ideal conditions, and the calculated value of the slope between these two parameters is very close to the experimentally determined value. The values of D_T obtained from the experiments are generally consistent with values reported in the literature. They are also independent of the averaging time of measurements, allowing for comparison of values obtained from different measurement settings. These findings make the use of turbulent eddy diffusion models for exposure assessment in workplace/indoor environments more practical.
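The reported linear D_T-ACH relationship suggests a simple surrogate estimator; a sketch with hypothetical (ACH, D_T) pairs, not the study's measurements or its derived slope:

```python
import numpy as np

# Hypothetical (ACH, D_T) pairs in the spirit of the chamber experiments;
# the values and units (m^2/min) are illustrative only.
ach = np.array([0.43, 0.85, 1.32, 1.75, 2.20, 2.89])
d_t = np.array([0.09, 0.17, 0.27, 0.35, 0.45, 0.58])

# Ordinary least-squares line D_T = slope * ACH + intercept
slope, intercept = np.polyfit(ach, d_t, 1)

def estimate_dt(ach_value: float) -> float:
    """Surrogate estimate of the eddy diffusion coefficient from the air
    changes per hour, using the fitted linear relationship."""
    return slope * ach_value + intercept
```

In practice the fitted slope (and the paper's geometry-based correction for non-ideal conditions) would come from measured concentration profiles, not invented pairs like these.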
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, specified by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling by logic tree, is used to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to variability of every logic-tree branch. The logic-tree branches analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip, and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity of each parameter to overall variability is determined by varying each fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the choice of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
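The Monte Carlo step described above amounts to drawing each fault parameter from a truncated normal distribution before recomputing the hazard. A minimal rejection-sampling sketch (the mean, standard deviation, and bounds below are hypothetical, not the study's values):

```python
import random

def truncated_normal(mean, sd, lo, hi, rng):
    """Draw one value from N(mean, sd) restricted to [lo, hi] by rejection."""
    while True:
        x = rng.gauss(mean, sd)
        if lo <= x <= hi:
            return x

# e.g. a hypothetical slip rate (mm/yr) truncated at +/- 2 standard deviations
rng = random.Random(42)
slip_rates = [truncated_normal(0.5, 0.2, 0.1, 0.9, rng) for _ in range(200)]
```

Sampling all parameters simultaneously gives the overall variability; holding all but one fixed isolates that parameter's sensitivity.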
Hubert: Software for efficient analysis of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2016-10-01
The combination of short data acquisition times and local investigation of the solid state through hyperfine parameters makes nuclear forward scattering (NFS) a unique experimental technique for investigating fast processes. However, the total number of acquired NFS time spectra may be very high, so an efficient way of evaluating the data is needed. In this paper we report the development of the Hubert software package as a response to the rapidly developing field of in-situ NFS experiments. Hubert offers several useful features for processing data files and can significantly shorten evaluation time through a simple connection between neighboring time spectra via their input and output parameter values.
Marginal oil fields, profitable oil at low reserves: How?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agiza, M.N.; Shaheen, S.E.; Barawi, S.A.
1995-12-31
Fields with recoverable reserves of about five million barrels of oil are considered in Egypt as marginal fields. Economics of Egyptian marginal oil fields depend on non-traditional approaches followed in developing and operating such fields. The actual exploration, development and operating expenses and state fiscal terms were used to evaluate the sensitivity of the economic parameters of such marginal fields. The operator net present value (NPV) and internal rate of return (IRR) beside the government take are presented for different parameters used. The purpose is to make acceptable profits out of the marginal oil fields, for the mutual benefits of both the country and the investors.
NASA Technical Reports Server (NTRS)
Motiwalla, S. K.
1973-01-01
Using the first and the second derivative of flutter velocity with respect to the parameters, the velocity hypersurface is made quadratic. This greatly simplifies the numerical procedure developed for determining the values of the design parameters such that a specified flutter velocity constraint is satisfied and the total structural mass is near a relative minimum. A search procedure is presented utilizing two gradient search methods and a gradient projection method. The procedure is applied to the design of a box beam, using finite-element representation. The results indicate that the procedure developed yields substantial design improvement satisfying the specified constraint and does converge to near a local optimum.
Parameter regionalization of a monthly water balance model for the conterminous United States
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-01-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
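The Nash–Sutcliffe efficiency used to evaluate the calibration compares squared model error with the variance of the observed runoff: 1 is a perfect fit, and 0 means the model does no better than the observed mean. A minimal implementation:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_about_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_about_mean
```

A median value of 0.76 over 1575 streamgages therefore indicates that the regionalized model explains most of the runoff variance at a typical gage.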
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-01-01
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org. PMID:26063822
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-07-06
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.
Parameter regionalization of a monthly water balance model for the conterminous United States
NASA Astrophysics Data System (ADS)
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-07-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
Missing value imputation: with application to handwriting data
NASA Astrophysics Data System (ADS)
Xu, Zhen; Srihari, Sargur N.
2015-01-01
Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In the task of studying development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most likely independent value imputation, and three methods based on Bayesian network (static Bayesian network, parameter EM, and structural EM), are compared with children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, static Bayesian network is used for our data which contain around 5% missing data to provide adequate accuracy and low computational cost.
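Of the six algorithms compared, mean imputation is the simplest to state: each missing entry is replaced by the mean of the observed values in its feature column. A minimal sketch (None marks a missing value; the data are illustrative):

```python
def mean_impute(rows):
    """Replace each None with the mean of the non-missing values in its column."""
    columns = list(zip(*rows))
    means = []
    for col in columns:
        present = [v for v in col if v is not None]
        means.append(sum(present) / len(present))
    return [[means[j] if v is None else v for j, v in enumerate(row)] for row in rows]

filled = mean_impute([[1.0, None], [3.0, 4.0], [5.0, 6.0]])
```

Unlike the Bayesian-network methods, this ignores dependencies between features, which is one reason simple imputers degrade faster as the missing ratio grows.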
Meta-Learning Approach for Automatic Parameter Tuning: A Case Study with Educational Datasets
ERIC Educational Resources Information Center
Molina, M. M.; Luna, J. M.; Romero, C.; Ventura, S.
2012-01-01
This paper proposes to the use of a meta-learning approach for automatic parameter tuning of a well-known decision tree algorithm by using past information about algorithm executions. Fourteen educational datasets were analysed using various combinations of parameter values to examine the effects of the parameter values on accuracy classification.…
NASA Technical Reports Server (NTRS)
Glushkov, A. V.; Efimov, N. N.; Makarov, I. T.; Pravdin, M. I.; Dedenko, L. G.
1985-01-01
A method for determining the maximum depth of shower development (Xm) that is independent of the extensive air shower (EAS) development model is considered. Xm values obtained from various EAS parameters are in good agreement.
Development of a Standard Set of Software Indicators for Aeronautical Systems Center.
1992-09-01
The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was ... determine the values of the 68 input parameters. Source provides no specifics. Indicator Name: SPQR (SW Productivity, Qual, Reliability). Indicator Class:
Madimenos, Felicia C; Snodgrass, J Josh; Blackwell, Aaron D; Liebert, Melissa A; Cepon, Tara J; Sugiyama, Lawrence S
2011-01-01
Minimal data on bone mineral density changes are available from populations in developing countries. Using calcaneal quantitative ultrasound (QUS) techniques, the current study contributes to remedying this gap in the literature by establishing a normative data set on the indigenous Shuar and non-Shuar Colonos of the Ecuadorian Amazon. The paucity of bone mineral density (BMD) data from populations in developing countries partially reflects the lack of diagnostic resources in these areas. Portable QUS techniques now enable researchers to collect bone health data in remote field-based settings and to contribute normative data from developing regions. The main objective of this study is to establish normative QUS data for two Ecuadorian Amazonian populations-the indigenous Shuar and non-Shuar Colonos. The effects of ethnic group, sex, age, and body size on QUS parameters are also considered. A study cohort consisting of 227 Shuar and 261 Colonos (15-91 years old) was recruited from several small rural Ecuadorian communities in the Upano River Valley. Calcaneal QUS parameters were collected on the right heel of each participant using a Sahara bone sonometer. Three ultrasound-generated parameters were employed: broadband ultrasound attenuation (BUA), speed of sound (SOS), and calculated heel BMD (hBMD). In both populations and sexes, all QUS values were progressively lower with advancing age. Shuar have significantly higher QUS values than Colonos, with the most pronounced differences found between pre-menopausal Shuar and Colono females. Multiple regression analyses show that age is a key predictor of QUS while weight alone is a less consistent determinant. Both Shuar males and females display comparatively greater QUS parameters than other reference populations. 
These normative data for three calcaneal QUS parameters will be useful for predicting fracture risk and determining diagnostic QUS criteria of osteoporosis in non-industrialized populations in South America and elsewhere.
Uchida, Takashi; Yakumaru, Masafumi; Nishioka, Keisuke; Higashi, Yoshihiro; Sano, Tomohiko; Todo, Hiroaki; Sugibayashi, Kenji
2016-01-01
We evaluated the effectiveness of a silicone membrane as an alternative to human skin using the skin permeation parameters of chemical compounds. An in vitro permeation study using 15 model compounds was conducted, and permeation parameters comprising permeability coefficient (P), diffusion parameter (DL(-2)), and partition parameter (KL) were calculated from each permeation profile. Significant correlations were obtained in log P, log DL(-2), and log KL values between the silicone membrane and human skin. DL(-2) values of model compounds, except flurbiprofen, in the silicone membrane were independent of the lipophilicity of the model compounds and were 100-fold higher than those in human skin. For antipyrine and caffeine, which are hydrophilic, KL values in the silicone membrane were 100-fold lower than those in human skin, and P values, calculated as the product of a DL(-2) and KL, were similar. For lipophilic compounds, such as n-butyl paraben and flurbiprofen, KL values for silicone were similar to or 10-fold higher than those in human skin, and P values for silicone were 100-fold higher than those in human skin. Furthermore, for amphiphilic compounds with log Ko/w values from 0.5 to 3.5, KL values in the silicone membrane were 10-fold lower than those in human skin, and P values for silicone were 10-fold higher than those in human skin. The silicone membrane was useful as a human skin alternative in an in vitro skin permeation study. However, depending on the lipophilicity of the model compounds, some parameters may be over- or underestimated.
Performance evaluation of image-intensifier-TV fluoroscopy systems
NASA Astrophysics Data System (ADS)
van der Putten, Wilhelm J.; Bouley, Shawn
1995-05-01
Through use of a computer model and an aluminum low contrast phantom developed in-house, a method has been developed which is able to grade the imaging performance of fluoroscopy systems through use of a variable, K. This parameter was derived from Rose's model of image perception and is here used as a figure of merit to grade fluoroscopy systems. From Rose's model for an ideal system, a typical value of K for the perception of low contrast details should be between 3 and 7, assuming threshold vision by human observers. Thus, various fluoroscopy systems are graded with different values of K, with a lower value of K indicating better imaging performance of the system. A series of fluoroscopy systems have been graded where the best system produces a value in the low teens, while the poorest systems produce a value in the low twenties. Correlation with conventional image quality measurements is good and the method has the potential for automated assessment of image quality.
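In Rose's model the figure of merit is a signal-to-noise index, commonly written k = C · d · √N for fractional contrast C, detail diameter d, and photon density N; details falling below the threshold band of roughly 3 to 7 are not perceived. A sketch of that quantity (the function name and unit choices are our own, not the paper's):

```python
import math

def rose_k(contrast, diameter_mm, photons_per_mm2):
    """Rose signal-to-noise index k = C * d * sqrt(N):
    fractional contrast C, detail diameter d (mm), photon density N (mm^-2)."""
    return contrast * diameter_mm * math.sqrt(photons_per_mm2)
```

On this reading, a system that yields a lower measured K for the same perceived detail is making better use of the available photons, which matches the grading scheme described above.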
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model consistent with the physical structure of the human body that also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From the comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing a vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
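The platform-to-head TR of the 13-DOF model generalizes the classical single degree-of-freedom transmissibility under base excitation; as a point of reference, that 1-DOF analogue is:

```python
import math

def base_excitation_tr(r, zeta):
    """Transmissibility of a damped 1-DOF system under support excitation.
    r: forcing frequency / natural frequency; zeta: damping ratio."""
    numerator = 1.0 + (2.0 * zeta * r) ** 2
    denominator = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(numerator / denominator)
```

This textbook formula is offered for intuition only; the paper's 13-DOF model distributes mass, stiffness, and damping over individual body segments rather than lumping the whole body into one oscillator.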
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
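The extraction of lambda can be sketched directly: apply Newton's method to F(lambda) = P(X <= n; lambda) - p, whose derivative collapses to a single term. A minimal Python sketch of the same idea (NEWTPOIS itself is written in C, and this version uses a fixed tolerance rather than the program's user-supplied epsilon):

```python
import math

def poisson_cdf(lam, n):
    """Cumulative Poisson probability P(X <= n) for mean lam."""
    term, total = math.exp(-lam), 0.0
    for k in range(n + 1):
        total += term
        term *= lam / (k + 1)
    return total

def newton_poisson(n, p, tol=1e-12, max_iter=100):
    """Solve poisson_cdf(lam, n) == p for lam by Newton's iteration.
    d/dlam P(X <= n) = -exp(-lam) * lam**n / n!  (the sum telescopes)."""
    lam = max(float(n), 1.0)  # rough starting guess
    for _ in range(max_iter):
        f = poisson_cdf(lam, n) - p
        df = -math.exp(-lam) * lam ** n / math.factorial(n)
        step = f / df
        lam -= step
        if abs(step) < tol:
            break
    return lam
```

For n = 0 the equation reduces to exp(-lambda) = p, so the iteration should return -ln(p).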
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Morin, Cory
2015-01-01
Dengue fever (DF) is caused by a virus transmitted between humans and Aedes genus mosquitoes through blood feeding. In recent decades incidence of the disease has drastically increased in the tropical Americas, culminating with the Pan American outbreak in 2010, which resulted in 1.7 million reported cases. In Puerto Rico dengue is endemic; however, there is significant inter-annual, intra-annual, and spatial variability in case loads. Variability in climate and the environment, herd immunity and virus genetics, and demographic characteristics may all contribute to differing patterns of transmission both spatially and temporally. Knowledge of climate influences on dengue incidence could facilitate development of early warning systems, allowing public health workers to implement appropriate transmission intervention strategies. In this study, we simulate dengue incidence in several municipalities in Puerto Rico using population and meteorological data derived from ground-based stations and remote sensing instruments. These data were used to drive a process-based model of vector population development and virus transmission. Model parameter values for container composition, vector characteristics, and incubation period were chosen by employing a Monte Carlo approach. Multiple simulations were performed for each municipality and the results were compared with reported dengue cases. The best-performing simulations were retained and their parameter values and meteorological input were compared between years and municipalities. Parameter values varied by municipality and year, illustrating the complexity and sensitivity of the disease system. Local characteristics including the natural and built environment impact transmission dynamics and produce varying responses to meteorological conditions.
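The Monte Carlo parameter-selection step can be sketched generically: draw candidate parameter sets uniformly from prior ranges, run the model, and retain the best-scoring set. A toy version (the model, bounds, and scoring below are placeholders, not the dengue model's):

```python
import random

def monte_carlo_calibrate(model, observed, bounds, n_draws=2000, seed=0):
    """Keep the uniformly drawn parameter set whose simulation best matches
    the observations (sum of squared errors)."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_draws):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        simulated = model(params)
        err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```

In the study, several retained simulations per municipality were then compared across years to see how the accepted parameter values shifted with local conditions.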
Determination of Watershed Lag Equation for Philippine Hydrology
NASA Astrophysics Data System (ADS)
Cipriano, F. R.; Lagmay, A. M. F. A.; Uichanco, C.; Mendoza, J.; Sabio, G.; Punay, K. N.; Oquindo, M. R.; Horritt, M.
2014-12-01
Widespread flooding is a major problem in the Philippines. The country experiences heavy amounts of rainfall throughout the year and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are some of the damages caused by flooding, and the country's government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. To produce these maps, different types of data were needed, and part of that is calculating hydrological components to come up with an accurate output. This paper presents how an important parameter, the time-to-peak of the watershed (Tp), was calculated. Time-to-peak is defined as the time at which the largest discharge of the watershed occurs. This is computed by using a lag time equation that was developed specifically for the Philippine setting. The equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S), and watershed slope (Y). This approach is based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a similar set of meteorological and hydrological parameters to the Philippines. Data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the time-to-peak. These sensors were chosen by using a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. Values of Tp from the different sensors were generated from the general lag time equation based on the Natural Resource Conservation Management handbook by the US Department of Agriculture. The calculated Tp values were plotted against the values obtained from the expression L^0.8(S + 1)^0.7/Y^0.5. 
Regression analysis was used to obtain the final equation that would be used to calculate the time-to-peak specifically for rivers in the Philippine setting. The calculated values could then be used as a parameter for modeling different flood scenarios in the country.
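The expression above has the same form as the standard NRCS (SCS) lag equation, lag = L^0.8 (S + 1)^0.7 / (1900 Y^0.5), with L the hydraulic length in feet, S = 1000/CN - 10 the maximum potential retention in inches (CN the runoff curve number), and Y the slope in percent. A sketch of that textbook form (the regression coefficients fitted for Philippine rivers in this study are not reproduced here):

```python
def nrcs_lag_hours(length_ft, curve_number, slope_pct):
    """Standard NRCS (SCS) watershed lag time in hours."""
    s = 1000.0 / curve_number - 10.0  # maximum potential retention, inches
    return length_ft ** 0.8 * (s + 1.0) ** 0.7 / (1900.0 * slope_pct ** 0.5)
```

Lag behaves as expected: longer watersheds peak later, while steeper or more impervious (higher CN) watersheds peak sooner.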
Rankl, James G.
1990-01-01
A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model include (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. 
Information on effective drainage area is useful in evaluating drainage area as an independent variable in statistical analyses of hydrologic data, such as annual peak frequency distributions and sediment yield. A comparison was made of the sum of the simulated runoff and the sum of the measured runoff for all available records of runoff-producing storms in the 10 study basins. The sums of the simulated runoff ranged from 12.0 percent less than to 23.4 percent more than the sums of the measured runoff. A measure of the standard error of estimate was computed for each data set. These values ranged from 20 to 70 percent of the mean value of the measured runoff. Rainfall-simulator infiltrometer tests were made in two small basins. The amount of water uptake measured by the test in Dugout Creek tributary basin averaged about three times greater than the amount of water uptake computed from rainfall and runoff data. Therefore, infiltrometer data were not used to determine infiltration rates for this study.
The Predicted Influence of Climate Change on Lesser Prairie-Chicken Reproductive Parameters
Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, Dawn M.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.
2013-01-01
The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001–2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter’s linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival. PMID:23874549
Zielińska-Bliźniewska, Hanna; Sułkowski, Wiesław J; Pietkiewicz, Piotr; Miłoński, Jarosław; Mazurek, Agnieszka; Olszewski, Jurek
2012-06-01
The aim of this study was to compare the parameters of vocal acoustic and vocal efficiency analyses in medical students and academic teachers with the use of the IRIS and DiagnoScope Specialist software and to evaluate their usefulness in the prevention and certification of occupational disease. The study group comprised 40 women, including students and employees of the Military Medical Faculty, Medical University of Łodź. After informed consent had been obtained from the participating women, the primary medical history was taken, videolaryngoscopic and stroboscopic examinations were performed, and diagnostic vocal acoustic analysis was carried out with the use of the IRIS and DiagnoScope Specialist software. Based on the results of the performed measurements, the statistical analysis evidenced compatibility between the two software programs, IRIS and DiagnoScope Specialist, with the only exception of the F4 formant. The mean values of vocal acoustic parameters in medical students and academic teachers, obtained by means of the IRIS software, can be used as standards for the female population, which have not yet been developed by the producer. When using the DiagnoScope Specialist software, some mean values were higher and some lower than the standards specified by the producer. The study evidenced compatibility between the two measurement software programs, IRIS and DiagnoScope Specialist, except for the F4 formant. It should be noted that the latter has an advantage over the former, since the standard values of vocal acoustic parameters have been worked out by the producer. Moreover, they departed only slightly from the values obtained in our study and may be useful in the diagnostics of occupational voice disorders.
Toledo-Martín, Eva María; García-García, María Carmen; Font, Rafael; Moreno-Rojas, José Manuel; Gómez, Pedro; Salinas-Navarro, María; Del Río-Celestino, Mercedes
2016-07-01
The characterization of internal (°Brix, pH, malic acid, total phenolic compounds, ascorbic acid and total carotenoid content) and external (color, firmness and pericarp wall thickness) pepper quality is necessary to better understand its possible applications and increase consumer awareness of its benefits. The main aim of this work was to examine the feasibility of using visible/near-infrared reflectance spectroscopy (VIS-NIRS) to predict quality parameters in different pepper types. Commercially available spectrophotometers were evaluated for this purpose: a Polychromix Phazir spectrometer for intact raw pepper, and a scanning monochromator for freeze-dried pepper. The RPD values (ratio of the standard deviation of the reference data to the standard error of prediction) obtained from the external validation exceeded 3 for chlorophyll a and total carotenoid content; ranged between 2.5 and 3 for total phenolic compounds; between 1.5 and 2.5 for °Brix, pH, color parameters a* and h*, and chlorophyll b; and were below 1.5 for fruit firmness, pericarp wall thickness, color parameters C*, b* and L*, vitamin C and malic acid content. The present work has led to the development of multi-type calibrations for pepper quality parameters in intact and freeze-dried peppers. The majority of the NIRS equations obtained were suitable for screening purposes in pepper breeding programs. Components such as pigments (xanthophyll, carotenes and chlorophyll), glucides, lipids, cellulose and water were used by modified partial least-squares regression for modeling the predicting equations. © 2015 Society of Chemical Industry.
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and the S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China show reasonable agreement with the location of an oil-producing well.
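As context for the abstract's method, the conventional spectral-ratio estimate it builds on can be sketched in a few lines: the log spectral ratio between two recording times is linear in frequency with gradient -πΔt/Q. This is the textbook baseline, not the paper's GST-based algorithm, and the synthetic spectra below are illustrative.

```python
import numpy as np

def estimate_q_spectral_ratio(spec1, spec2, freqs, dt):
    """Conventional spectral-ratio Q estimate.

    ln(A2/A1) = -pi * f * dt / Q + const, so a linear fit of the
    log spectral ratio against frequency gives slope = -pi*dt/Q.
    """
    ratio = np.log(spec2 / spec1)
    slope, _ = np.polyfit(freqs, ratio, 1)
    return -np.pi * dt / slope

# Synthetic test: attenuate a flat spectrum with a known Q of 50
freqs = np.linspace(5.0, 60.0, 56)
dt = 0.4                      # travel-time difference in seconds
q_true = 50.0
spec1 = np.ones_like(freqs)
spec2 = spec1 * np.exp(-np.pi * freqs * dt / q_true)
print(round(estimate_q_spectral_ratio(spec1, spec2, freqs, dt), 1))  # → 50.0
```

The waveform-tuning and window-function effects discussed in the abstract are exactly what perturb this linear fit in practice.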
Simulated discharge trends indicate robustness of hydrological models in a changing climate
NASA Astrophysics Data System (ADS)
Addor, Nans; Nikolova, Silviya; Seibert, Jan
2016-04-01
Assessing the robustness of hydrological models under contrasted climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models revealed that assuming transposability of parameter values in changing climatic conditions can lead to significant biases in discharge simulations. This raises the question whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2-km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess eventual trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty.
We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported by others previously, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question: to what level of accuracy each input parameter needs to be determined in order to obtain accurate organ dose results.
Woodbury, Allan D.; Rubin, Yoram
2000-01-01
A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.
Mobasheri, Nasrin; Karimi, Mehrdad; Hamedi, Javad
2018-06-05
New methods to determine the antimicrobial susceptibility of bacterial pathogens, especially the minimum inhibitory concentration (MIC) of antibiotics, have great importance in the pharmaceutical industry and in treatment procedures. In the present study, the MIC of several antibiotics was determined against some pathogenic bacteria using the macrodilution test. In order to accelerate and increase the efficiency of the culture-based method to determine antimicrobial susceptibility, the possible relationship between the changes in some physico-chemical parameters, including conductivity, electrical potential difference (EPD) and pH, and the total number of test strains was investigated during the logarithmic phase of bacterial growth in the presence of antibiotics. The correlation between changes in these physico-chemical parameters and the growth of bacteria was statistically evaluated using linear and non-linear regression models. Finally, the MIC values calculated with the newly proposed method were compared with the MIC derived from the macrodilution test. The results represent a significant association between the changes in EPD and pH values and the growth of the tested bacteria during the exponential phase of bacterial growth. It has been assumed that the proliferation of bacteria can cause the significant changes in EPD values. The MIC values in both the conventional and the new method were consistent with each other. In conclusion, a cost- and time-effective antimicrobial susceptibility test can be developed based on monitoring the changes in EPD values. The newly proposed strategy can also be used in high-throughput screening of biocompounds for their antimicrobial activity in a relatively shorter time (6-8 h) in comparison with conventional methods.
López-González, Ángel Arturo; García-Agudo, Sheila; Tomás-Salvá, Matías; Vicente-Herrero, María Teófila; Queimadelos-Carmona, Milagros; Campos-González, Irene
2017-01-01
The Finnish Diabetes Risk Score (FINDRISC) questionnaire has been used to assess the risk of type 2 diabetes and metabolic syndrome. The objective was to assess the relationship between different scales related to cardiovascular risk and the FINDRISC questionnaire. Values of different anthropometric and clinical parameters (body mass index, waist circumference, waist-to-height ratio, blood pressure), analytical parameters (lipid profile, blood glucose) and scales related to cardiovascular risk (atherogenic index, metabolic syndrome, REGICOR, SCORE, heart age and vascular age) were determined on the basis of the value of the FINDRISC questionnaire. All analyzed parameters related to cardiovascular risk worsened as the value of the FINDRISC questionnaire increased. There is a close relationship between FINDRISC questionnaire values and those obtained in the different parameters by which cardiovascular risk was measured directly or indirectly.
Optimized microsystems-enabled photovoltaics
Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.
2015-09-22
Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area.
After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
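The Nash-Sutcliffe efficiency used to score these runoff and nutrient simulations has a standard definition that is easy to state in code; this is a minimal sketch with illustrative data, not the study's series.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.

    NSE = 1 is a perfect fit; NSE <= 0 means the model predicts no
    better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]
print(nash_sutcliffe(obs, obs))        # → 1.0
print(nash_sutcliffe(obs, [2.5] * 4))  # → 0.0  (constant mean predictor)
```

Values such as the 0.87 and 0.69 reported above thus measure how much of the observed variance the model explains beyond a mean-only baseline.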
Zimmermann, Johannes; Wright, Aidan G C
2017-01-01
The interpersonal circumplex is a well-established structural model that organizes interpersonal functioning within the two-dimensional space marked by dominance and affiliation. The structural summary method (SSM) was developed to evaluate the interpersonal nature of other constructs and measures outside the interpersonal circumplex. To date, this method has been primarily descriptive, providing no way to draw inferences when comparing SSM parameters across constructs or groups. We describe a newly developed resampling-based method for deriving confidence intervals, which allows for SSM parameter comparisons. In a series of five studies, we evaluated the accuracy of the approach across a wide range of possible sample sizes and parameter values, and demonstrated its utility for posing theoretical questions on the interpersonal nature of relevant constructs (e.g., personality disorders) using real-world data. As a result, the SSM is strengthened for its intended purpose of construct evaluation and theory building. © The Author(s) 2015.
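The resampling-based confidence intervals described above rest on the percentile bootstrap. The following is a minimal sketch of that idea only; the data and the statistic are illustrative and are not SSM parameters themselves.

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, recompute the statistic, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    boots = [stat(rng.choice(data, size=len(data), replace=True))
             for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# Illustrative sample of correlations; the statistic here is the mean
sample = np.array([0.31, 0.42, 0.38, 0.29, 0.45, 0.36, 0.40, 0.33])
lo, hi = bootstrap_ci(sample, np.mean)
print(lo < sample.mean() < hi)  # → True
```

Comparing two constructs then amounts to checking whether such intervals for their respective parameters overlap, which is the inferential step the SSM previously lacked.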
NASA Astrophysics Data System (ADS)
Uddin, Iftikhar; Khan, Muhammad Altaf; Ullah, Saif; Islam, Saeed; Israr, Muhammad; Hussain, Fawad
2018-03-01
This study is dedicated to the buoyancy effect over a stretching sheet in the presence of MHD stagnation-point flow with convective boundary conditions. Thermophoresis and Brownian motion aspects are included. The incompressible fluid is electrically conducting in the presence of a varying magnetic field. Boundary layer analysis is used to develop the mathematical formulation. A zero mass flux condition is considered at the boundary. A non-linear system of ordinary differential equations is constructed by means of proper transformations. Intervals of convergence are developed via numerical data and plots. The characteristics of the involved variables on the velocity, temperature and concentration distributions are sketched and discussed. The features of the correlated parameters on Cf and Nu are examined by means of tables. It is found that the buoyancy ratio and magnetic parameters increase and reduce the velocity field, respectively. Furthermore, the opposite feature is noticed for higher values of the thermophoresis and Brownian motion parameters on the concentration distribution.
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with the visual inspection by the clinician. The developed system can meet the customized requirements of hospitals and institutions.
Mass Transport through Nanostructured Membranes: Towards a Predictive Tool
Darvishmanesh, Siavash; Van der Bruggen, Bart
2016-01-01
This study proposes a new mechanism to understand the transport of solvents through nanostructured membranes from a fundamental point of view. The findings are used to develop readily applicable mathematical models to predict solvent fluxes and solute rejections through solvent resistant membranes used for nanofiltration. The new model was developed based on a pore-flow type of transport. New parameters found to be of fundamental importance were introduced to the equation, i.e., the affinity of the solute and the solvent for the membrane expressed as the hydrogen-bonding contribution of the solubility parameter for the solute, solvent and membrane. A graphical map was constructed to predict the solute rejection based on the hydrogen-bonding contribution of the solubility parameter. The model was evaluated with performance data from the literature. Both the solvent flux and the solute rejection calculated with the new approach were similar to values reported in the literature. PMID:27918434
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
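A minimal sketch of the ASA selection-and-average step as the abstract describes it: keep the spatially varying posterior values where the ensemble spread is small, then average them into one global parameter. The toy fields, the threshold rule, and the quantile are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_spatial_average(param_field, spread_field, quantile=0.5):
    """Keep posterior parameter values at grid points whose ensemble
    spread falls below a quantile threshold (the 'good' values), then
    average them into a single global posterior parameter."""
    threshold = np.quantile(spread_field, quantile)
    good = param_field[spread_field <= threshold]
    return float(good.mean())

# Toy 2x2 fields: local parameter estimates and their ensemble spreads
param = np.array([[1.0, 2.0], [3.0, 4.0]])
spread = np.array([[0.1, 0.9], [0.2, 0.8]])
print(adaptive_spatial_average(param, spread))  # → 2.0
```

Here only the two low-spread points (values 1.0 and 3.0) survive the selection, so their average becomes the global estimate.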
Burkhardt, M; Holstein, J H; Moersdorf, P; Kristen, A; Lefering, R; Pohlemann, T; Pizanis, A
2014-08-01
The Abbreviated Injury Scale (AIS) requires the estimation of the lost blood volume for some severity assignments. This study aimed to develop a rule of thumb for facilitating AIS coding by using objective clinical parameters as surrogate markers of blood loss. Using the example of pelvic ring fractures, a retrospective analysis of TraumaRegister DGU(®) data from 2002 to 2011 was performed. As potential surrogate markers of blood loss, we recorded the hemoglobin (Hb) level, systolic blood pressure (SBP), base excess (BE), Quick's value, units of packed red blood cells (PRBCs) transfused before intensive care unit (ICU) admission, and mortality within 24 h. We identified 11,574 patients with pelvic ring fractures (Tile/OTA classification: 39 % type A, 40 % type B, 21 % type C). Type C fractures were 73.1 % AISpelvis 4 and 26.9 % AISpelvis 5. Type B fractures were 47 % AISpelvis 3, 47 % AISpelvis 4, and 6 % AISpelvis 5. In type C fractures, cut-off values of <7 g/dL Hb, <90 mmHg SBP, <-9 mmol/L BE, <35 % Quick's value, >15 units PRBCs, and death within 24 h had a positive predictive value of 47 % and a sensitivity of 62 % for AISpelvis 5. In type B fractures, these cut-off values had poor sensitivity (48 %) and positive predictive value (11 %) for AISpelvis 5. We failed to develop a rule of thumb for facilitating a proper future AIS coding using the example of pelvic ring fractures. The estimation of blood loss for severity assignment still remains a noteworthy weakness in the AIS coding of traumatic injuries.
Short-term heart rate variability (HRV) in healthy dogs.
Bogucki, Sz; Noszczyk-Nowak, A
2015-01-01
Heart rate variability (HRV) is a well-established mortality risk factor in both healthy dogs and those with heart failure. While standards for short-term HRV analysis have been developed in humans, only reference values for HRV parameters determined from 24-hour ECG have been proposed in dogs. The aim of this study was to develop reference values for short-term HRV parameters in a group of 50 healthy dogs of various breeds (age 4.86 ± 2.74 years, body weight 12.2 ± 3.88 kg). The ECG was recorded continuously for at least 180 min in a dark and quiet room. All electrocardiograms were inspected automatically and manually to eliminate atrial or ventricular premature complexes. Signals were transformed into a spectrum using the fast Fourier transform. The HRV parameters were measured at fixed times from 60-min ECG segments. The following time-domain parameters (ms) were analyzed: mean NN, SDNN, SDANN, SDNN index, rMSSD and pNN50. Moreover, frequency-domain parameters (Hz) were determined, including the very low frequency (VLF), low frequency (LF) and high frequency (HF) components, total power (TP) and the LF/HF ratio. The results (means ± SD) were as follows: mean NN = 677.68 ± 126.89; SDNN = 208.86 ± 77.1; SDANN = 70.75 ± 30.9; SDNN index = 190.75 ± 76.12; rMSSD = 259 ± 120.17; pNN50 = 71.84 ± 13.96; VLF = 984.96 ± 327.7; LF = 1501.24 ± 736.32; HF = 5845.45 ± 2914.20; TP = 11065.31 ± 3866.87; LF/HF = 0.28 ± 0.11.
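The time-domain parameters reported above have standard definitions over a series of NN intervals; a minimal sketch follows, with an illustrative five-beat series (not data from the study).

```python
import numpy as np

def time_domain_hrv(nn_ms):
    """Basic time-domain HRV parameters from NN intervals in ms:
    mean NN, SDNN (sample SD of intervals), rMSSD (root mean square of
    successive differences), and pNN50 (% of successive differences
    exceeding 50 ms)."""
    nn = np.asarray(nn_ms, dtype=float)
    diffs = np.diff(nn)
    return {
        "mean_nn": nn.mean(),
        "sdnn": nn.std(ddof=1),
        "rmssd": np.sqrt(np.mean(diffs ** 2)),
        "pnn50": 100.0 * np.mean(np.abs(diffs) > 50.0),
    }

nn = [700.0, 760.0, 640.0, 700.0, 730.0]  # illustrative NN intervals
res = time_domain_hrv(nn)
print(round(res["mean_nn"], 1))  # → 706.0
print(round(res["rmssd"], 1))    # → 75.0
```

SDANN and the SDNN index additionally require segmenting the recording (conventionally into 5-min blocks) before taking means and standard deviations.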
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full-Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. Figure captions: contours of the expected information gain, where the optimal observation location corresponds to the maximum value; posterior marginal probability densities of the unknown parameters (true values denoted by vertical lines), showing that the unknown parameters are estimated better with the designed location than with seven randomly chosen locations.
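The information measure underlying the design criterion, the relative entropy (Kullback-Leibler divergence) between posterior and prior, can be sketched for discrete distributions as follows. This is an illustrative sketch of the measure only, not the paper's surrogate-accelerated implementation, and the distributions are made up.

```python
import numpy as np

def relative_entropy(posterior, prior):
    """Discrete relative entropy D(posterior || prior): the information
    gained about the parameters by observing the data."""
    p = np.asarray(posterior, dtype=float)
    q = np.asarray(prior, dtype=float)
    mask = p > 0  # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Uniform prior over 4 candidate source locations; a measurement that
# concentrates belief on one location yields a positive information gain
prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.7, 0.1, 0.1, 0.1]
print(round(relative_entropy(posterior, prior), 3))  # → 0.446
```

The design loop then evaluates this expected gain for each candidate sampling location and picks the maximizer.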
NASA Astrophysics Data System (ADS)
Durga Prasada Rao, V.; Harsha, N.; Raghu Ram, N. S.; Navya Geethika, V.
2018-02-01
In this work, turning was performed to optimize the surface finish or roughness (Ra) of stainless steel 304 with uncoated and coated carbide tools under dry conditions. The carbide tools were coated with a titanium aluminium nitride (TiAlN) nano-coating using the physical vapour deposition (PVD) method. The machining parameters, viz., cutting speed, depth of cut and feed rate, which have a major impact on Ra, are considered during turning. The experiments were designed as per a Taguchi orthogonal array, and machining was done accordingly. Second-order regression equations were then developed on the basis of the experimental results for Ra in terms of the machining parameters used. Regarding the effect of the machining parameters, an upward trend is observed in Ra with respect to feed rate, and as cutting speed increases the Ra value increases slightly due to chatter and vibrations. The adequacy of the response variable (Ra) was tested by conducting additional experiments. The predicted Ra values are found to be a close match to their corresponding experimental values for the uncoated and coated tools, with the average % errors within acceptable limits. The surface roughness equations of the uncoated and coated tools were then set as the objectives of the optimization problem and solved by using the Differential Evolution (DE) algorithm. The tool lives of the uncoated and coated tools were also predicted by using Taylor’s tool life equation.
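A minimal sketch of a Differential Evolution minimizer applied to a second-order Ra surface of the kind described above. The rand/1/bin scheme, the control parameters, and the model coefficients are illustrative assumptions, not the paper's fitted equation.

```python
import numpy as np

def differential_evolution(func, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    fit = np.array([func(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutate three distinct other members, then binomial crossover
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            trial = np.where(rng.random(len(bounds)) < CR, mutant, X[i])
            f_trial = func(trial)
            if f_trial < fit[i]:  # greedy selection
                X[i], fit[i] = trial, f_trial
    return X[fit.argmin()], fit.min()

# Hypothetical second-order Ra surface (illustrative coefficients only);
# its minimum over the bounds is 2.575 at feed 0.05, depth 0.5, speed 150.
def ra_model(x):
    v, f, d = x  # cutting speed, feed rate, depth of cut
    return 2.0 + 8.0 * f + 10.0 * f ** 2 + 0.3 * d + 0.00004 * (v - 150.0) ** 2

best_x, best_ra = differential_evolution(
    ra_model, [(60, 180), (0.05, 0.25), (0.5, 1.5)])
print(best_ra < 2.6)  # → True
```

In the study's setting, the same loop would instead minimize the fitted regression equations for the uncoated and coated tools over the machining ranges.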
Sahoo, C; Gupta, A K
2012-05-15
Photocatalytic degradation of methyl blue (MYB) was studied using Ag(+)-doped TiO(2) under UV irradiation in a batch reactor. The catalyst dose, initial concentration of dye and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective in the ranges of catalyst dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm) and pH of the reaction mixture (5-9). Using the three-factor, three-level Box-Behnken design of experiments technique, 15 sets of experiments were designed considering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were: dose of Ag(+)-doped TiO(2), 0.99 g/L; initial concentration of MYB, 57.68 ppm; and pH of the reaction mixture, 7.76. Under the optimal conditions, the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R(2) values >0.99 showed goodness of fit of the experimental results with the predicted values. Copyright © 2012 Elsevier B.V. All rights reserved.
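Fitting a quadratic response-surface model of the kind described above reduces to ordinary least squares on an expanded design matrix. The following is a minimal two-factor sketch with illustrative coefficients and a face-centered design, not the study's three-factor Box-Behnken fit.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Expand two coded factors into the full second-order model terms:
    1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Coded design points (all combinations of -1, 0, +1 for two factors)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]])

# Illustrative "true" coefficients for a noiseless response, e.g. % removal
true_b = np.array([90.0, 2.0, -1.5, -3.0, -2.0, 0.5])
y = quadratic_design_matrix(X) @ true_b

# Least-squares fit recovers the coefficients exactly for noiseless data
b_hat, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.allclose(b_hat, true_b))  # → True
```

With real (noisy) responses, the same fit yields the RSM polynomial whose stationary point gives optimum settings analogous to the 0.99 g/L, 57.68 ppm and pH 7.76 reported above.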
Suitability of spring wheat varieties for the production of best quality pizza.
Tehseen, Saima; Anjum, Faqir Muhammad; Pasha, Imran; Khan, Muhammad Issa; Saeed, Farhan
2014-08-01
The selection of appropriate wheat cultivars is an imperative issue in product development and realization. The nutritional profiling of plants and their cultivars, along with their suitability for the development of specific products, is of considerable interest to multi-national food chains. In this project, Pizza Hut Pakistan provided funds for the selection of a suitable newly developed Pakistani spring variety for pizza production. In this regard, recent varieties were selected and evaluated for nutritional and functional properties relevant to pizza production. Additionally, emphasis was placed on assessing all varieties for their physico-chemical attributes, rheological parameters and mineral content. Furthermore, pizzas prepared from the respective flour samples were evaluated for sensory attributes. Results showed that Anmool, Abadgar, Imdad, SKD-1, Shafaq and Moomal have higher values for protein, gluten content, pelshenke value and SDS sedimentation, and were relatively better in the studied parameters as compared to the other varieties; they were therefore considered best for good-quality pizza production. TD-1 obtained the significantly highest score for pizza flavor, and the lowest score was observed for the wheat variety Kiran. Moreover, it is concluded from the current study that all wheat varieties except TJ-83 and Kiran exhibited better results for flavor.
King, Randy L; Liu, Yunbo; Maruvada, Subha; Herman, Bruce A; Wear, Keith A; Harris, Gerald R
2011-07-01
A tissue-mimicking material (TMM) for the acoustic and thermal characterization of high-intensity focused ultrasound (HIFU) devices has been developed. The material is a high-temperature hydrogel matrix (gellan gum) combined with different sizes of aluminum oxide particles and other chemicals. The ultrasonic properties (attenuation coefficient, speed of sound, acoustical impedance, and the thermal conductivity and diffusivity) were characterized as a function of temperature from 20 to 70°C. The backscatter coefficient and nonlinearity parameter B/A were measured at room temperature. Importantly, the attenuation coefficient has essentially linear frequency dependence, as is the case for most mammalian tissues at 37°C. The mean value is 0.64f(0.95) dB·cm(-1) at 20°C, based on measurements from 2 to 8 MHz. Most of the other relevant physical parameters are also close to the reported values, although backscatter signals are low compared with typical human soft tissues. Repeatable and consistent temperature elevations of 40°C were produced under 20-s HIFU exposures in the TMM. This TMM is appropriate for developing standardized dosimetry techniques, validating numerical models, and determining the safety and efficacy of HIFU devices.
Prediction of blood pressure and blood flow in stenosed renal arteries using CFD
NASA Astrophysics Data System (ADS)
Jhunjhunwala, Pooja; Padole, P. M.; Thombre, S. B.; Sane, Atul
2018-04-01
In the present work an attempt is made to develop an inexpensive, in vitro diagnostic tool for renal artery stenosis (RAS). To analyse the effects of increasing stenosis severity on hypertension and blood flow, haemodynamic parameters are studied by performing numerical simulations. A total of 16 stenosed models with degrees of stenosis severity varying from 0 to 97.11% are assessed numerically. Blood is modelled as a shear-thinning, non-Newtonian fluid using the Carreau model. Computational Fluid Dynamics (CFD) analysis is carried out to compute the values of flow parameters such as the maximum velocity and maximum pressure attained by blood due to stenosis under pulsatile flow. These values are further used to compute the increase in blood pressure and the decrease in the blood flow available to the kidney. The computed available blood flow and secondary hypertension for varying extents of stenosis are mapped by a curve-fitting technique using MATLAB, and a mathematical model is developed. Based on these mathematical models, a quantification tool is developed for tentative prediction of the probable blood flow available to the kidney and the severity of stenosis when the secondary hypertension is known.
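For reference, the Carreau shear-thinning law used for blood in studies of this kind can be written as mu(g) = mu_inf + (mu_0 - mu_inf) * [1 + (lambda*g)^2]^((n-1)/2), where g is the shear rate. The sketch below uses commonly cited blood parameter values as assumptions; the paper's exact constants are not given in the abstract.

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau shear-thinning viscosity (Pa*s) at a given shear rate (1/s).
    Default parameter values are commonly cited ones for blood, assumed here
    for illustration: mu0 = zero-shear viscosity, mu_inf = infinite-shear
    viscosity, lam = relaxation time, n = power-law index."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
```

The model recovers the zero-shear viscosity mu0 at rest and decays monotonically toward mu_inf at high shear rates, which is the shear-thinning behavior the simulations rely on.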
NASA Astrophysics Data System (ADS)
Yang, Kun; Chen, Yingying; Qin, Jun; Lu, Hui
2017-04-01
Multi-sphere interactions over the Tibetan Plateau directly impact its surrounding climate and environment at a variety of spatiotemporal scales. Remote sensing and modeling are expected to provide the hydro-meteorological data needed for these process studies, but in situ observations are required to support their calibration and validation. For this purpose, we have established two networks on the Tibetan Plateau to densely measure two state variables (soil moisture and temperature) at four soil depths (0-5, 10, 20, and 40 cm). The experimental area is characterized by low biomass, a high soil moisture dynamic range, and a typical freeze-thaw cycle. As auxiliary parameters of these networks, soil texture and soil organic carbon content are measured at each station to support further studies. In order to guarantee continuous and high-quality data, tremendous efforts have been made to protect the data loggers from soil water intrusion, to calibrate the soil moisture sensors, and to upscale the point measurements. One soil moisture network is located in a semi-humid area in the central Tibetan Plateau (Naqu); it consists of 56 stations with elevations varying over 4470-4950 m and covers three spatial scales (1.0, 0.3, 0.1 degree). The other is located in a semi-arid area in the southern Tibetan Plateau (Pali); it consists of 25 stations and covers an area of 0.25 degree. The spatiotemporal characteristics of the former network were analyzed, and a new spatial upscaling method was developed to obtain the regional mean soil moisture truth from the point measurements. Our networks meet the requirements for evaluating a variety of soil moisture products, developing new algorithms, and analyzing soil moisture scaling. Three applications of the network data are presented in this paper. 1. Evaluation of current remote sensing and LSM products.
The in situ data have been used to evaluate AMSR-E, AMSR2, SMOS and SMAP products and four modeled outputs from the Global Land Data Assimilation System (GLDAS). 2. Development of new products. We developed a dual-pass land data assimilation system. The essential idea of the system is to calibrate the land data assimilation system before normal data assimilation, with the calibration based on satellite data rather than in situ data. In this way, we can alleviate the impact of uncertainties in determining the error covariance of both the observation operator and the model operator, which is otherwise difficult to determine. The performance of the data assimilation system is demonstrated through comparison against the Tibetan Plateau soil moisture measuring networks, and the results are encouraging. 3. Estimation of soil parameter values in a land surface model. We explored the possibility of estimating soil parameter values by assimilating AMSR-E brightness temperature (TB) data. In the assimilation system, the TB is simulated by the coupled system of a land surface model (LSM) and a radiative transfer model (RTM), and the simulation errors depend strongly on parameters in both the LSM and the RTM. Thus, sensitive soil parameters may be inversely estimated by minimizing the TB errors. The effectiveness of the estimated parameter values is evaluated against intensive measurements of soil parameters and soil moisture in three grasslands of the Tibetan Plateau and the Mongolian Plateau. The results indicate that this satellite-data-based approach can improve the data quality of soil porosity, a key parameter for soil moisture modeling, and that LSM simulations with the estimated parameter values reasonably reproduce the measured soil moisture. This demonstrates that it is feasible to calibrate LSMs for soil moisture simulations at grid scale by assimilating microwave satellite data, although more efforts are expected to improve the robustness of the model calibration.
Crop physiology calibration in the CLM
Bilionis, I.; Drewniak, B. A.; Constantinescu, E. M.
2015-04-15
Farming is using more of the land surface, as population increases and agriculture is increasingly applied for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity (GPP) and net ecosystem exchange (NEE) from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper, we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the new calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses a Monte Carlo approach to generate parameter sets, which are fed to a variant of the road load equation. The modeled road load is then compared to the measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
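The accept/reject scheme described above is essentially a Metropolis random walk over the parameter space. The sketch below applies it to synthetic road-load data with an assumed simplified road-load equation and invented "true" parameters; it illustrates the chain mechanics, not the report's implementation.

```python
import math
import random

RHO, G, CRR = 1.2, 9.81, 0.006  # assumed air density, gravity, rolling coeff

def road_load(mass, cda, v, a):
    # Simplified road-load equation (flat grade): inertia + rolling + aero drag
    return mass * a + mass * G * CRR + 0.5 * RHO * cda * v * v

rng = random.Random(0)
# Synthetic "logged" data from assumed true parameters (m=15000 kg, CdA=6 m^2)
data = []
for _ in range(200):
    v = rng.uniform(5.0, 30.0)       # speed, m/s
    a = rng.uniform(-0.5, 0.5)       # acceleration, m/s^2
    f = road_load(15000.0, 6.0, v, a) + rng.gauss(0.0, 200.0)  # measured load, N
    data.append((v, a, f))

def log_prob(mass, cda, sigma=200.0):
    # Gaussian likelihood of the measured loads given a parameter set,
    # with flat priors enforced as hard bounds
    if not (5000.0 < mass < 40000.0 and 2.0 < cda < 12.0):
        return -math.inf
    return -sum((f - road_load(mass, cda, v, a)) ** 2 for v, a, f in data) / (2 * sigma ** 2)

# Metropolis random walk: the chain history approximates the posterior
mass, cda = 20000.0, 8.0
lp = log_prob(mass, cda)
chain = []
for _ in range(4000):
    m2, c2 = mass + rng.gauss(0.0, 100.0), cda + rng.gauss(0.0, 0.1)
    lp2 = log_prob(m2, c2)
    if math.log(rng.random()) < lp2 - lp:    # accept with probability ratio
        mass, cda, lp = m2, c2, lp2
    chain.append((mass, cda))

post = chain[1000:]                          # discard burn-in
mean_mass = sum(m for m, _ in post) / len(post)
mean_cda = sum(c for _, c in post) / len(post)
```

The spread of the retained chain, not just its mean, is what conveys the quality of the estimate, which is the advantage over a single point value noted in the abstract.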
Finite element analysis of history-dependent damage in time-dependent fracture mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnaswamy, P.; Brust, F.W.; Ghadiali, N.D.
1993-11-01
The demands for structural systems to perform reliably under both severe and changing operating conditions continue to increase. Under these conditions time-dependent straining and history-dependent damage become extremely important. This work focuses on studying creep crack growth using finite element (FE) analysis. Two important issues, namely, (1) the use of history-dependent constitutive laws, and (2) the use of various fracture parameters in predicting creep crack growth, have both been addressed in this work. The constitutive model used here is the one developed by Murakami and Ohno and is based on the concept of a creep hardening surface. An implicit FE algorithm for this model was first developed and verified for simple geometries and loading configurations. The numerical methodology developed here has been used to model stationary and growing cracks in CT specimens. Various fracture parameters, such as C1, C*, T*, and J, were used to compare the numerical predictions with experimental results available in the literature. A comparison of the values of these parameters as a function of time has been made for both stationary and growing cracks. The merit of using each of these parameters has also been discussed.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull-statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
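A Weibull-type saccharification curve of the kind described can be written as Y(t) = Ymax * (1 - exp(-(t/λ)^n)); by construction, λ is the time at which the yield reaches 1 - 1/e, about 63.2% of Ymax, which is why it can summarize overall system performance. A minimal sketch (functional form assumed from the description; parameter values illustrative):

```python
import math

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve: glucose yield at time t.
    lam (the characteristic time) is the time at which the yield reaches
    1 - 1/e (~63.2%) of y_max; n controls the curve shape."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))
```

Comparing two saccharification systems then reduces to comparing their fitted λ values: the system with the smaller λ reaches 63.2% of its maximum yield sooner.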
NASA Astrophysics Data System (ADS)
Ram, G. Chinna; Narendrudu, T.; Suresh, S.; Kumar, A. Suneel; Rao, M. V. Sambasiva; Kumar, V. Ravi; Rao, D. Krishna
2017-04-01
P2O5-PbO-Bi2O3-R2O3 (R = Al, Ga, In) glasses doped with Dy2O3 were prepared by the melt quenching technique. The prepared glasses were characterized by XRD, optical absorption, FTIR and luminescence studies. Judd-Ofelt parameters have been evaluated for the three glass systems from the optical absorption spectra, and in turn radiative parameters for the excited luminescent levels of the Dy3+ ion have been calculated. Emission cross section and branching ratio values are observed to be high for the 6H13/2 level of the Dy3+ ion. The yellow-to-blue intensity ratios and CIE chromaticity coordinates were calculated. Decay curves exhibit non-exponential behavior. The quantum efficiency of the prepared glasses was determined using the radiative and calculated lifetimes. IR studies, J-O parameters and Y/B ratio values indicate greater asymmetry around Dy3+ ions in the Ga2O3-mixed glass. The chromaticity coordinates lie near the ideal white light region. These coordinates and the CCT values reveal that all the prepared glasses emit quality white light, and that the glasses mixed with Ga2O3 in particular are suitable for the development of white LEDs.
Validation of DYSTOOL for unsteady aerodynamic modeling of 2D airfoils
NASA Astrophysics Data System (ADS)
González, A.; Gomez-Iradi, S.; Munduate, X.
2014-06-01
From the point of view of wind turbine modeling, an important group of tools is based on blade element momentum (BEM) theory using 2D aerodynamic calculations on the blade elements. Due to the importance of this sectional computation of the blades, the National Renewable Wind Energy Center of Spain (CENER) developed DYSTOOL, an aerodynamic code for 2D airfoil modeling based on the Beddoes-Leishman model. The main focus here is on the model parameters, whose values depend on the airfoil or the operating conditions. In this work, the values of the parameters are adjusted using available experimental or CFD data. The present document is mainly concerned with the validation of the results of DYSTOOL for 2D airfoils. The results of the computations have been compared with unsteady experimental data for the S809 and NACA0015 profiles. Some of the cases have also been modeled using the CFD code WMB (Wind Multi Block), within the framework of a collaboration with ACCIONA Windpower. The validation has been performed using pitch oscillations with different reduced frequencies, Reynolds numbers, amplitudes and mean angles of attack. The results have shown good agreement when the parameter values are adjusted with this methodology. DYSTOOL has proved to be a promising tool for 2D airfoil unsteady aerodynamic modeling.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
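The hybrid strategy described, a global genetic search followed by a local gradient-type refinement, can be sketched on a toy two-parameter misfit function as below. The crude coordinate search stands in for the truncated-Newton stage, and the misfit is illustrative rather than an actual groundwater model.

```python
import random

# Toy misfit between modeled and "observed" heads for two transmissivity-like
# parameters (illustrative only; a real model would run a groundwater solver).
TRUE = (2.0, 0.5)
def misfit(p):
    return (p[0] - TRUE[0]) ** 2 + 10.0 * (p[1] - TRUE[1]) ** 2

def genetic_search(func, bounds, pop_size=30, gens=40, seed=7):
    """Simple GA: keep the fitter half as elites, breed children by blend
    crossover plus Gaussian mutation, clip to bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=func)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0.0, 0.1) for x, y in zip(a, b)]
            children.append([min(max(v, lo), hi)
                             for v, (lo, hi) in zip(child, bounds)])
        pop = elite + children
    return min(pop, key=func)

def local_refine(func, p, step=0.05, iters=200):
    # Crude coordinate search with shrinking step, standing in for the
    # truncated-Newton local stage
    p = list(p)
    for _ in range(iters):
        improved = False
        for j in range(len(p)):
            for d in (step, -step):
                q = list(p)
                q[j] += d
                if func(q) < func(p):
                    p, improved = q, True
        if not improved:
            step *= 0.5
    return p

start = genetic_search(misfit, [(0.0, 5.0), (0.0, 2.0)])
best = local_refine(misfit, start)
```

The GA supplies a good initial point regardless of where the search starts, which is exactly the role initial values and prior information play in the abstract's experiments.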
An advanced technique for the prediction of decelerator system dynamics.
NASA Technical Reports Server (NTRS)
Talay, T. A.; Morris, W. D.; Whitlock, C. H.
1973-01-01
An advanced two-body, six-degree-of-freedom computer model employing an indeterminate structures approach has been developed for the parachute deployment process. The program determines both vehicular and decelerator responses to aerodynamic and physical property inputs. A better insight into the dynamic processes that occur during parachute deployment has been gained. The model is of value in sensitivity studies to isolate the important parameters that affect the vehicular response.
Modeling of venturi scrubber efficiency
NASA Astrophysics Data System (ADS)
Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.
The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated with available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters, and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2 L m(-3). Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.
Asquith, William H.; Roussel, Meghan C.
2007-01-01
Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles.
The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific, initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds. The statistical analyses of watershed-specific, initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. 
The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve number.
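The two-parameter initial-abstraction, constant-loss model described in the report can be sketched as a simple per-step water balance: rainfall first fills the initial abstraction, and once that storage is satisfied a constant loss rate is subtracted, with the remainder becoming runoff-producing excess. The parameter values below are illustrative, not the report's tabulated estimates:

```python
def excess_rainfall(rain, ia=1.0, cl=0.2):
    """Initial-abstraction (ia, depth units), constant-loss (cl, depth per
    time step) model. Rainfall is stored until the initial abstraction is
    satisfied; afterwards, losses occur at the constant rate and any
    remaining rainfall in a step becomes runoff-producing excess."""
    stored = 0.0
    excess = []
    for r in rain:
        if stored < ia:                      # fill the initial abstraction first
            absorbed = min(r, ia - stored)
            stored += absorbed
            r -= absorbed
        excess.append(max(0.0, r - cl))      # constant loss on the remainder
    return excess
```

Convolving the resulting excess series with a unit hydrograph (e.g. the gamma-distribution unit hydrograph used in the report) would then produce the modeled runoff hydrograph.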
Asymmetry of short-term control of spatio-temporal gait parameters during treadmill walking
NASA Astrophysics Data System (ADS)
Kozlowska, Klaudia; Latka, Miroslaw; West, Bruce J.
2017-03-01
Optimization of energy cost determines the average values of spatio-temporal gait parameters such as step duration, step length or step speed. However, during walking, humans need to adapt these parameters at every step to respond to exogenous and/or endogenous perturbations. While some neurological mechanisms that trigger these responses are known, our understanding of the fundamental principles governing step-by-step adaptation remains elusive. We determined the gait parameters of 20 healthy subjects with right-foot preference during treadmill walking at speeds of 1.1, 1.4 and 1.7 m/s. We found that when the value of a gait parameter was conspicuously greater (smaller) than the mean value, it was either followed immediately by a smaller (greater) value for the contralateral leg (interleg control), or the deviation from the mean value decreased during the next movement of the ipsilateral leg (intraleg control). The selection of step duration and the selection of step length during such transient control events were performed in unique ways. We quantified the symmetry of short-term control of the gait parameters and observed significant dominance of the right leg in the short-term control of all three parameters at the higher speeds (1.4 and 1.7 m/s).
Chang, Y K; Lim, H C
1989-08-20
A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture, in simulation and experimental studies, to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed as well as good adaptability and reoptimization capability, and it was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization the ethanol concentration was used as an index for the baking quality of the yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity; only the unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.
NASA Astrophysics Data System (ADS)
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-01-01
Agro-land surface models (agro-LSMs) have been developed by integrating specific crop processes into large-scale generic land surface models, allowing the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum to be calculated. When developing agro-LSMs, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere and on the underlying water and carbon pools. Part of the uncertainty of agro-LSMs is related to their usually large number of parameters. In this study, we quantify the parameter-value uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or ORCHIDEE (other ecosystem variables including biomass) through distinct Monte-Carlo runs. The uncertainty in the parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale.
A Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting climate-mediated differences, between Australia and Brazil, in the sensitivity of modeled sugar cane yield to the model parameters. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
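The ranked-correlation sensitivity measure used above can be sketched with a plain Spearman rank correlation between Monte-Carlo parameter samples and model output (a full PRCC would additionally partial out the other parameters). The toy "biomass" response and parameter ranges below are invented for illustration:

```python
import random

def ranks(xs):
    # Rank of each element (0-based); assumes no ties, as with uniform floats
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Since both rank vectors are permutations of 0..n-1, their variances are
    equal and the correlation reduces to cov(rx, ry) / var(rx)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n - 1) / 2.0
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var

rng = random.Random(3)
# Monte-Carlo sample of two hypothetical parameters; the toy "biomass" output
# depends strongly on the carboxylation rate and weakly on the extinction coeff.
vcmax = [rng.uniform(40.0, 90.0) for _ in range(500)]
k_ext = [rng.uniform(0.4, 0.7) for _ in range(500)]
biomass = [2.0 * v + 5.0 * k + rng.gauss(0.0, 5.0)
           for v, k in zip(vcmax, k_ext)]

s_vcmax = rank_corr(vcmax, biomass)
s_kext = rank_corr(k_ext, biomass)
```

Ranking the parameters by the magnitude of these coefficients reproduces the kind of sensitivity ordering reported in the study, with the carboxylation-rate-like parameter dominating.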