Sample records for selected parameter values

  1. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
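    The Yitzhaky–Peli selection step described above can be sketched as follows. This is a minimal illustration, assuming binary feature-detected images indexed by parameter tuple, with the estimated ground truth taken as a majority vote over the parameter sweep; function and variable names are hypothetical, not from the paper:

```python
import numpy as np

def select_parameters(feature_maps, threshold=0.5):
    """Pick the parameter set whose feature map best matches estimated
    ground truth, via the ROC point closest to the ideal corner
    (FPR = 0, TPR = 1).

    feature_maps: dict mapping a parameter tuple to a binary feature image.
    """
    stack = np.stack(list(feature_maps.values()))
    # Estimated ground truth: pixels detected by a majority of settings.
    truth = stack.mean(axis=0) >= threshold
    best, best_dist = None, np.inf
    for params, fmap in feature_maps.items():
        tp = np.logical_and(fmap, truth).sum()
        fp = np.logical_and(fmap, ~truth).sum()
        tpr = tp / max(truth.sum(), 1)
        fpr = fp / max((~truth).sum(), 1)
        dist = np.hypot(fpr, 1.0 - tpr)  # distance to ideal ROC corner
        if dist < best_dist:
            best, best_dist = params, dist
    return best
```

    Other ROC optimality criteria (e.g. Youden's index) could be substituted for the corner distance without changing the structure of the sweep.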

  2. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
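    The progressive-sampling idea can be illustrated with a simple successive-halving sketch: candidates are evaluated on growing subsamples, and only the better half survives each stage, so full-data training effort is reserved for promising configurations. The actual method couples this with Bayesian optimization; names and the scoring interface here are hypothetical:

```python
def progressive_selection(configs, error_fn, sizes=(100, 400, 1600), keep=0.5):
    """Successive-halving sketch of progressive sampling.

    configs:  list of candidate algorithm/hyper-parameter settings.
    error_fn: error_fn(config, n) -> validation error when trained on
              n samples (supplied by the caller).
    """
    survivors = list(configs)
    for n in sizes:
        # Score every surviving candidate on the current sample size,
        # then discard the worse fraction before enlarging the sample.
        scored = sorted(survivors, key=lambda c: error_fn(c, n))
        survivors = scored[:max(1, int(len(scored) * keep))]
    return survivors[0]
```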

  3. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a controls system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
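    The hold-time logic of the suppressor is essentially a debounce: the drive signal asserts only when the input stays asserted longer than the selected suppression time. A software sketch under that reading (the patent describes analog circuitry; this discrete-time version is illustrative only):

```python
def suppress_transients(samples, dt, hold_time):
    """Debounce sketch of the suppressor.

    samples:   iterable of booleans (threshold-crossing detector output).
    dt:        sampling interval in seconds.
    hold_time: selected suppression time in seconds.
    Returns a list of drive-signal states, True only once the input has
    been continuously asserted for longer than hold_time.
    """
    held = 0.0
    out = []
    for s in samples:
        held = held + dt if s else 0.0   # reset on any drop-out
        out.append(held > hold_time)
    return out
```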

  4. A framework for streamflow prediction in the world's most severely data-limited regions: Test of applicability and performance in a poorly-gauged region of China

    NASA Astrophysics Data System (ADS)

    Alipour, M. H.; Kibler, Kelly M.

    2018-02-01

    A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
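    The parameter residual used above is defined directly in the abstract: the centroid of the soft (a priori) parameter value minus the calibrated value. A one-line sketch, with a hypothetical soft range supplied as an interval:

```python
def parameter_residual(soft_low, soft_high, calibrated):
    """Residual as defined in the abstract: centroid of the soft
    (a priori) parameter range minus the calibrated parameter value.
    A small residual indicates the calibrated value resonates with
    the independent soft estimate."""
    return (soft_low + soft_high) / 2.0 - calibrated
```

    For example, a calibrated Cmax of 480 mm against a (hypothetical) soft range of 400-600 mm gives a residual of 20 mm, comparable to the multi-objective residuals reported above.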

  5. Identification of atypical flight patterns

    NASA Technical Reports Server (NTRS)

    Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)

    2005-01-01

    Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.

  6. Pilot-Configurable Information on a Display Unit

    NASA Technical Reports Server (NTRS)

    Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)

    2017-01-01

    A small, thin display unit that can be installed in the flight deck for displaying only the flight crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, an onboard computer processes the request, including periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those current tactical parameter values to the display unit for display.

  7. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap considering the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  8. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with unidentifiability of the representative values of parameters, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection depending on the Bayes factor. PMID:25089832
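    The approximate Bayesian computation framework underlying the above can be sketched in its simplest (rejection) form; the paper replaces this inefficient stage with population annealing, but the product is the same kind of posterior parameter ensemble. All names and the scalar summary-statistic comparison are illustrative:

```python
def abc_rejection(simulate, prior_sample, observed, eps, n_accept):
    """Basic ABC rejection sketch: keep parameter draws whose simulated
    data fall within a tolerance eps of the observation.

    simulate:     simulate(theta) -> summary statistic of simulated data.
    prior_sample: callable returning one draw from the prior.
    Returns a 'posterior parameter ensemble' of size n_accept, which can
    then be used to run ensemble simulations as proposed in the paper.
    """
    ensemble = []
    while len(ensemble) < n_accept:
        theta = prior_sample()
        if abs(simulate(theta) - observed) <= eps:
            ensemble.append(theta)
    return ensemble
```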

  9. Real Time Correction of Aircraft Flight Configuration

    NASA Technical Reports Server (NTRS)

    Schipper, John F. (Inventor)

    2009-01-01

    Method and system for monitoring and analyzing, in real time, variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter, having a value FP(t=t_p) at a time t=t_p, is likely to be able to recover to a reference flight parameter value FP(t';ref), lying in a band of reference flight parameter values FP(t';ref;CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
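    The core recoverability test reduces to a band membership check at each time point. A minimal sketch, with the two boundary traces modeled as hypothetical callables of time:

```python
def can_recover(fp_value, t, lower_band, upper_band):
    """Sketch of the recovery-band test: the flight parameter value at
    time t is deemed recoverable if it lies between the first and
    second recovery band boundary traces (callables of time).
    Band shapes here are placeholders, not the patented construction."""
    return lower_band(t) <= fp_value <= upper_band(t)
```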

  10. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. Sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical application. These conclusions help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.

  11. Ballistic projectile trajectory determining system

    DOEpatents

    Karr, Thomas J.

    1997-01-01

    A computer controlled system determines the three-dimensional trajectory of a ballistic projectile. To initialize the system, predictions of state parameters for a ballistic projectile are received at an estimator. The estimator uses the predictions of the state parameters to estimate first trajectory characteristics of the ballistic projectile. A single stationary monocular sensor then observes the actual first trajectory characteristics of the ballistic projectile. A comparator generates an error value related to the predicted state parameters by comparing the estimated first trajectory characteristics of the ballistic projectile with the observed first trajectory characteristics of the ballistic projectile. If the error value is equal to or greater than a selected limit, the predictions of the state parameters are adjusted. New estimates for the trajectory characteristics of the ballistic projectile are made and are then compared with actual observed trajectory characteristics. This process is repeated until the error value is less than the selected limit. Once the error value is less than the selected limit, a calculator calculates trajectory characteristics such as the origin and destination of the ballistic projectile.

  12. Ballistic projectile trajectory determining system

    DOEpatents

    Karr, T.J.

    1997-05-20

    A computer controlled system determines the three-dimensional trajectory of a ballistic projectile. To initialize the system, predictions of state parameters for a ballistic projectile are received at an estimator. The estimator uses the predictions of the state parameters to estimate first trajectory characteristics of the ballistic projectile. A single stationary monocular sensor then observes the actual first trajectory characteristics of the ballistic projectile. A comparator generates an error value related to the predicted state parameters by comparing the estimated first trajectory characteristics of the ballistic projectile with the observed first trajectory characteristics of the ballistic projectile. If the error value is equal to or greater than a selected limit, the predictions of the state parameters are adjusted. New estimates for the trajectory characteristics of the ballistic projectile are made and are then compared with actual observed trajectory characteristics. This process is repeated until the error value is less than the selected limit. Once the error value is less than the selected limit, a calculator calculates trajectory characteristics such as the origin and destination of the ballistic projectile. 8 figs.
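    The predict-compare-adjust loop described in both records above (11 and 12) can be sketched generically. The callables and the scalar state stand in for the patent's estimator, sensor, and comparator; names are illustrative:

```python
def fit_trajectory(observe, predict, adjust, state, tol, max_iter=100):
    """Sketch of the estimator/comparator loop: predictions of the
    state parameters are adjusted until the estimated trajectory
    characteristic matches the observation within a selected limit."""
    for _ in range(max_iter):
        error = predict(state) - observe()  # comparator output
        if abs(error) < tol:                # below the selected limit
            break
        state = adjust(state, error)        # refine the prediction
    return state
```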

  13. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  14. Structure-activity relationships of pyrethroid insecticides. Part 2. The use of molecular dynamics for conformation searching and average parameter calculation

    NASA Astrophysics Data System (ADS)

    Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.

    1992-04-01

    Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.

  15. Asymmetry of short-term control of spatio-temporal gait parameters during treadmill walking

    NASA Astrophysics Data System (ADS)

    Kozlowska, Klaudia; Latka, Miroslaw; West, Bruce J.

    2017-03-01

    Optimization of energy cost determines average values of spatio-temporal gait parameters such as step duration, step length or step speed. However, during walking, humans need to adapt these parameters at every step to respond to exogenous and/or endogenic perturbations. While some neurological mechanisms that trigger these responses are known, our understanding of the fundamental principles governing step-by-step adaptation remains elusive. We determined the gait parameters of 20 healthy subjects with right-foot preference during treadmill walking at speeds of 1.1, 1.4 and 1.7 m/s. We found that when the value of the gait parameter was conspicuously greater (smaller) than the mean value, it was either followed immediately by a smaller (greater) value of the contralateral leg (interleg control), or the deviation from the mean value decreased during the next movement of ipsilateral leg (intraleg control). The selection of step duration and the selection of step length during such transient control events were performed in unique ways. We quantified the symmetry of short-term control of gait parameters and observed the significant dominance of the right leg in short-term control of all three parameters at higher speeds (1.4 and 1.7 m/s).

  16. Nondestructive prediction of pork freshness parameters using multispectral scattering images

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu

    2012-05-01

    Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using a multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by a multispectral imaging system developed by the authors; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
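    The four-parameter Lorentzian fit above can be sketched as follows. The functional form shown is one common variant of the modified Lorentzian used in scattering-profile work, assumed here for illustration; the abstract does not give the exact expression. Any least-squares optimizer could minimize the `sse` objective:

```python
import numpy as np

def lorentzian(x, a, b, c, d):
    """Four-parameter Lorentzian profile (a: asymptotic value,
    b: peak, c: width, d: slope). Exact form is an assumption."""
    return a + b / (1.0 + (x / c) ** d)

def sse(params, x, y):
    """Sum-of-squared-errors objective for fitting a scattering curve
    (x: radial distance from the incident point, y: intensity)."""
    a, b, c, d = params
    return float(np.sum((lorentzian(x, a, b, c, d) - y) ** 2))
```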

  17. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that captures variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  18. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
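    The conventional SA baseline that RBSA extends (RBSA runs many such trajectories in a recursive branching structure) can be sketched directly from the description above. This is a generic textbook SA, not the NASA implementation; the cooling schedule and neighbor interface are assumptions:

```python
import math, random

def simulated_anneal(objective, start, neighbor, t0=1.0, cooling=0.95,
                     steps=1500):
    """Conventional simulated annealing: accept a worse configuration
    with probability exp(-delta / T), then lower the temperature."""
    current, best = start, start
    temp = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = objective(cand) - objective(current)
        # Always accept improvements; accept regressions with the
        # Metropolis probability determined by the temperature.
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            current = cand
            if objective(current) < objective(best):
                best = current
        temp *= cooling  # geometric annealing schedule (assumed)
    return best
```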

  19. Automated respiratory cycles selection is highly specific and improves respiratory mechanics analysis.

    PubMed

    Rigo, Vincent; Graas, Estelle; Rigo, Jacques

    2012-07-01

    Selected optimal respiratory cycles should allow calculation of respiratory mechanic parameters focusing on patient-ventilator interaction. New computer software automatically selecting optimal breaths and respiratory mechanics derived from those cycles are evaluated. Retrospective study. University level III neonatal intensive care unit. Ten mins synchronized intermittent mandatory ventilation and assist/control ventilation recordings from ten newborns. The ventilator provided respiratory mechanic data (ventilator respiratory cycles) every 10 secs. Pressure, flow, and volume waves and pressure-volume, pressure-flow, and volume-flow loops were reconstructed from continuous pressure-volume recordings. Visual assessment determined assisted leak-free optimal respiratory cycles (selected respiratory cycles). New software graded the quality of cycles (automated respiratory cycles). Respiratory mechanic values were derived from both sets of optimal cycles. We evaluated quality selection and compared mean values and their variability according to ventilatory mode and respiratory mechanic provenance. To assess discriminating power, all 45 "t" values obtained from interpatient comparisons were compared for each respiratory mechanic parameter. A total of 11,724 breaths are evaluated. Automated respiratory cycle/selected respiratory cycle selections agreement is high: 88% of maximal κ with linear weighting. Specificity and positive predictive values are 0.98 and 0.96, respectively. Averaged values are similar between automated respiratory cycle and ventilator respiratory cycle. C20/C alone is markedly decreased in automated respiratory cycle (1.27 ± 0.37 vs. 1.81 ± 0.67). Tidal volume apparent similarity disappears in assist/control: automated respiratory cycle tidal volume (4.8 ± 1.0 mL/kg) is significantly lower than for ventilator respiratory cycle (5.6 ± 1.8 mL/kg). Coefficients of variation decrease for all automated respiratory cycle parameters in all infants. 
"t" values from ventilator respiratory cycle data are two to three times higher than ventilator respiratory cycles. Automated selection is highly specific. Automated respiratory cycle reflects most the interaction of both ventilator and patient. Improving discriminating power of ventilator monitoring will likely help in assessing disease status and following trends. Averaged parameters derived from automated respiratory cycles are more precise and could be displayed by ventilators to improve real-time fine tuning of ventilator settings.

  20. Optimized microsystems-enabled photovoltaics

    DOEpatents

    Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.

    2015-09-22

    Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.

  1. SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Southern Medical University, Guangzhou; Yan, H

    Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low-dose cone-beam CT (CBCT) that controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular, and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selections. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under optimally selected parameters; the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both task type and dose level, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).
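As a rough illustration of the CNR metric used in the evaluation above (a sketch, not the authors' code; the pixel values are synthetic stand-ins for Catphan ROIs):

```python
import numpy as np

def contrast_to_noise_ratio(roi, background):
    """CNR of a region of interest against background: |mean difference| / background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# synthetic patches standing in for a high-contrast ROI and uniform background
rng = np.random.default_rng(0)
roi = rng.normal(120.0, 5.0, size=(32, 32))
background = rng.normal(100.0, 5.0, size=(32, 32))
cnr = contrast_to_noise_ratio(roi, background)
```

With a mean difference of about 20 and a noise level of about 5, the CNR comes out near 4; stronger regularization lowers noise (raising CNR) at the cost of resolution, which is the trade-off the study quantifies.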

  2. Computer program for analysis of hemodynamic response to head-up tilt test

    NASA Astrophysics Data System (ADS)

    Świątek, Eliza; Cybulski, Gerard; Koźluk, Edward; Piątkowska, Agnieszka; Niewiadomski, Wiktor

    2014-11-01

    The aim of this work was to create a computer program, written in the MATLAB environment, which enables the visualization and analysis of hemodynamic parameters recorded during a passive tilt test using the CNS Task Force Monitor System. The application was created to help in the assessment of the relationship between the values and dynamics of changes of selected parameters and the risk of orthostatic syncope. The signal analysis included: R-R intervals (RRI), heart rate (HR), systolic blood pressure (sBP), diastolic blood pressure (dBP), mean blood pressure (mBP), stroke volume (SV), stroke index (SI), cardiac output (CO), cardiac index (CI), total peripheral resistance (TPR), total peripheral resistance index (TPRI), left ventricular ejection time (LVET) and thoracic fluid content (TFC). The program enables the user to visualize waveforms for a selected parameter and to perform smoothing with selected moving-average parameters. It allows one to construct a graph of means for any range, and a Poincaré plot for a selected time range. The program automatically determines the average value of the parameter before tilt, its minimum and maximum values immediately after the change of position, and the times of their occurrence. Automatically detected points can be corrected manually. For the RR interval, the program determines the acceleration index (AI) and the brake index (BI). Calculated values can be saved to an XLS file with a name specified by the user. The application has a user-friendly graphical interface and can run on a computer that has no MATLAB software.
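The smoothing step the program offers can be sketched as a centered moving average (shown here in Python rather than MATLAB; the window length and the sample RRI values are illustrative):

```python
import numpy as np

def moving_average(signal, window=5):
    """Centered moving-average smoothing of a beat-to-beat series (e.g. RRI or sBP)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# toy beat-to-beat R-R interval series in milliseconds
rri = np.array([800.0, 810.0, 790.0, 805.0, 795.0, 800.0, 802.0])
smoothed = moving_average(rri)
```

`mode="same"` keeps the output the same length as the input, so the smoothed trace can be overlaid directly on the raw waveform; the first and last few samples are attenuated by the implicit zero padding.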

  3. The physiology of spacecraft and space suit atmosphere selection

    NASA Astrophysics Data System (ADS)

    Waligora, J. M.; Horrigan, D. J.; Nicogossian, A.

    The majority of the environmental factors which comprise the spacecraft and space suit environments can be controlled at "Earth normal" values, at optimum values, or at other values decided upon by spacecraft designers. Factors which are considered in arriving at control values and control ranges of these parameters include physiological, engineering, operational cost, and safety considerations. Several of the physiological considerations, including hypoxia and hyperoxia, hypercapnia, temperature regulation, and decompression sickness, are identified, and their impact on spacecraft and space suit atmosphere selection is considered. Past experience in controlling these parameters in U.S. and Soviet spacecraft and space suits and the associated physiological responses are reviewed. Current areas of physiological investigation relating to environmental factors in spacecraft are discussed, particularly decompression sickness, which can occur as a result of the change in pressure from Earth to spacecraft or spacecraft to space suit. Physiological considerations for long-term lunar or Martian missions will have different impacts on atmosphere selection and may result in the selection of atmospheres different from those currently in use.

  4. Optimizing the availability of a buffered industrial process

    DOEpatents

    Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.

    2004-08-24

    A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population size is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.
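The genetic-algorithm loop the patent describes can be sketched generically; everything below is illustrative — the quadratic toy fitness merely stands in for the availability discrete event simulation (ADES), and the two decision variables stand in for design/operation values such as buffer sizes:

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=50, mut_rate=0.3):
    """Minimal GA: truncation selection, uniform crossover, bounded Gaussian mutation.
    `fitness` plays the role of the product-based availability returned by ADES."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # keep the fitter half of the population as parents
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # uniform crossover: each gene taken from either parent
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            if random.random() < mut_rate:
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# toy stand-in for ADES: availability peaks when both "buffer" parameters are near 5
random.seed(1)
best = genetic_search(lambda p: -((p[0] - 5.0) ** 2 + (p[1] - 5.0) ** 2),
                      [(0.0, 10.0), (0.0, 10.0)])
```

In the patented process the expensive step is the fitness evaluation (a full availability simulation per member), which is why iterating a GA over successive populations, rather than exhaustively sweeping the design space, is attractive.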

  5. Method for Predicting and Optimizing System Parameters for Electrospinning System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A. (Inventor)

    2011-01-01

    An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.

  6. Content dependent selection of image enhancement parameters for mobile displays

    NASA Astrophysics Data System (ADS)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method covering sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.

  7. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    NASA Astrophysics Data System (ADS)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, allowing the possible errors in the calculation of these measures to be determined in numerical terms. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.

  8. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
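A Gauss-Newton iteration is one standard way to carry out this kind of nonlinear least-squares minimization of measured-minus-simulated differences. The sketch below fits a toy exponential model rather than a ground-water model; the model, data, and starting point are all illustrative:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=30):
    """Gauss-Newton iteration: repeatedly linearize the residual and solve
    the resulting linear least-squares problem for the parameter update."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)                                   # misfit vector
        J = jacobian(p)                                   # sensitivity matrix
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)     # solve J * step = -r
        p = p + step
    return p

# toy problem: fit y = a * exp(b * x) to synthetic "observations"
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
p_hat = gauss_newton(residual, jacobian, [1.5, -1.0])
```

The Jacobian columns here play the role of the parameter sensitivities computed in the study: a nearly linearly dependent pair of columns (correlated parameters) or a near-zero column (an insensitive parameter) makes the linear solve ill-posed, which is exactly why the abstract reports that all parameters could not be estimated independently.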

  9. Visual Image Sensor Organ Replacement: Implementation

    NASA Technical Reports Server (NTRS)

    Maluf, A. David (Inventor)

    2011-01-01

    Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.

  10. Genetic parameters and prediction of breeding values in switchgrass bred for bioenergy

    USDA-ARS?s Scientific Manuscript database

    Estimating genetic parameters is an essential step in breeding by recurrent selection to maximize genetic gains over time. This study evaluated the effects of selection on genetic variation across two successive cycles (C1 and C2) of a ‘Summer’x‘Kanlow’ switchgrass (Panicum virgatum L.) population. ...

  11. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  12. The alfa and beta of tumours: a review of parameters of the linear-quadratic model, derived from clinical radiotherapy studies.

    PubMed

    van Leeuwen, C M; Oei, A L; Crezee, J; Bel, A; Franken, N A P; Stalpers, L J A; Kok, H P

    2018-05-16

    Prediction of radiobiological response is a major challenge in radiotherapy. Of several radiobiological models, the linear-quadratic (LQ) model has been best validated by experimental and clinical data. Clinically, the LQ model is mainly used to estimate equivalent radiotherapy schedules (e.g. to calculate the equivalent dose in 2 Gy fractions, EQD2), but increasingly also to predict tumour control probability (TCP) and normal tissue complication probability (NTCP) using logistic models. The selection of accurate LQ parameters α, β and α/β is pivotal for a reliable estimate of radiation response. The aim of this review is to provide an overview of published values for the LQ parameters of human tumours as a guideline for radiation oncologists and radiation researchers to select appropriate radiobiological parameter values for LQ modelling in clinical radiotherapy. We performed a systematic literature search and found sixty-four clinical studies reporting α, β and α/β for tumours. Tumour site, histology, stage, number of patients, type of LQ model, radiation type, TCP model, clinical endpoint and radiobiological parameter estimates were extracted. Next, we stratified by tumour site and by tumour histology. Study heterogeneity was expressed by the I² statistic, i.e. the percentage of variance in reported values not explained by chance. A large heterogeneity in LQ parameters was found within and between studies (I² > 75%). For the same tumour site, differences in histology partially explain differences in the LQ parameters: epithelial tumours have higher α/β values than adenocarcinomas. For tumour sites with different histologies, such as oesophageal cancer, the α/β estimates correlate well with histology. However, many other factors contribute to the study heterogeneity of LQ parameters, e.g. tumour stage, type of LQ model, TCP model and clinical endpoint (i.e. survival, tumour control or biochemical control). The value of LQ parameters for tumours as published in clinical radiotherapy studies depends on many clinical and methodological factors. Therefore, for clinical use of the LQ model, LQ parameters for tumours should be selected carefully, based on tumour site, histology and the applied LQ model. To account for uncertainties in LQ parameter estimates, exploring a range of values is recommended.

  13. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) is applied together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimal error value. This study shows that the warpage value is improved by using RSM.
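A second-order response-surface model of the kind RSM fits can be sketched with ordinary least squares; the two scaled factors and the coefficients below are illustrative only (they do not come from the study):

```python
import numpy as np

def fit_response_surface(X, y):
    """Second-order (RSM-style) model for two factors, fit by least squares:
    y ≈ b0 + b1*x1 + b2*x2 + b3*x1² + b4*x2² + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# toy 3x3 factorial design; the factors could stand for scaled melt temperature
# and packing pressure, and y for the simulated warpage value
X = np.array([[a, b] for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
y = 0.5 + 0.2 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 0] ** 2
coef = fit_response_surface(X, y)
```

Once fitted, the quadratic surface is cheap to minimise over the factor ranges, which is how an optimiser such as Design Expert proposes the parameter combination with the lowest predicted warpage.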

  14. The selection function of the LAMOST Spectroscopic Survey of the Galactic Anti-centre

    NASA Astrophysics Data System (ADS)

    Chen, B.-Q.; Liu, X.-W.; Yuan, H.-B.; Xiang, M.-S.; Huang, Y.; Wang, C.; Zhang, H.-W.; Tian, Z.-J.

    2018-05-01

    We present a detailed analysis of the selection function of the LAMOST Spectroscopic Survey of the Galactic Anti-centre (LSS-GAC). LSS-GAC was designed to obtain low-resolution optical spectra for a sample of more than 3 million stars in the Galactic anti-centre. The second release of value-added catalogues of the LSS-GAC (LSS-GAC DR2) contains stellar parameters, including radial velocity, atmospheric parameters, elemental abundances, and absolute magnitudes deduced from 1.8 million spectra of 1.4 million unique stars targeted by the LSS-GAC between 2011 and 2014. For many studies using this data base, such as those investigating the chemodynamical structure of the Milky Way, a detailed understanding of the selection function of the survey is indispensable. In this paper, we describe how the selection function of the LSS-GAC can be evaluated to sufficient detail and provide selection function corrections for all spectroscopic measurements with reliable parameters released in LSS-GAC DR2. The results, to be released as new entries in the LSS-GAC value-added catalogues, can be used to correct the selection effects of the catalogue for scientific studies of various purposes.

  15. The DREO Elint Browser Utility (DEBU) reference manual

    NASA Astrophysics Data System (ADS)

    Ford, Barbara; Jones, David

    1992-04-01

    An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed in a user-friendly environment on a personal computer. DEBU's basic function is to allow users to examine the contents of user-selected subfiles of user-selected emitters of user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for performing ambiguity analysis and mode level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.

  16. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
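The spread-based selection step can be illustrated schematically (a sketch of the idea only, not the published algorithm; the values and the spread cutoff are synthetic assumptions):

```python
import numpy as np

def adaptive_spatial_average(estimates, spread, factor=1.0):
    """ASA-style step: keep spatially varying posterior parameter estimates whose
    ensemble spread is below `factor` x the mean spread ("good" values), then
    average the kept values into one global uniform posterior parameter."""
    good = spread < factor * spread.mean()
    return estimates[good].mean()

# three grid points with tight ensembles and one noisy outlier point
estimates = np.array([1.00, 1.10, 0.95, 4.80])
spread = np.array([0.10, 0.12, 0.09, 1.50])
global_param = adaptive_spatial_average(estimates, spread)
```

The noisy fourth point is excluded, so the global estimate stays near 1.02 instead of being dragged toward 4.8; this is the signal-to-noise improvement over a plain spatial average.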

  17. Algorithm-enabled exploration of image-quality potential of cone-beam CT in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan

    2015-06-01

    A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted on a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based approach previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real data of phantoms and a patient collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.

  18. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  19. VizieR Online Data Catalog: PCA-based inversion of stellar parameters (Gebran+, 2016)

    NASA Astrophysics Data System (ADS)

    Gebran, M.; Farah, W.; Paletou, F.; Monier, R.; Watson, V.

    2016-03-01

    Inverted effective temperatures, surface gravities, projected rotational velocities, metallicities, and radial velocities for the selected A stars. The "closest" values are those found in VizieR catalogues closest to our inverted parameters, while "median" values are the medians of the catalogue values. Outliers are marked as "1" in the "outliers" column (see Sect. 6) (1 data file).

  20. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters

    PubMed Central

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-01

    This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis. PMID:26805819

  2. Automatic detection of malaria parasite in blood images using two parameters.

    PubMed

    Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong

    2015-01-01

    Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to cure it properly. The malaria diagnosis method using a microscope requires much labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance across changes of two parameters, the parameter values were determined that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and a threshold value used in binarization. The parameter values with the best classification performance were determined by selecting the malaria threshold value (72) that yielded the lowest error rate, given a cell threshold value of 128, for detecting plasmodium-infected red blood cells.
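The two-threshold idea can be sketched as follows. Only the threshold values 128 and 72 come from the abstract; the assumption that stained parasite pixels are darker than the cell background, and the toy pixel patch, are illustrative simplifications:

```python
import numpy as np

def infected_area_ratio(gray, cell_thresh=128, parasite_thresh=72):
    """Two-parameter sketch: pixels below `cell_thresh` are treated as cell area,
    and of those, pixels below `parasite_thresh` count as stained parasite
    (assumes darker = stained). Returns the infected-area ratio."""
    cell = gray < cell_thresh
    parasite = gray < parasite_thresh
    return parasite.sum() / max(cell.sum(), 1)

# toy 2x2 grayscale patch: three cell pixels, two of them parasite-dark
patch = np.array([[50, 100], [200, 60]])
ratio = infected_area_ratio(patch)
```

A cell would then be classified as infected when this ratio exceeds some decision value, which is the kind of error rate swept in the study to pick the optimal threshold pair.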

  3. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate kernel parameter values by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and can be applied to any type of data. Experiments on target detection in hyperspectral images verify the effectiveness of the proposed method.
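
A minimal sketch of the idea, using SciPy's differential evolution to tune an RBF kernel width against a class-separability objective. The synthetic data, the objective, and the parameter bounds below are illustrative assumptions; the paper's actual objective is built from the discrimination ability of KFKT.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X_target = rng.normal(0.0, 1.0, (30, 2))    # surrogate target class
X_clutter = rng.normal(3.0, 1.0, (30, 2))   # surrogate background class

def rbf(A, B, gamma):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def objective(params):
    """Negative separability: reward within-class similarity,
    penalize between-class similarity (a stand-in for the KFKT criterion)."""
    gamma = params[0]
    within = (rbf(X_target, X_target, gamma).mean()
              + rbf(X_clutter, X_clutter, gamma).mean())
    between = rbf(X_target, X_clutter, gamma).mean()
    return -(within - 2.0 * between)

result = differential_evolution(objective, bounds=[(1e-3, 10.0)], seed=1)
best_gamma = result.x[0]
```

Compared with grid-search cross-validation, the DEA explores the continuous parameter range directly and only needs the objective to be evaluable, not differentiable.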

  4. Optimisation of process parameters on thin shell part using response surface methodology (RSM) and genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    This study simulates the optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The process parameters considered are melt temperature, mould temperature, packing pressure, and cooling time, which are varied in order to analyse the warpage of the part. The selected part is made of polypropylene (PP). The combinations of process parameters are analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). Both RSM and a Genetic Algorithm (GA) are applied in Design Expert software to minimise the warpage. The outcome of this study shows that the warpage value is improved by using RSM and GA.

  5. Physico-chemical characterisation of material fractions in household waste: Overview of data in literature.

    PubMed

    Götze, Ramona; Boldrin, Alessio; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2016-03-01

    State-of-the-art environmental assessment of waste management systems relies on data for the physico-chemical composition of the individual material fractions comprising the waste in question. To derive the necessary inventory data for different scopes and systems, literature data from different sources and backgrounds are consulted and combined. This study provides an overview of physico-chemical waste characterisation data for individual waste material fractions available in the literature and thereby aims to support the selection of data fitting a specific scope and the choice of uncertainty ranges related to data selected from the literature. Overall, 97 publications were reviewed with respect to the characterisation method employed, the regional origin of the waste, the number of investigated parameters and material fractions, and other qualitative aspects. Descriptive statistical analysis of the reported physico-chemical waste composition data was performed to derive value ranges and data distributions for element concentrations (e.g. Cd content) and physical parameters (e.g. heating value). Based on 11,886 individual data entries, median values and percentiles for 47 parameters in 11 individual waste fractions are presented. Exceptional values and publications are identified and discussed. Detailed datasets are attached to this study, allowing further analysis and new applications of the data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.

    PubMed

    Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing

    2016-06-01

    The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) from clinical parameters using machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than the other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
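
The AUC figures quoted above can be computed without any machine learning library via the Mann-Whitney rank-sum identity. A small self-contained sketch on toy labels and scores (not the study's data):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney rank-sum identity (continuous scores, no ties)."""
    y_true = np.asarray(y_true)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(scores) - n_pos
    # sum of positive-class ranks, minus the minimum possible rank sum
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

perfect = auc_score([0, 0, 1, 1], [0.1, 0.2, 0.3, 0.4])  # perfectly ranked
mixed = auc_score([0, 1, 0, 1], [0.1, 0.2, 0.3, 0.4])    # partially mixed
```

This identity reads AUC as the probability that a randomly chosen positive case outscores a randomly chosen negative case.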

  7. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation for model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as a function of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN) and 2) the boundary-layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.

  8. Perseveration in Tool Use: A Window for Understanding the Dynamics of the Action-Selection Process

    ERIC Educational Resources Information Center

    Smitsman, Ad W.; Cox, Ralf F. A.

    2008-01-01

    Two experiments investigated how 3-year-old children select a tool to perform a manual task, with a focus on their perseverative parameter choices for the various relationships involved in handling a tool: the actor-to-tool relation and the tool-to-target relation (topology). The first study concerned the parameter value for the tool-to-target…

  9. A parameter for the assessment of the segmentation of TEM tomography reconstructed volumes based on mutual information.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-12-01

    A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms, the selection criterion being the accuracy of the segmentation. For this selection, a parameter for comparing the accuracies of the different segmentations has been defined: the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it is shown that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to those obtained from another validated method for selecting the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
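
A histogram-based estimate of such a mutual information parameter might look like the following sketch. The toy images and bin count are assumptions; in the paper the comparison is between acquired TEM images and Radon projections of the segmented volume.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized arrays."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)         # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of img_b
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
related = img + 0.05 * rng.random((64, 64))     # close to img
unrelated = rng.random((64, 64))                # independent of img
mi_related = mutual_information(img, related)
mi_unrelated = mutual_information(img, unrelated)
```

Higher mutual information between the measured projections and the re-projected segmentation indicates a more faithful segmentation, which is the basis of the selection criterion.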

  10. The analysis of the distribution of meteorological parameters over China in astronomical site selection

    NASA Astrophysics Data System (ADS)

    Zhang, Cai-yun; Weng, Ning-quan

    2014-02-01

    The distributions of parameters such as sunshine hours, precipitation, and visibility were obtained by analyzing the meteorological data of 906 stations in China during 1981-2012, and the monthly and annual variations of the parameters at some typical stations were discussed. The results show that: (1) the distribution of clear days is similar to that of sunshine hours, with values decreasing from north to south and from west to east, while the distributions of cloud cover, precipitation and vapor pressure are the opposite; (2) the northwest areas of China are characterized by low precipitation and vapor pressure, small cloud cover, and good visibility, which are the general conditions sought in astronomical site selection; (3) the parameters show obvious monthly variation, with large precipitation, long sunshine hours and strong radiation in the middle months of the year and the opposite at its beginning and end; (4) at the selected stations, the vapor pressure decreases year by year, while the optical depth remains similar or invariant. All the above results are provided for astronomical site selection.

  11. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
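
The one-parameter-at-a-time perturbation scheme used in the study can be sketched generically. The toy suitability model, parameter names, and 10% perturbation below are illustrative assumptions, not CLIMEX itself:

```python
def model_output(params):
    """Stand-in species-suitability model (hypothetical, not CLIMEX)."""
    growth, stress = params["growth"], params["stress"]
    return max(0.0, growth * (1.0 - stress))    # toy suitability-index score

best_fit = {"growth": 0.8, "stress": 0.3}

def one_at_a_time_sensitivity(params, perturb=0.1):
    """Perturb each parameter up and down while the others stay at best fit;
    report the largest change in model output per parameter."""
    baseline = model_output(params)
    effects = {}
    for name in params:
        deltas = []
        for sign in (+1, -1):
            trial = dict(params)
            trial[name] = params[name] * (1 + sign * perturb)
            deltas.append(abs(model_output(trial) - baseline))
        effects[name] = max(deltas)
    return effects

effects = one_at_a_time_sensitivity(best_fit)
most_sensitive = max(effects, key=effects.get)
```

Parameters with the largest `effects` value are the "sensitive" ones in the abstract's terminology; the interpretation still depends on which suitability categories and regions the change affects.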

  12. Evaluation of biochemical and haematological parameters and prevalence of selected pathogens in feral cats from urban and rural habitats in South Korea.

    PubMed

    Hwang, Jusun; Gottdenker, Nicole; Min, Mi-Sook; Lee, Hang; Chun, Myung-Sun

    2016-06-01

    In this study, we evaluated the potential association between the habitat types of feral cats and the prevalence of selected infectious pathogens and health status based on a set of blood parameters. We live-trapped 72 feral cats from two different habitat types: an urban area (n = 48) and a rural agricultural area (n = 24). We compared blood values and the prevalence of feline immunodeficiency virus (FIV), feline leukaemia virus (FeLV) and haemotropic Mycoplasma infection in feral cats from the two contrasting habitats. Significant differences were observed in several blood values (haematocrit, red blood cells, blood urea nitrogen, creatinine) depending on the habitat type and/or sex of the cat. Two individuals from the urban area were seropositive for FIV (3.0%), and eight (12.1%) were positive for FeLV infection (five from an urban habitat and three from a rural habitat). Haemoplasma infection was more common. Based on molecular analysis, 38 cats (54.3%) were positive for haemoplasma, with a significantly higher infection rate in cats from rural habitats (70.8%) compared with urban cats (47.8%). Our study recorded haematological and serum biochemical values, and prevalence of selected pathogens in feral cat populations from two different habitat types. A subset of important laboratory parameters from rural cats showed values under or above the corresponding reference intervals for healthy domestic cats, suggesting potential differences in the health status of feral cats depending on the habitat type. Our findings provide information about the association between 1) blood values (haematological and serum biochemistry parameters) and 2) prevalence of selected pathogen infections and different habitat types; this may be important for veterinarians who work with feral and/or stray cats and for overall cat welfare management. © ISFM and AAFP 2015.

  13. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

    In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the dictionary. In addition, based on local validity, a selective kernel-based value function is proposed that selects the best samples from the sample dictionary for the value function approximator. The parameters of the selective kernel-based value function are iteratively updated using the temporal difference (TD) learning algorithm combined with gradient descent. The complexity of the online sparsification procedure in OSKTD is O(n). Two typical experiments (Maze and Mountain Car) are used to compare OSKTD with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using a kernel-based value function), and the results demonstrate the effectiveness of the proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both the traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a competitive ultimate optimum compared with the up-to-date algorithms.

  14. Assessment of air pollution tolerance levels of selected plants around cement industry, Coimbatore, India.

    PubMed

    Radhapriya, P; NavaneethaGopalakrishnan, A; Malini, P; Ramachandran, A

    2012-05-01

    As the second largest manufacturing industry in India, the cement industry is one of the major contributors of suspended particulate matter (SPM). Since plants are sensitive to air pollution, the objective of the present study was to identify suitable plant species for the greenbelt around a cement industry. Suitable species were selected based on the Air Pollution Tolerance Index (APTI), calculated from the ascorbic acid (AA), pH, relative water content (RWC) and total chlorophyll (TChl) of plants occurring in the locality. Plants were sampled within a 6 km radius of the industry and graded by tolerance level from these biochemical parameters. Statistical analysis at the 0.05 level of significance showed a difference in APTI values among the 27 plant species, but the species gave homogeneous results when analysed zone-wise using one-way analysis of variance. The individual parameters varied across the different zones surrounding the cement industry, whereas the APTI value (which combines AA, RWC, TChl and pH) showed more or less the same gradation. Significant variation in the individual parameters and APTI was seen within species. All the plants surrounding the cement industry are indicative of high pollution exposure compared with the results obtained for control plants. Based on the APTI values, about 37% of the plant species were tolerant; among them, Mangifera indica, Bougainvillea species and Psidium guajava showed high APTI values. A further 33% of the species were highly susceptible to the adverse effects of SPM, among which Thevetia neriifolia, Saraca indica, Phyllanthus emblica and Cercocarpus ledifolius showed low APTI values. The remaining species, 15% each, were at intermediate and moderate tolerance levels.
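
For reference, the APTI combines the four measured quantities in the form commonly attributed to Singh and Rao, which the abstract does not spell out. The input values below are illustrative, not the study's measurements:

```python
def apti(aa, tchl, ph, rwc):
    """Air Pollution Tolerance Index in the commonly cited Singh & Rao form:
    APTI = [AA * (TChl + pH) + RWC] / 10,
    with AA and TChl in mg/g and RWC in percent (assumed units)."""
    return (aa * (tchl + ph) + rwc) / 10.0

# Illustrative leaf measurements for one hypothetical species.
score = apti(aa=2.5, tchl=1.8, ph=6.2, rwc=78.0)
```

Species are then graded by comparing such scores, higher values indicating greater tolerance to air pollution.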

  15. Guidance for Selecting Input Parameters in Modeling the Environmental Fate and Transport of Pesticides

    EPA Pesticide Factsheets

    Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.

  16. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
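
A compact sketch of the FIM-based ranking idea: build the output sensitivity Jacobian by finite differences, form FIM = JᵀJ/σ² (Gaussian noise assumption), and rank parameters by how much information the data carries about each. The toy exponential model and noise level are assumptions, not the head position-tracking model from the paper:

```python
import numpy as np

def model(theta, t):
    """Hypothetical response model: a * exp(-b * t) + c."""
    a, b, c = theta
    return a * np.exp(-b * t) + c

def fisher_information(theta, t, sigma=0.05, eps=1e-6):
    """FIM from finite-difference output sensitivities, Gaussian noise sigma."""
    base = model(theta, t)
    J = np.empty((t.size, len(theta)))
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[j] += eps
        J[:, j] = (model(tp, t) - base) / eps   # sensitivity to parameter j
    return J.T @ J / sigma**2

theta = [1.0, 2.0, 0.1]
t = np.linspace(0.0, 3.0, 50)
fim = fisher_information(theta, t)
# Rank parameters by the information the data carries about each one.
ranking = np.argsort(np.diag(fim))[::-1]
```

Parameters at the top of `ranking` are the ones worth estimating; the rest can be fixed at preliminary values, shrinking the confidence intervals of the estimated subset.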

  17. Intuitive parameter-free visualization of tumor vascularization using rotating connectivity projections

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Bülow, Thomas; Opfer, Roland; Kabus, Sven; Dharaiya, Ekta

    2008-03-01

    We present an effective and intuitive visualization of the macro-vasculature of a selected nodule or tumor in three-dimensional image data (e.g. CT, MR, US). For the differential diagnosis of nodules, the possible distortion of adjacent vessels is one important clinical criterion. Surface renderings of vessel and tumor segmentations depend critically on the parameter and threshold values chosen for the underlying segmentation. We therefore use rotating Maximum Intensity Projections (MIPs) of a volume of interest (VOI) around the selected tumor. The MIP does not require specific parameters and allows much quicker visual inspection than slicewise navigation, while the rotation gives depth cues to the viewer. Within the vessel network of the VOI, however, not all vessels are connected to the selected tumor, and it is tedious to sort out which adjacent vessels are in fact connected and which are only overlaid by projection. We therefore suggest a simple transformation of the original image values into connectivity values. In the derived connectedness image, each voxel value corresponds to the lowest image value encountered on the highest possible pathway from the tumor to the voxel. The advantage of this visualization is that no implicit binary decision is made as to whether a certain vessel is connected to the tumor; rather, the degree of connectedness is visualized as the brightness of the vessel. Non-connected structures disappear, feebly connected structures appear faint, and strongly connected structures remain at their original brightness. The visualization does not depend on delicate threshold values. Promising results have been achieved for pulmonary nodules in CT.
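
The connectedness transform described above is a maximin (brightest-bottleneck) path problem, computable with a Dijkstra-style max-heap flood fill. A 2-D sketch under that reading of the definition (the paper works on 3-D volumes):

```python
import heapq
import numpy as np

def connectedness(image, seed):
    """Maximin connectivity map: each pixel gets the lowest image value on
    the best (brightest-bottleneck) path from `seed`. Max-heap flood fill,
    a Dijkstra variant; 2-D sketch of the 3-D idea in the paper."""
    conn = np.full(image.shape, -np.inf)
    conn[seed] = image[seed]
    heap = [(-image[seed], seed)]
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        if -neg < conn[r, c]:
            continue                              # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                cand = min(conn[r, c], image[nr, nc])  # path bottleneck
                if cand > conn[nr, nc]:
                    conn[nr, nc] = cand
                    heapq.heappush(heap, (-cand, (nr, nc)))
    return conn

# Toy scene: a bright "tumor" joined to one vessel; a second, detached vessel.
img = np.zeros((5, 7))
img[2, 0:3] = 100.0                               # tumor + connected vessel
img[2, 4:7] = 80.0                                # vessel across a dark gap
conn = connectedness(img, (2, 0))
```

In the result, the connected vessel keeps its brightness, while the detached vessel inherits the dark gap's value and fades out, exactly the behaviour the abstract describes.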

  18. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
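
The accept/reject rule based on the probability ratio to the current state is the Metropolis algorithm; a sketch on a simplified road-load equation with synthetic drive data. All parameter values, noise levels, and step sizes below are illustrative assumptions, not the report's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
RHO, G = 1.2, 9.81                                  # air density, gravity

def road_load(theta, v, a):
    """Simplified road-load equation: inertia + rolling resistance + aero drag."""
    mass, cda, crr = theta
    return mass * a + mass * G * crr + 0.5 * RHO * cda * v**2

# Synthetic "logged" drive data generated from known true parameters.
v = rng.uniform(5, 25, 200)                         # speed (m/s)
a = rng.normal(0, 0.5, 200)                         # acceleration (m/s^2)
true_theta = np.array([12000.0, 6.0, 0.007])        # mass (kg), CdA (m^2), Crr
noise_sd = 500.0                                    # load measurement noise (N)
measured = road_load(true_theta, v, a) + rng.normal(0, noise_sd, 200)

def log_prob(theta):
    """Gaussian log-likelihood with a flat positive prior."""
    if np.any(theta <= 0):
        return -np.inf
    r = measured - road_load(theta, v, a)
    return -0.5 * np.sum((r / noise_sd) ** 2)

# Metropolis random walk: accept with the probability ratio to the current state.
theta = np.array([10000.0, 5.0, 0.01])
step = np.array([100.0, 0.05, 0.0005])
lp = log_prob(theta)
chain = []
for _ in range(4000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_prob(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[1000:])                      # drop burn-in
mass_est = chain[:, 0].mean()
```

The retained chain is exactly the "distribution of parameter sets" the abstract mentions: its spread quantifies how well each parameter is pinned down by the drive cycle.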

  19. Laser confocal microscope for analysis of 3013 inner container closure weld region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rodriguez, M. J.

    As part of the protocol to investigate corrosion in the inner container closure weld region (ICCWR), a laser confocal microscope (LCM) was used to perform close visual examination of the surface and to measure corrosion features on it. However, initial analysis of selected destructively evaluated (DE) containers using the LCM revealed several challenges for acquiring, processing and interpreting the data. These challenges include the topography of the ICCWR sample, its surface features, and the amount of surface area to be covered when collecting data at high magnification. In FY17, the LCM parameters were investigated to identify appropriate parameter values for data acquisition and for identification of regions of interest. Using these parameter values, selected DE containers were analyzed to determine the extent of the ICCWR to be examined.

  20. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  1. Estimation of genetic parameters and breeding values across challenged environments to select for robust pigs.

    PubMed

    Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A

    2015-04-01

    Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms, with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments resulted in a sharp decline in productivity as the level of challenge increased. In contrast, selection using the random regression approach resulted in limited change in productivity with increasing levels of challenge. Hence, we demonstrate that the use of a quantitative measure of environmental CL and a random regression approach can be comprehensively combined for genetic selection of pigs with enhanced ability to maintain high productivity in harsh environments.

  2. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with patient outcome as represented by the Glasgow Outcome Scale (GOS). For this purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. An optimized set of parameters was able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
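
The windowed Fourier-based coherence at the heart of the method can be illustrated with `scipy.signal.coherence` on surrogate ABP/ICP traces that share a slow oscillation. The sampling rate, window length, and frequencies below are illustrative assumptions, not the parameters of the selected correlation analysis itself:

```python
import numpy as np
from scipy.signal import coherence

# Surrogate 1 Hz ABP/ICP traces sharing a slow 0.02 Hz wave plus noise.
rng = np.random.default_rng(0)
fs = 1.0
t = np.arange(2048) / fs
slow_wave = np.sin(2 * np.pi * 0.02 * t)
abp = slow_wave + 0.5 * rng.normal(size=t.size)
icp = slow_wave + 0.5 * rng.normal(size=t.size)

# Windowed (Welch-averaged) coherence: the window length nperseg is one of
# the parameters such an analysis would need to tune.
f, Cxy = coherence(abp, icp, fs=fs, nperseg=256)
peak_coh = Cxy[np.argmin(np.abs(f - 0.02))]       # coherence at the shared wave
noise_coh = Cxy[f > 0.3].mean()                   # background coherence level
```

High coherence concentrated at the shared slow-wave frequency, against a low noise floor, is the kind of signature the SCP index aggregates over windows.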

  4. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  5. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  6. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  7. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
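The calibration approach in the CLM records above can be caricatured with a random-walk Metropolis sampler. The block below is a minimal sketch under strong simplifying assumptions: a flat prior, a toy "model" whose prediction is the parameter itself, and synthetic fluxes. The actual study samples a surrogate (emulator) of CLM outputs conditioned on observed latent heat fluxes; none of the numbers here come from it.

```python
import numpy as np

def log_likelihood(theta, obs, sigma):
    """Gaussian log-likelihood under a toy model whose predicted flux
    equals the parameter itself (stand-in for a CLM surrogate)."""
    return -0.5 * np.sum((obs - theta) ** 2) / sigma ** 2

def metropolis(obs, sigma=5.0, n_steps=20000, step=2.0, seed=1):
    """Random-walk Metropolis sampler for the posterior of theta
    under a flat prior."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    ll = log_likelihood(theta, obs, sigma)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + rng.normal(0, step)
        ll_prop = log_likelihood(prop, obs, sigma)
        if np.log(rng.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain

# Synthetic 'observed climatological fluxes' centred on 80 W m^-2.
rng = np.random.default_rng(0)
obs = rng.normal(80.0, 5.0, size=30)
chain = metropolis(obs)
posterior = chain[5000:]                         # discard burn-in
mode_est = posterior.mean()
lo, hi = np.percentile(posterior, [2.5, 97.5])   # 95% credible interval
```

The `(lo, hi)` pair plays the role of the tabulated 95% credibility intervals in the abstracts.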

  8. Gene flow from domesticated species to wild relatives: migration load in a model of multivariate selection.

    PubMed

    Tufto, Jarle

    2010-01-01

    Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.

  9. Coated or doped carbon nanotube network sensors as affected by environmental parameters

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor)

    2011-01-01

    Methods for using modified single wall carbon nanotubes ("SWCNTs") to detect presence and/or concentration of a gas component, such as a halogen (e.g., Cl.sub.2), hydrogen halides (e.g., HCl), a hydrocarbon (e.g., C.sub.nH.sub.2n+2), an alcohol, an aldehyde or a ketone, to which an unmodified SWCNT is substantially non-reactive. In a first embodiment, a connected network of SWCNTs is coated with a selected polymer, such as chlorosulfonated polyethylene, hydroxypropyl cellulose, polystyrene and/or polyvinylalcohol, and change in an electrical parameter or response value (e.g., conductance, current, voltage difference or resistance) of the coated versus uncoated SWCNT networks is analyzed. In a second embodiment, the network is doped with a transition element, such as Pd, Pt, Rh, Ir, Ru, Os and/or Au, and change in an electrical parameter value is again analyzed. The parameter change value depends monotonically, not necessarily linearly, upon concentration of the gas component. Two general algorithms are presented for estimating concentration value(s), or upper or lower concentration bounds on such values, from measured differences of response values.
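The patent's monotonic (not necessarily linear) response-to-concentration relationship suggests inversion by interpolation against a calibration curve. The sketch below is an illustration of that idea only, not one of the patent's two disclosed algorithms; the calibration points and gas are hypothetical.

```python
import numpy as np

def estimate_concentration(response, cal_conc, cal_resp):
    """Invert a monotonic calibration curve by interpolation: given a
    measured parameter change (e.g. relative conductance change),
    return the estimated gas concentration.  Readings outside the
    calibrated range yield a concentration bound, not a point value."""
    cal_conc = np.asarray(cal_conc, float)
    cal_resp = np.asarray(cal_resp, float)
    if response <= cal_resp[0]:
        return ("upper_bound", cal_conc[0])     # at most the lowest calibrated level
    if response >= cal_resp[-1]:
        return ("lower_bound", cal_conc[-1])    # at least the highest calibrated level
    return ("estimate", float(np.interp(response, cal_resp, cal_conc)))

# Hypothetical calibration: conductance change vs Cl2 concentration (ppm).
conc = [0.5, 1.0, 2.0, 5.0, 10.0]
resp = [0.02, 0.05, 0.09, 0.16, 0.22]   # monotonic but nonlinear
kind, value = estimate_concentration(0.125, conc, resp)
```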

  10. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are timeconsuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
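ENFAD's inner loop — vary filter parameters and score Monte Carlo performance — can be sketched for the simplest possible case, a scalar random-walk state. Everything here (the grid, the noise levels, `tune_q`) is an invented illustration, not the ENFAD design; the error-state selection loop would wrap around a routine like this.

```python
import numpy as np

def run_filter(zs, q, r):
    """Scalar Kalman filter for a random-walk state observed in noise.
    q: assumed process-noise variance (the tuning parameter),
    r: measurement-noise variance.  Returns filtered estimates."""
    x, p = 0.0, 1.0
    est = np.empty(len(zs))
    for i, z in enumerate(zs):
        p += q                # predict
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update
        p *= (1 - k)
        est[i] = x
    return est

def tune_q(q_grid, r=1.0, true_q=0.05, n_runs=20, n_steps=200, seed=0):
    """Pick the q with the lowest average RMSE over Monte Carlo runs."""
    rng = np.random.default_rng(seed)
    rmse = {q: 0.0 for q in q_grid}
    for _ in range(n_runs):
        truth = np.cumsum(rng.normal(0, np.sqrt(true_q), n_steps))
        zs = truth + rng.normal(0, np.sqrt(r), n_steps)
        for q in q_grid:
            est = run_filter(zs, q, r)
            rmse[q] += np.sqrt(np.mean((est - truth) ** 2)) / n_runs
    return min(rmse, key=rmse.get), rmse

best_q, rmse = tune_q([0.001, 0.05, 1.0])
```

A mismatched q either lags the state (too small) or tracks measurement noise (too large), so the matched value wins the sweep.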

  11. Estimation of genetic parameters and response to selection for a continuous trait subject to culling before testing.

    PubMed

    Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A

    2012-02-01

    The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.

  12. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
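The second selection strategy above (the discrepancy principle) can be sketched on a toy l1 problem, assuming the measurement-noise variance is known. The solver below is plain ISTA and the sensitivity matrix is random; neither is taken from the paper.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

def pick_lambda(A, b, noise_var, lam_grid):
    """Discrepancy principle: choose the lambda whose residual variance
    is closest to the (known) measurement-noise variance."""
    best, best_gap = None, np.inf
    for lam in lam_grid:
        r = A @ ista(A, b, lam) - b
        gap = abs(r.var() - noise_var)
        if gap < best_gap:
            best, best_gap = lam, gap
    return best

# Synthetic sparse 'damage' vector observed through a random sensitivity matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 20))
x_true = np.zeros(20); x_true[[3, 11]] = [1.5, -2.0]   # two damaged elements
sigma = 0.1
b = A @ x_true + rng.normal(0, sigma, 60)
lam = pick_lambda(A, b, sigma**2, [0.01, 0.1, 1.0, 10.0])
x_hat = ista(A, b, lam)
```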

  13. A systematic approach to parameter selection for CAD-virtual reality data translation using response surface methodology and MOGA-II.

    PubMed

    Abidi, Mustufa Haider; Al-Ahmari, Abdulrahman; Ahmad, Ali

    2018-01-01

    Advanced graphics capabilities have enabled the use of virtual reality as an efficient design technique. The integration of virtual reality in the design phase still faces impediments because of issues linked to the integration of CAD and virtual reality software. A set of empirical tests using the selected conversion parameters was found to yield properly represented virtual reality models. The reduced model yields an R-sq (pred) value of 72.71% and an R-sq (adjusted) value of 86.64%, indicating that 86.64% of the response variability can be explained by the model. The R-sq (pred) is 67.45%, which is not very high, indicating that the model should be further reduced by eliminating insignificant terms. The reduced model yields an R-sq (pred) value of 73.32% and an R-sq (adjusted) value of 79.49%, indicating that 79.49% of the response variability can be explained by the model. Using the optimization software MODE Frontier (Optimization, MOGA-II, 2014), four types of response surfaces for the three considered response variables were tested for the data of DOE. The parameter values obtained using the proposed experimental design methodology result in better graphics quality, and other necessary design attributes.

  14. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and accent the importance of radiobiological parameter selection and implementation to the normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, a minimum and maximum set of radiobiological parameter sets was selected from the overall published sets applied in literature and a theoretical mean parameter set was computed. In order to investigate the potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and the LKB model, estimating radiation induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was Radiation Pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models with their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviation) are included in the models, their area of coincidence might be enlarged, constraining even greater their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definite but should be accompanied by uncertainties, and one should be very careful when applying them to the NTCP models. Correct selection and proper implementation of published parameters provides a quite accurate fit of the NTCP models to the considered endpoint.
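The LKB model discussed above is compact enough to state in code: a probit dose-response in the generalized equivalent uniform dose (EUD). The sketch below uses the standard form; the two parameter pairs are illustrative values in the range quoted in the literature for lung, which is precisely the kind of spread between published sets that the abstract warns about.

```python
import math

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: t = (EUD - TD50) / (m * TD50);
    NTCP = Phi(t), the standard normal CDF, via erf."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Two illustrative published-style (TD50 [Gy], m) sets for pneumonitis;
# different sets give visibly different complication probabilities at
# the same mean lung dose (20 Gy here, also just an example).
for td50, m in [(24.5, 0.18), (30.8, 0.37)]:
    p = lkb_ntcp(20.0, td50, m)
```

By construction, NTCP is exactly 0.5 when EUD equals TD50, and m controls the steepness of the response curve.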

  15. Empirical flow parameters : a tool for hydraulic model validity

    USGS Publications Warehouse

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
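Two of the ancillary quantities named in objective (3), the Froude number and cross-section stream power, follow from textbook formulas; the numeric inputs below are made-up examples, not project data.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
RHO = 1000.0      # water density, kg/m^3

def froude(v, depth):
    """Froude number for mean velocity v (m/s) and hydraulic depth (m)."""
    return v / math.sqrt(G * depth)

def stream_power(discharge, slope):
    """Cross-section stream power (W/m): rho * g * Q * S."""
    return RHO * G * discharge * slope

def regime(fr):
    """Flow regime classification from the Froude number."""
    return "supercritical" if fr > 1.0 else "subcritical" if fr < 1.0 else "critical"

fr = froude(v=1.2, depth=0.8)                     # example storm-flow check
omega = stream_power(discharge=15.0, slope=0.002)
```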

  16. FREQ: A computational package for multivariable system loop-shaping procedures

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Armstrong, Ernest S.

    1989-01-01

    Many approaches in the field of linear, multivariable time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and singular values are plotted against frequency. Example computations are presented to demonstrate the use of the FREQ code.
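The core computation a package like FREQ automates — singular values of a transfer matrix over a frequency sweep — fits in a few lines of numpy. The state-space matrices below are a toy first-order lag chosen for checkability, not a FREQ example.

```python
import numpy as np

def sigma_plot_data(A, B, C, D, freqs):
    """Singular values of G(jw) = C (jwI - A)^-1 B + D at each
    frequency in freqs (rad/s) -- the data behind a sigma plot."""
    n = A.shape[0]
    out = []
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        out.append(np.linalg.svd(G, compute_uv=False))
    return np.array(out)

# First-order lag G(s) = 1/(s+1): gain 1 at DC, 1/sqrt(2) at 1 rad/s.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
sv = sigma_plot_data(A, B, C, D, freqs=[0.0, 1.0, 10.0])
```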

  17. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data

    PubMed Central

    2014-01-01

    Background: Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods: This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results: The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions: Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
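The threshold convention tested in this record (r as a multiple of the sample standard deviation, template length m = 2) can be illustrated with a minimal sample-entropy sketch. This is a textbook implementation, not the study's code, and the two test signals are synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy with threshold r = r_factor * std(x), Chebyshev
    distance, self-matches excluded: SampEn = -ln(A/B), where B counts
    template matches of length m and A of length m + 1."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
t = np.arange(400)
regular = np.sin(2 * np.pi * t / 25)   # periodic signal: low entropy
irregular = rng.normal(size=400)       # white noise: high entropy
```

As the abstract cautions, the comparison between two signals is more informative than any single entropy value.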

  18. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    PubMed

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.

  19. Testable solution of the cosmological constant and coincidence problems

    NASA Astrophysics Data System (ADS)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-01

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^(-2) [≈ 10^(-120) in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ~ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.

  20. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
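The AIC step of this methodology can be illustrated on a toy model-selection problem, with polynomial degree standing in for the number of re-estimated FSPM parameters. The data, degrees, and noise level below are invented; only the AIC formula (2k + n·ln(RSS/n) for least squares with Gaussian errors) is standard.

```python
import numpy as np

def aic(n, rss, k):
    """Akaike information criterion for a least-squares fit with k
    estimated parameters: AIC = 2k + n * ln(RSS / n)."""
    return 2 * k + n * np.log(rss / n)

def best_degree(x, y, degrees):
    """Fit a polynomial of each degree, score with AIC, and return the
    degree with the lowest AIC -- the analogue of comparing
    calibrations with different subsets of free parameters."""
    scores = {}
    for d in degrees:
        coef = np.polyfit(x, y, d)
        rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
        scores[d] = aic(len(x), rss, d + 1)
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 40)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, x.size)  # quadratic truth
deg, scores = best_degree(x, y, [1, 2, 5])
```

The penalty term 2k is what lets the criterion prefer a model with fewer re-estimated parameters when the fit improvement is marginal.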

  1. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.

    2012-06-05

    An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with a rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  2. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL

    2011-11-08

    Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with a rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  3. Relationships and redundancies of selected hemodynamic and structural parameters for characterizing virtual treatment of cerebral aneurysms with flow diverter devices.

    PubMed

    Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q

    2016-07-26

    To quantify the relationship and to demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms where FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal and distal to and at the aneurysm ostium as well as inside the aneurysm were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies between both structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results by either including all (n=22) or only non-redundant parameters (n=2, 3 and 4). Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis.

  4. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades, making better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDIs), which reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not, and we outline continued experiments to vet the method.
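    The paper uses an affine invariant ensemble sampler with many walkers; as a minimal stand-in, the sketch below runs a single-chain random-walk Metropolis sampler with a Student's t log-likelihood on a toy one-parameter location model. The data, step size, and degrees of freedom are invented:

```python
import math
import random

random.seed(0)

def log_t_likelihood(theta, data, nu=4.0):
    """Log-likelihood of residuals (x - theta) under a Student's t
    distribution with nu degrees of freedom (constants dropped)."""
    return sum(-(nu + 1) / 2 * math.log(1 + (x - theta) ** 2 / nu)
               for x in data)

def metropolis(data, n_steps=5000, step=0.5, theta0=0.0):
    """Random-walk Metropolis: propose, then accept with probability
    min(1, L(prop)/L(current)); the chain samples the posterior
    under a flat prior."""
    theta, ll = theta0, log_t_likelihood(theta0, data)
    samples = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0, step)
        ll_prop = log_t_likelihood(prop, data)
        if math.log(random.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

data = [1.8, 2.1, 2.0, 2.3, 1.9, 2.2]   # hypothetical observations
samples = metropolis(data)
burned = samples[1000:]                  # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

    The retained samples approximate the parameter's posterior, from which highest-density intervals can be read off.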

  5. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-04-04

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  6. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-10-25

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  7. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, little attention has been given to the selection of optimal sampling times for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values, nor does it easily become trapped in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
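    The underlying criterion, choosing sampling times that minimize parameter-estimate variance, can be illustrated with the Fisher information of a one-parameter exponential-decay model (variance scales inversely with information). This sketch replaces the paper's quantum-inspired evolutionary algorithm with a simple comparison of hypothetical sampling schedules:

```python
import math

def info_for_times(times, k=0.5):
    """Fisher information for the rate k in y(t) = exp(-k*t) with
    unit-variance noise: I(k) = sum of (dy/dk)^2 = (t*exp(-k*t))^2."""
    return sum((t * math.exp(-k * t)) ** 2 for t in times)

# Candidate sampling schedules (hypothetical times); well-placed points
# are more informative than equally many poorly placed ones.
schedules = {
    "early":  [0.1, 0.2, 0.3, 0.4],
    "spread": [0.5, 1.5, 2.5, 3.5],
    "late":   [8.0, 9.0, 10.0, 11.0],
}
best = max(schedules, key=lambda name: info_for_times(schedules[name]))
print(best)
```

    For this model the sensitivity t*exp(-k*t) peaks near t = 1/k, so points clustered too early or too late both lose information.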

  8. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
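    The sum of ranking differences merit itself is simple to compute; a minimal sketch without tie handling (all values invented):

```python
def srd(values, reference):
    """Sum of ranking differences: total absolute difference between
    the ranking induced by a model's values and the reference ranking.
    0 means identical ordering; larger means less consistent."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos
        return r
    return sum(abs(a - b) for a, b in zip(ranks(values), ranks(reference)))

reference  = [0.70, 0.75, 0.80, 0.90]   # e.g. a consensus merit per model
consistent = [0.1, 0.2, 0.3, 0.4]       # same ordering as the reference
opposite   = [0.4, 0.3, 0.2, 0.1]       # reversed ordering
print(srd(consistent, reference), srd(opposite, reference))
```

    In practice the reference column is often a row-wise average or other consensus, and SRD values are validated against the distribution obtained from random rankings.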

  9. Growth curves of carcass traits obtained by ultrasonography in three lines of Nellore cattle selected for body weight.

    PubMed

    Coutinho, C C; Mercadante, M E Z; Jorge, A M; Paz, C C P; El Faro, L; Monteiro, F M

    2015-10-30

    The effect of selection for postweaning weight was evaluated through the growth curve parameters for both growth and carcass traits. Records of 2404 Nellore animals from three selection lines were analyzed: two selection lines for high postweaning weight, selection (NeS) and traditional (NeT); and a control line (NeC) in which animals were selected for postweaning weight close to the average. Body weight (BW), hip height (HH), rib eye area (REA), back fat thickness (BFT), and rump fat thickness (RFT) were measured, with records collected from animals 8 to 20 (males) and 11 to 26 (females) months of age. The parameters A (asymptotic value) and k (growth rate) were estimated using the nonlinear model procedure of the Statistical Analysis System program, which included the fixed effect of line (NeS, NeT, and NeC) in the model, with the objective of evaluating differences in the estimated parameters between lines. Selected animals (NeS and NeT) showed higher growth rates than control line animals (NeC) for all traits. The line effect on curve parameters was significant (P < 0.001) for BW, HH, and REA in males, and for BFT and RFT in females. Selection for postweaning weight was effective in altering growth curves, resulting in animals with higher growth potential.
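    The Gompertz parameters A and k were estimated with the SAS nonlinear-model procedure; as a rough stand-in, the sketch below fits them by grid-search least squares on synthetic data (the integration constant b is held fixed, and all numbers are invented):

```python
import math

def gompertz(t, A, b, k):
    """Gompertz growth curve: asymptote A, integration constant b,
    maturation rate k."""
    return A * math.exp(-b * math.exp(-k * t))

# Synthetic "observations" generated from known parameters, so the
# fit can be checked against the truth.
true_A, true_k = 470.0, 0.12
b = 2.0
data = [(t, gompertz(t, true_A, b, true_k)) for t in (8, 12, 16, 20, 26)]

def sse(A, k):
    """Sum of squared errors of the curve against the data."""
    return sum((w - gompertz(t, A, b, k)) ** 2 for t, w in data)

# Coarse grid search over asymptote A and growth rate k
A_hat, k_hat = min(((A, k) for A in range(400, 551, 10)
                    for k in [i / 100 for i in range(5, 31)]),
                   key=lambda p: sse(*p))
print(A_hat, k_hat)
```

    A real analysis would use a proper nonlinear least-squares routine and estimate b as well; the grid search only shows what the objective looks like.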

  10. Low Shrinkage Cement Concrete Intended for Airfield Pavements

    NASA Astrophysics Data System (ADS)

    Małgorzata, Linek

    2017-10-01

    The work concerns the improvement of hardened concrete parameters for airfield pavements. Factors with a direct or indirect influence on the magnitude of rheological deformation were of particular interest. The aim of laboratory testing was to select a concrete mixture ratio that would make hardened concrete less susceptible to the influence of basic operating factors. Analyses included two research groups, covering selected external and internal factors that increase the rheological deformations of hardened cement concrete. The research concerned an innovative cement concrete intended for airfield pavements. Reflecting pavement operation, the research considered the influence of weather conditions and forced thermal loads that intensify concrete stress. Fresh concrete mixture parameters were tested and the basic parameters of hardened concrete were determined (density, absorbability, compressive strength, tensile strength), together with the influence of these factors on rheological deformation values. Based on the test results, the innovative concrete, made with a modifier that changes the internal structure of the concrete composite, showed markedly lower rheological deformation values. The observed changes in microstructure, together with the reduced deformation values, support the conclusion that the newly designed cement concrete has advantageous characteristics. Applying such concrete in airfield construction may extend failure-free operation and increase general service life.

  11. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
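    The reported percent standard errors compare measured and simulated values; a minimal sketch using a root-mean-square relative-error definition (the report's exact formula may differ, and the numbers below are invented):

```python
import math

def pct_standard_error(measured, simulated):
    """Percent standard error of estimate as the RMS relative
    difference between simulated and measured values
    (an illustrative definition)."""
    n = len(measured)
    return 100 * math.sqrt(sum(((s - m) / m) ** 2
                               for m, s in zip(measured, simulated)) / n)

# Hypothetical peak discharges: measured vs simulated
meas = [100.0, 250.0, 80.0]
sim = [120.0, 230.0, 90.0]
print(round(pct_standard_error(meas, sim), 1))
```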

  12. Optimization of biological sulfide removal in a CSTR bioreactor.

    PubMed

    Roosta, Aliakbar; Jahanmiri, Abdolhossein; Mowla, Dariush; Niazi, Ali; Sotoodeh, Hamidreza

    2012-08-01

    In this study, biological sulfide removal from natural gas in a continuous bioreactor is investigated to estimate the optimal operational parameters. In the reactions involved, sulfide can be converted to elemental sulfur, sulfate, thiosulfate, and polysulfide, of which elemental sulfur is the desired product. A mathematical model is developed and used to investigate the effect of various parameters on elemental sulfur selectivity. The simulation results show that elemental sulfur selectivity is a function of dissolved oxygen, sulfide load, pH, and concentration of bacteria. Optimal parameter values are calculated for maximum elemental sulfur selectivity by using a genetic algorithm as an adaptive heuristic search. Under the optimal conditions, 87.76% of the sulfide loaded to the bioreactor is converted to elemental sulfur.
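    A genetic algorithm of the kind used for this optimization can be sketched in a few lines; the selectivity surface below is a toy stand-in for the bioreactor model, with an invented optimum at O2 = 0.1 and pH = 8.5:

```python
import random

random.seed(1)

def selectivity(o2, ph):
    """Toy stand-in for the simulated sulfur-selectivity surface,
    peaking at o2 = 0.1 and ph = 8.5 (illustrative values only)."""
    return 0.88 - 10 * (o2 - 0.1) ** 2 - 0.05 * (ph - 8.5) ** 2

def genetic_search(n_pop=40, n_gen=60, mut=0.1):
    """Minimal GA: truncation selection, averaging crossover,
    Gaussian mutation."""
    pop = [(random.uniform(0, 1), random.uniform(6, 10)) for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: selectivity(*ind), reverse=True)
        parents = pop[: n_pop // 2]            # keep the fitter half
        children = []
        while len(children) < n_pop - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, mut * 0.1),
                     (a[1] + b[1]) / 2 + random.gauss(0, mut))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: selectivity(*ind))

o2, ph = genetic_search()
```

    In the paper the fitness evaluation would be a run of the bioreactor model rather than a closed-form surface.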

  13. Demand theory of gene regulation. II. Quantitative application to the lactose and maltose operons of Escherichia coli.

    PubMed Central

    Savageau, M A

    1998-01-01

    Induction of gene expression can be accomplished either by removing a restraining element (negative mode of control) or by providing a stimulatory element (positive mode of control). According to the demand theory of gene regulation, which was first presented in qualitative form in the 1970s, the negative mode will be selected for the control of a gene whose function is in low demand in the organism's natural environment, whereas the positive mode will be selected for the control of a gene whose function is in high demand. This theory has now been further developed in a quantitative form that reveals the importance of two key parameters: cycle time C, which is the average time for a gene to complete an ON/OFF cycle, and demand D, which is the fraction of the cycle time that the gene is ON. Here we estimate nominal values for the relevant mutation rates and growth rates and apply the quantitative demand theory to the lactose and maltose operons of Escherichia coli. The results define regions of the C vs. D plot within which selection for the wild-type regulatory mechanisms is realizable, and these in turn provide the first estimates for the minimum and maximum values of demand that are required for selection of the positive and negative modes of gene control found in these systems. The ratio of mutation rate to selection coefficient is the most relevant determinant of the realizable region for selection, and the most influential parameter is the selection coefficient that reflects the reduction in growth rate when there is superfluous expression of a gene. The quantitative theory predicts the rate and extent of selection for each mode of control. It also predicts three critical values for the cycle time. The predicted maximum value for the cycle time C is consistent with the lifetime of the host. The predicted minimum value for C is consistent with the time for transit through the intestinal tract without colonization. Finally, the theory predicts an optimum value of C that is in agreement with the observed frequency for E. coli colonizing the human intestinal tract. PMID:9691028

  14. Integrating economic parameters into genetic selection for Large White pigs.

    PubMed

    Dube, Bekezela; Mulugeta, Sendros D; Dzama, Kennedy

    2013-08-01

    The objective of the study was to integrate economic parameters into genetic selection for sow productivity, growth performance and carcass characteristics in South African Large White pigs. Simulation models for sow productivity and terminal production systems were performed based on a hypothetical 100-sow herd, to derive economic values for the economically relevant traits. The traits included in the study were number born alive (NBA), 21-day litter size (D21LS), 21-day litter weight (D21LWT), average daily gain (ADG), feed conversion ratio (FCR), age at slaughter (AGES), dressing percentage (DRESS), lean content (LEAN) and backfat thickness (BFAT). Growth of a pig was described by the Gompertz growth function, while feed intake was derived from the nutrient requirements of pigs at the respective ages. Partial budgeting and partial differentiation of the profit function were used to derive economic values, which were defined as the change in profit per unit genetic change in a given trait. The respective economic values (ZAR) were: 61.26, 38.02, 210.15, 33.34, -21.81, -68.18, 5.78, 4.69 and -1.48. These economic values indicated the direction and emphases of selection, and were sensitive to changes in feed prices and marketing prices for carcasses and maiden gilts. Economic values for NBA, D21LS, DRESS and LEAN decreased with increasing feed prices, suggesting a point where genetic improvement would be a loss, if feed prices continued to increase. The economic values for DRESS and LEAN increased as the marketing prices for carcasses increased, while the economic value for BFAT was not sensitive to changes in all prices. Reductions in economic values can be counterbalanced by simultaneous increases in marketing prices of carcasses and maiden gilts. Economic values facilitate genetic improvement by translating it to proportionate profitability. Breeders should, however, continually recalculate economic values to place the most appropriate emphases on the respective traits during genetic selection.

  15. Optical analysis of suspended particles in the cerebrospinal fluid obtained by puncture from patients diagnosed with the disorders of cerebrospinal fluid (CSF) circulation

    NASA Astrophysics Data System (ADS)

    Staroń, Waldemar; Herbowski, Leszek; Gurgul, Henryk

    2007-04-01

    The goal of this work was to determine the values of cumulative parameters of the cerebrospinal fluid. These parameter values statistically characterise cerebrospinal fluid obtained by puncture from patients diagnosed for suspected normotensive hydrocephalus. The cerebrospinal fluid taken by puncture for routine examinations of patients suspected of normotensive hydrocephalus was analysed. The paper presents results of examinations of several dozen puncture samples of cerebrospinal fluid from various patients. Each sample was examined under the microscope and photographed in 20 randomly chosen places. On the basis of analysis of the images, each showing an area of 100 × 100 μm, selected cumulative parameters such as count, numerical density, field area and field perimeter were determined for each sample, and the average value of each parameter was then determined.

  16. Comparison of Mann-Kendall and innovative trend method for water quality parameters of the Kizilirmak River, Turkey

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Ay, Murat

    2014-05-01

    Low, medium and high values of a parameter are very important in climatological, meteorological and hydrological events. Moreover, these values are used to set various design parameters, on both scientific grounds and in real applications, everywhere in the world. With this in mind, a new trend method recently proposed by Şen was applied to the water quality parameters pH, T, EC, Na+, K+, CO3-2, HCO3-, Cl-, SO4-2, B+3 and Q recorded at five different stations (station numbers and locations: 1535-Sogutluhan (Sivas), 1501-Yamula (Kayseri), 1546-Tuzkoy (Kayseri), 1503-Yahsihan (Kirsehir), and 1533-Inozu (Samsun)) selected from the Kizilirmak River in Turkey. Low, medium and high values of the parameters were graphically evaluated with this method. For comparison purposes, the Mann-Kendall (MK) trend test was also applied to the same data, and the differences between the two trend tests were emphasised. It was found that the Şen trend test had several advantages over the MK trend test. The results also revealed that the Şen trend test can be used successfully for trend analysis of water quality parameters, especially for evaluating low, medium and high values of data.
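    The Mann-Kendall test used for comparison is straightforward to compute; a minimal sketch of its S statistic (series values invented; tie correction and the variance-based significance test are omitted):

```python
def mann_kendall_s(x):
    """Mann-Kendall S statistic: the count of concordant minus
    discordant pairs over all i < j. S > 0 suggests an increasing
    trend, S < 0 a decreasing one."""
    s = 0
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

rising = [1.0, 1.2, 1.1, 1.5, 1.7, 2.0]    # hypothetical water-quality series
falling = [2.0, 1.8, 1.9, 1.4, 1.1, 1.0]
print(mann_kendall_s(rising), mann_kendall_s(falling))
```

    A full MK test would compare S against its variance under the no-trend null; the Şen method instead splits the series in half and compares the sorted halves graphically.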

  17. Testable solution of the cosmological constant and coincidence problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-15

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ≈ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters, does not require the introduction of new dynamical scalar fields or modifications to general relativity, and can be tested by astronomical observations in the near future.

  18. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214

  19. Method for computing self-consistent solution in a gun code

    DOEpatents

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
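    The alternating-relaxation idea can be illustrated on a toy 2×2 linear fixed-point iteration; the matrix, eigenvalues, and starting point below are invented, and a real gun code would estimate the error eigenvalues from residuals rather than know them exactly:

```python
# Fixed-point iteration x <- M x + b whose error eigenvalues are
# 0.9 and -0.5 (diagonal M for clarity); plain iteration converges
# slowly because of the 0.9 mode. Exact solution: x* = (1.0, 1.0).
M = [[0.9, 0.0], [0.0, -0.5]]
b = [0.1, 1.5]

def relaxed_step(x, omega):
    """x <- x + omega * (M x + b - x). For an error component with
    eigenvalue lam, one step multiplies it by 1 + omega*(lam - 1),
    so omega = 1/(1 - lam) annihilates that component."""
    r = [M[i][0] * x[0] + M[i][1] * x[1] + b[i] - x[i] for i in range(2)]
    return [x[i] + omega * r[i] for i in range(2)]

x = [0.0, 0.0]
# Alternate the two relaxation values, one per identified eigenvalue;
# note that omega = 10 would be unstable if used alone on the -0.5 mode.
for omega in (1 / (1 - 0.9), 1 / (1 + 0.5)):
    x = relaxed_step(x, omega)
print([round(v, 6) for v in x])
```

    With both eigenvalues known exactly the two alternating steps remove both error components, which is the effect the patent exploits approximately in successive iterations.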

  20. Monte Carlo Solution to Find Input Parameters in Systems Design Problems

    NASA Astrophysics Data System (ADS)

    Arsham, Hossein

    2013-06-01

    Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
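    The goal-seeking problem, choosing the controllable input so that a noisy simulation hits a target output, can be sketched with Robbins-Monro stochastic approximation; this is a generic stand-in rather than the paper's single-run polynomial method, and the response function is invented:

```python
import random

random.seed(2)

def simulate(x):
    """One noisy simulation run of the performance function
    (here a hypothetical increasing linear response 2*x plus noise)."""
    return 2 * x + random.gauss(0, 0.5)

def goal_seek(target, x0=0.0, n_iter=2000):
    """Robbins-Monro stochastic approximation: nudge the input toward
    the value where the expected simulated output equals the target."""
    x = x0
    for n in range(1, n_iter + 1):
        gain = 1.0 / n              # diminishing gain sequence
        # If the output is below target, increase x (response increasing)
        x += gain * (target - simulate(x))
    return x

x_star = goal_seek(target=10.0)     # true root is x = 5.0 for this response
```

    The diminishing gains average out the simulation noise, so the iterate settles at the input whose expected output matches the target.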

  1. Nanostructure Sensing and Transmission of Gas Data

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor)

    2011-01-01

    A system for receiving, analyzing and communicating results of sensing chemical and/or physical parameter values, using wireless transmission of the data. Presence or absence of one or more of a group of selected chemicals in a gas or vapor is determined using suitably functionalized carbon nanostructures that are exposed to the gas. One or more physical parameter values, such as temperature, vapor pressure, relative humidity and distance from a reference location, are also sensed for the gas, using nanostructures and/or microstructures. All parameter values are transmitted wirelessly to a data processing site or to a control site, using an interleaving pattern for data received from different sensor groups and using, for example, the IEEE 802.11 or 802.15 protocol. Methods for estimating chemical concentration are discussed.

  2. Method and system for assigning a confidence metric for automated determination of optic disc location

    DOEpatents

    Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN

    2012-07-10

    A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value, and a low confidence level can be assigned if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value is selected to represent an acceptable risk of misdiagnosis, by the automated technique, of a disease having retinal manifestations.
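    The confidence assignment can be sketched directly from the abstract; the coordinate values, pixel units, and cut-off below are hypothetical:

```python
import math

def confidence_level(loc_a, loc_b, primary_cutoff=25.0):
    """Assign confidence from the agreement of two independent
    optic-disc detectors: here the accuracy parameter is taken as
    the distance (hypothetical pixel units) between their estimates."""
    accuracy = math.dist(loc_a, loc_b)
    return "high" if accuracy < primary_cutoff else "low"

print(confidence_level((120, 80), (126, 88)))    # detectors agree
print(confidence_level((120, 80), (300, 150)))   # detectors disagree
```

    The cut-off would in practice be tuned so that "high"-confidence images carry an acceptably low risk of misdiagnosis.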

  3. Self selected speed and maximal lactate steady state speed in swimming.

    PubMed

    Baron, B; Dekerle, J; Depretz, S; Lefevre, T; Pelayo, P

    2005-03-01

    The purposes of this study were to ascertain whether physiological and stroking parameters remain stable during a 2-hour exercise performed at self-selected swimming speed (S4) and whether this speed corresponds to that associated with the maximal lactate steady state (SMLSS). Ten well-trained competitive swimmers performed a maximal 400-m front crawl test, four 30-min swimming tests to determine SMLSS, and a 2-hour test swum at their preferred pace to determine self-selected swimming speed (S4), stroke rate (SR4), and stroke length (SL4), defined as the mean values observed between the 5th and the 15th min of this test. The stroking, metabolic and respiratory parameters, and ratings of perceived exertion (CR10) were recorded throughout the 2-hour test. S4 and SMLSS were not significantly different and were highly correlated (r=0.891). S4 and SL4 decreased significantly after steady states of 68 min and 100 min, respectively, whereas SR4 remained constant. Mean oxygen uptake, carbon dioxide output, and heart rate values did not change significantly between the 10th and 120th minute of the test, whereas capillary blood lactate concentration (La) decreased significantly (p<0.05). Moreover, respiratory CR10 did not change significantly between the 10th and the 120th minute of the test, whereas general CR10 and muscular CR10 increased significantly. Considering the variations in La, SL4 and CR10 values, muscular parameters and probable glycogen depletion seem to be the main limiting factors preventing maintenance of the self-selected swimming speed.

  4. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. Interpreting such variability across multiple solutions and selecting the most appropriate one can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce a unique, easy-to-interpret NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using a forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
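    The parsimony/goodness-of-fit trade-off can be sketched as AIC-guided forward selection of relaxation components; this greedy single-component refit is a simplification of a full stepwise regression, and the candidate T2 values and synthetic signal are invented:

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (constants dropped)."""
    return n * math.log(rss / n) + 2 * k

times = [0.1 * i for i in range(1, 41)]
# Synthetic relaxation decay built from two components (T2 = 0.5 and 2.0)
signal = [0.6 * math.exp(-t / 0.5) + 0.4 * math.exp(-t / 2.0) for t in times]

candidates = [0.25, 0.5, 1.0, 2.0, 4.0]     # candidate T2 values
chosen, residual = [], signal[:]
n = len(times)
best_aic = aic(sum(v * v for v in residual) + 1e-12, n, 0)
while len(chosen) < len(candidates):
    trials = []
    for T in (c for c in candidates if c not in chosen):
        phi = [math.exp(-t / T) for t in times]
        # Best single-component amplitude against the current residual
        amp = sum(r * p for r, p in zip(residual, phi)) / sum(p * p for p in phi)
        new_res = [r - amp * p for r, p in zip(residual, phi)]
        rss = sum(v * v for v in new_res) + 1e-12
        trials.append((aic(rss, n, len(chosen) + 1), T, new_res))
    cand_aic, T_best, new_res = min(trials)
    if cand_aic >= best_aic:
        break          # parsimony: stop once AIC no longer improves
    best_aic, residual = cand_aic, new_res
    chosen.append(T_best)
print(sorted(chosen))
```

    A full stepwise regression would refit all retained amplitudes jointly at each step; the AIC stopping rule is what keeps the recovered distribution sparse.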

  5. Chemical sensors using coated or doped carbon nanotube networks

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor)

    2010-01-01

    Methods for using modified single wall carbon nanotubes ("SWCNTs") to detect presence and/or concentration of a gas component, such as a halogen (e.g., Cl.sub.2), hydrogen halides (e.g., HCl), a hydrocarbon (e.g., C.sub.nH.sub.2n+2), an alcohol, an aldehyde or a ketone, to which an unmodified SWCNT is substantially non-reactive. In a first embodiment, a connected network of SWCNTs is coated with a selected polymer, such as chlorosulfonated polyethylene, hydroxypropyl cellulose, polystyrene and/or polyvinylalcohol, and change in an electrical parameter or response value (e.g., conductance, current, voltage difference or resistance) of the coated versus uncoated SWCNT networks is analyzed. In a second embodiment, the network is doped with a transition element, such as Pd, Pt, Rh, Ir, Ru, Os and/or Au, and change in an electrical parameter value is again analyzed. The parameter change value depends monotonically, not necessarily linearly, upon concentration of the gas component. Two general algorithms are presented for estimating concentration value(s), or upper or lower concentration bounds on such values, from measured differences of response values.

  6. Parameter-based stochastic simulation of selection and breeding for multiple traits

    Treesearch

    Jennifer Myszewski; Thomas Byram; Floyd Bridgwater

    2006-01-01

    To increase the adaptability and economic value of plantations, tree improvement professionals often manage multiple traits in their breeding programs. When these traits are unfavorably correlated, breeders must weigh the economic importance of each trait and select for a desirable aggregate phenotype. Stochastic simulation allows breeders to test the effects of...

  7. Bayesian Multi-Trait Analysis Reveals a Useful Tool to Increase Oil Concentration and to Decrease Toxicity in Jatropha curcas L.

    PubMed Central

    Silva Junqueira, Vinícius; de Azevedo Peixoto, Leonardo; Galvêas Laviola, Bruno; Lopes Bhering, Leonardo; Mendonça, Simone; Agostini Costa, Tania da Silveira; Antoniassi, Rosemar

    2016-01-01

The biggest challenge for jatropha breeding is to identify superior genotypes that present high seed yield and seed oil content with reduced toxicity levels. Therefore, the objective of this study was to estimate genetic parameters for three important traits (100-seed weight, seed oil content, and phorbol ester concentration), and to select superior genotypes to be used as progenitors in jatropha breeding. Additionally, the genotypic values and the genetic parameters estimated under the Bayesian multi-trait approach were used to evaluate different selection-index scenarios for 179 half-sib families. Three different scenarios and economic weights were considered. It was possible to simultaneously reduce toxicity and increase seed oil content and 100-seed weight by using index selection based on genotypic values estimated by the Bayesian multi-trait approach. Indeed, we identified two families that present these characteristics by evaluating genetic diversity using the Ward clustering method, which suggested nine homogeneous clusters. Future research should integrate Bayesian multi-trait methods with the realized relationship matrix, aiming to build accurate selection-index models. PMID:27281340

  8. An empirical study of scanner system parameters

    NASA Technical Reports Server (NTRS)

    Landgrebe, D.; Biehl, L.; Simmons, W.

    1976-01-01

The selection of the best combination of parametric values (instantaneous field of view, number and location of spectral bands, signal-to-noise ratio, etc.) of a multispectral scanner is a complex problem due to the strong interrelationships among these parameters. The study was done with the proposed scanner known as the Thematic Mapper in mind. Since an adequate theoretical procedure for this problem has apparently not yet been devised, an empirical simulation approach was used, with candidate parameter values selected by heuristic means. The results obtained using a conventional maximum likelihood pixel classifier suggest that although the classification accuracy declines slightly as the IFOV is decreased, this is more than made up for by improved mensuration accuracy. Further, a classifier involving both spatial and spectral features shows a very substantial tendency to resist degradation as the signal-to-noise ratio is decreased. Finally, further evidence is provided of the importance of having at least one spectral band in each of the major available portions of the optical spectrum.

  9. [The effect of vegetarian diet on selected biochemical and blood morphology parameters].

    PubMed

    Nazarewicz, Rafał

    2007-01-01

The objective was to examine whether a vegetarian diet influences biochemical blood parameters and plasma urea in a selected vegetarian group. The investigation covered 41 subjects, 22 of whom had been following a vegetarian diet and 19 of whom were omnivorous. The study shows statistically significantly lower values of white blood cells and of the percentage and count of neutrocytes, and insignificantly lower levels of red blood cells, hemoglobin, hematocrit, and platelets in the vegetarian group. A significantly lower plasma urea level was also observed in that group. These changes indicate a deficiency of high-quality protein attributable to the vegetarian diet.

  10. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms impose high hardware requirements and are not suitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the public-key algorithm NTRU in the context of quantum computing by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increasing the value of the parameter q; second, adding, during the signature phase, an authentication condition that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signing rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.

  11. Determination of the influence of factors (ethanol, pH and a(w) ) on the preservation of cosmetics using experimental design.

    PubMed

    Berthele, H; Sella, O; Lavarde, M; Mielcarek, C; Pense-Lheritier, A-M; Pirnay, S

    2014-02-01

Ethanol, pH and water activity (aw) are three well-known parameters that can influence the preservation of cosmetic products. Given the new constraints regarding antimicrobial effectiveness and the restricted use of preservatives, a D-optimal design was set up to evaluate the influence of these three parameters on microbiological preservation. To monitor the effectiveness of the different combinations of these parameters, a challenge test in compliance with the international standard ISO 11930:2012 was implemented. The formulations established in our study could support wide variations of ethanol concentration, pH value and glycerin concentration without noticeable effects on the stability of the products. Under the conditions of the study, setting the value of a single parameter, at the tested concentrations, could not guarantee microbiological preservation. However, a high concentration of ethanol associated with an extreme pH could inhibit bacterial growth from the first day (D0). Moreover, it appears that despite an aw above 0.6 (even 0.8) and without any preservatives incorporated in the formulas, it was possible to guarantee the microbiological stability of the cosmetic product by maintaining the right combination of the selected parameters. Following the analysis of the different values obtained during the experimentation, there seems to be a correlation between the aw and the selected parameters aforementioned. An application of this relationship could be to estimate the aw of cosmetic products from the formula, thus avoiding the evaluation of this parameter with a measuring device. © 2013 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  12. Importance of double-pole CFS-PML for broad-band seismic wave simulation and optimal parameters selection

    NASA Astrophysics Data System (ADS)

    Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei

    2017-05-01

The perfectly matched layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency dependent. This helps to suppress converted evanescent waves from near-grazing incident waves, but does not efficiently absorb low-frequency waves below the cut-off frequency. To absorb both the evanescent waves and the low-frequency waves, the double-pole CFS-PML, which has two poles in the coordinate stretching function, was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find a significant difference compared with the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy for setting optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broad-band seismic wave simulation. We find that when the ratio of maximum to minimum frequency is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves or produce visible low-frequency reflections, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyse the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters generates satisfactory results for broad-band seismic wave simulations.

  13. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.

  14. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to the data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.

  15. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  16. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, Raymond D.; Migliori, Albert; Visscher, William M.

    1994-01-01

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a "best" spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere.

  17. Sphericity determination using resonant ultrasound spectroscopy

    DOEpatents

    Dixon, R.D.; Migliori, A.; Visscher, W.M.

    1994-10-18

    A method is provided for grading production quantities of spherical objects, such as roller balls for bearings. A resonant ultrasound spectrum (RUS) is generated for each spherical object and a set of degenerate sphere-resonance frequencies is identified. From the degenerate sphere-resonance frequencies and known relationships between degenerate sphere-resonance frequencies and Poisson's ratio, a Poisson's ratio can be determined, along with a 'best' spherical diameter, to form spherical parameters for the sphere. From the RUS, fine-structure resonant frequency spectra are identified for each degenerate sphere-resonance frequency previously selected. From each fine-structure spectrum and associated sphere parameter values an asphericity value is determined. The asphericity value can then be compared with predetermined values to provide a measure for accepting or rejecting the sphere. 14 figs.

  18. The evolution of trade-offs under directional and correlational selection.

    PubMed

    Roff, Derek A; Fairbairn, Daphne J

    2012-08-01

    Using quantitative genetic theory, we develop predictions for the evolution of trade-offs in response to directional and correlational selection. We predict that directional selection favoring an increase in one trait in a trade-off will result in change in the intercept but not the slope of the trade-off function, with the mean value of the selected trait increasing and that of the correlated trait decreasing. Natural selection will generally favor an increase in some combination of trait values, which can be represented as directional selection on an index value. Such selection induces both directional and correlational selection on the component traits. Theory predicts that selection on an index value will also change the intercept but not the slope of the trade-off function but because of correlational selection, the direction of change in component traits may be in the same or opposite directions. We test these predictions using artificial selection on the well-established trade-off between fecundity and flight capability in the cricket, Gryllus firmus and compare the empirical results with a priori predictions made using genetic parameters from a separate half-sibling experiment. Our results support the predictions and illustrate the complexity of trade-off evolution when component traits are subject to both directional and correlational selection. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.

  19. Relations between Municipal Water Use and Selected Meteorological Parameters and Drought Indices, East-Central and Northeast Florida

    USGS Publications Warehouse

    Murray, Louis C.

    2009-01-01

    Water-use data collected between 1992 and 2006 at eight municipal water-supply utilities in east-central and northeast Florida were analyzed to identify seasonal trends in use and to quantify monthly variations. Regression analyses were applied to identify significant correlations between water use and selected meteorological parameters and drought indices. Selected parameters and indices include precipitation (P), air temperature (T), potential evapotranspiration (PET), available water (P-PET), monthly changes in these parameters (Delta P, Delta T, Delta PET, Delta(P-PET), the Palmer Drought Severity Index (PDSI), and the Standardized Precipitation Index (SPI). Selected utilities include the City of Daytona Beach (Daytona), the City of Eustis (Eustis), Gainesville Regional Utilities (GRU), Jacksonville Electric Authority (JEA), Orange County Utilities (OCU), Orlando Utilities Commission (OUC), Seminole County Utilities (SCU), and the City of St. Augustine (St. Augustine). Water-use rates at these utilities in 2006 ranged from about 3.2 million gallons per day at Eustis to about 131 million gallons per day at JEA. Total water-use rates increased at all utilities throughout the 15-year period of record, ranging from about 4 percent at Daytona to greater than 200 percent at OCU and SCU. Metered rates, however, decreased at six of the eight utilities, ranging from about 2 percent at OCU and OUC to about 17 percent at Eustis. Decreases in metered rates occurred because the number of metered connections increased at a greater rate than did total water use, suggesting that factors other than just population growth may play important roles in water-use dynamics. Given the absence of a concurrent trend in precipitation, these decreases can likely be attributed to changes in non-climatic factors such as water-use type, usage of reclaimed water, water-use restrictions, demographics, and so forth. 
When averaged for the eight utilities, metered water-use rates depict a clear seasonal pattern in which rates were lowest in the winter and greatest in the late spring. Averaged water-use rates ranged from about 9 percent below the 15-year daily mean in January to about 11 percent above the daily mean in May. Water-use rates were found to be statistically correlated with meteorological parameters and drought indices, and to be influenced by system memory. Metered rates (in gallons per day per active metered connection) were consistently found to be influenced by P, T, PET, and P-PET and changes in these parameters that occurred in prior months. In the single-variable analyses, best correlations were obtained by fitting polynomial functions to plots of metered rates versus moving-averaged values of selected parameters (R2 values greater than 0.50 at three of eight sites). Overall, metered water-use rates were best correlated with the 3- to 4-month moving average of Delta T or Delta PET (R2 values up to 0.66), whereas the full suite of meteorological parameters was best correlated with metered rates at Daytona and least correlated with rates at St. Augustine. Similarly, metered rates were substantially better correlated with moving-averaged values of precipitation (significant at all eight sites) than with single (current) monthly values (significant at only three sites). Total and metered water-use rates were positively correlated with T, PET, Delta P, Delta T, and Delta PET, and negatively correlated with P, P-PET, Delta (P-PET), PDSI, and SPI. The drought indices were better correlated with total water-use rates than with metered rates, whereas metered rates were better correlated with meteorological parameters. Multivariate analyses produced fits of the data that explained a greater degree of the variance in metered rates than did the single-variable analyses. Adjusted R2 values for the 'best' models ranged from 0.79 at JEA to 0.29 at St. 
Augustine and exceeded 0.60 at five of eight sites. The amount of available water (P-PET) was the si

  20. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the marginal posterior distributions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
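A Gibbs sampler of the kind described can be sketched as below. This is an illustrative implementation under the Jeffreys prior, not the authors' code; the conditional draws (inverse Wishart for Σ given B, matrix normal for B given Σ) follow the standard conjugate results the abstract relies on.

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_mv_regression(X, Y, n_iter=600, burn=100, seed=0):
    """Gibbs sampler for Y = X B + E, rows of E ~ N(0, Sigma),
    under the non-informative Jeffreys prior on (B, Sigma).

    Conditionals (standard results): Sigma | B ~ Inverse-Wishart(n, E'E)
    and B | Sigma ~ Matrix-Normal(B_ols, (X'X)^-1, Sigma)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    q = Y.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    B_ols = XtX_inv @ X.T @ Y            # OLS estimate, posterior center of B
    L_row = np.linalg.cholesky(XtX_inv)  # row-covariance Cholesky factor
    B = B_ols.copy()
    draws_B, draws_S = [], []
    for it in range(n_iter):
        E = Y - X @ B                    # current residuals
        Sigma = np.atleast_2d(invwishart.rvs(df=n, scale=E.T @ E,
                                             random_state=rng))
        L_col = np.linalg.cholesky(Sigma)  # column-covariance factor
        B = B_ols + L_row @ rng.standard_normal((p, q)) @ L_col.T
        if it >= burn:
            draws_B.append(B)
            draws_S.append(Sigma)
    return np.mean(draws_B, axis=0), np.mean(draws_S, axis=0)
```

Averaging the post-burn-in draws approximates the expected values of the marginal posteriors that are analytically awkward to integrate directly.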

  1. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (Inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.

  2. Quantification of dental prostheses on cone‐beam CT images by the Taguchi method

    PubMed Central

Kuo, Rong‐Fu; Fang, Kwang‐Ming; Wong, TY

    2016-01-01

The gray-value accuracy of dental cone‐beam computed tomography (CBCT) is affected by dental metal prostheses. The distortion of dental CBCT gray values could lead to inaccuracies in orthodontic and implant treatment. The aim of this study was to quantify the effect of scanning parameters and dental metal prostheses on the accuracy of dental CBCT gray values using the Taguchi method. Eight dental model casts of an upper jaw including prostheses, and a ninth prosthesis‐free dental model cast, were scanned by two dental CBCT devices. The mean gray values of selected circular regions of interest (ROIs) were measured on dental CBCT images of the eight dental model casts and were compared with those measured on CBCT images of the prosthesis‐free dental model cast. For each image set, four consecutive slices of gingiva were selected. Seven factors (CBCT device, occlusal plane canting, implant connection, prosthesis position, coping material, coping thickness, and type of dental restoration) were used to evaluate the effects of scanning parameters and dental prostheses. Statistical methods of signal-to-noise ratio (S/N) and analysis of variance (ANOVA) with 95% confidence were applied to quantify the effects of scanning parameters and dental prostheses on dental CBCT gray-value accuracy. For ROIs surrounding dental prostheses, the accuracy of CBCT gray values was affected primarily by implant connection (42%), followed by type of restoration (29%), prosthesis position (19%), coping material (4%), and coping thickness (4%). For a single crown prosthesis (without support of implants) placed in dental model casts, gray-value differences for ROIs 1–9 were below 12% and gray-value differences for ROIs 13–18 away from prostheses were below 10%. 
We found the gray-value differences to be between 7% and 8% for regions next to a single implant‐supported titanium prosthesis, and between 46% and 59% for regions between double implant‐supported nickel‐chromium alloy (Ni‐Cr) prostheses. The effects of prostheses and scanning parameters on dental CBCT gray values were thus quantified. PACS numbers: 87.59.bd, 87.57Q PMID:26894354
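The Taguchi analysis used above can be illustrated with a minimal sketch. The smaller-the-better S/N ratio and the percent-contribution computation are standard Taguchi/ANOVA tools; the function names and example numbers below are hypothetical, not taken from the study.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    S/N = -10*log10(mean(y^2)); smaller deviations give higher S/N.
    Here y would be, e.g., gray-value deviations for one factor-level run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def percent_contribution(level_means, grand_mean, runs_per_level):
    """ANOVA-style percent contribution of each factor, computed from the
    sum of squares of its level means about the grand mean."""
    ss = {factor: runs_per_level * sum((m - grand_mean) ** 2 for m in means)
          for factor, means in level_means.items()}
    total = sum(ss.values())
    return {factor: 100.0 * s / total for factor, s in ss.items()}
```

Ranking the percent contributions is what yields statements such as "affected primarily by implant connection (42%), followed by type of restoration (29%)".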

  3. Optimization of motion control laws for tether crawler or elevator systems

    NASA Technical Reports Server (NTRS)

    Swenson, Frank R.; Von Tiesenhausen, Georg

    1988-01-01

    Based on the proposal of a motion control law by Lorenzini (1987), a method is developed for optimizing motion control laws for tether crawler or elevator systems in terms of the performance measures of travel time, the smoothness of acceleration and deceleration, and the maximum values of velocity and acceleration. The Lorenzini motion control law, based on powers of the hyperbolic tangent function, is modified by the addition of a constant-velocity section, and this modified function is then optimized by parameter selections to minimize the peak acceleration value for a selected travel time or to minimize travel time for the selected peak values of velocity and acceleration. It is shown that the addition of a constant-velocity segment permits further optimization of the motion control law performance.

  4. Evaluation of shielding parameters for heavy metal fluoride based tellurite-rich glasses for gamma ray shielding applications

    NASA Astrophysics Data System (ADS)

    Sayyed, M. I.; Lakshminarayana, G.; Kityk, I. V.; Mahdi, M. A.

    2017-10-01

In this work, we have evaluated γ-ray shielding parameters such as the mass attenuation coefficient (μ/ρ), effective atomic number (Zeff), half value layer (HVL), mean free path (MFP) and exposure buildup factor (EBF) for heavy metal fluoride (PbF2) based tellurite-rich glasses. In addition, neutron total macroscopic cross sections (∑R) for these glasses were also calculated. The maximum values of μ/ρ, Zeff and ∑R were found for the glass containing the heavy metal oxide Bi2O3. The selected glasses have been compared, in terms of MFP, with different glass systems. The shielding effectiveness of the selected glasses is found to be comparable to or better than that of common shielding glasses, which indicates that these glasses, with suitable oxides, could be developed for gamma ray shielding applications.
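The HVL and MFP quoted above follow directly from the mass attenuation coefficient: μ = (μ/ρ)·ρ, HVL = ln 2/μ, MFP = 1/μ. A minimal sketch with illustrative (not measured) values:

```python
import math

def shielding_parameters(mass_atten_cm2_per_g, density_g_per_cm3):
    """Derive the linear attenuation coefficient mu, half value layer (HVL)
    and mean free path (MFP) from a mass attenuation coefficient.
    Inputs here are illustrative, not measured glass data."""
    mu = mass_atten_cm2_per_g * density_g_per_cm3  # linear attenuation, cm^-1
    hvl = math.log(2.0) / mu                       # thickness halving intensity, cm
    mfp = 1.0 / mu                                 # average photon path length, cm
    return mu, hvl, mfp
```

A lower MFP at a given photon energy indicates a more compact shield, which is the basis of the comparison with other glass systems.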

  5. Charts for Helicopter-Performance Estimation

    DTIC Science & Technology

    1945-08-01

the relations among the design parameters. The results should therefore be of practical assistance in comparative performance studies. The accuracy with...page facing the figure. Numerical application.- In order to calculate the maximum value of the rate-of-climb parameter Ym it is necessary first to find...values of A\v and ~ when Czt, as 01, and k have been selected. For the values c~ = 1.5 and a = 6, curves of Yt against h\v for several values of

  6. Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.

    PubMed

    Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver

    2016-01-01

    The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.

  7. Cardiac risk index as a simple geometric indicator to select patients for the heart-sparing radiotherapy of left-sided breast cancer.

    PubMed

    Sung, KiHoon; Choi, Young Eun; Lee, Kyu Chan

    2017-06-01

This is a dosimetric study to identify a simple geometric indicator to discriminate patients who meet the selection criterion for heart-sparing radiotherapy (RT). The authors proposed a cardiac risk index (CRI), directly measurable from the CT images at the time of scanning. Treatment plans were regenerated using the CT data of 312 consecutive patients with left-sided breast cancer. Dosimetric analysis was performed to estimate the risk of cardiac mortality using cardiac dosimetric parameters, such as the relative heart volume receiving ≥25 Gy (heart V25). For each CT data set, in-field heart depth (HD) and in-field heart width (HW) were measured to generate the geometric parameters, including maximum HW (HWmax) and maximum HD (HDmax). Seven geometric parameters were evaluated as candidates for CRI. Receiver operating characteristic (ROC) curve analyses were used to examine the overall discriminatory power of the geometric parameters to select high-risk patients (heart V25 ≥ 10%). Seventy-one high-risk (22.8%) and 241 low-risk patients (77.2%) were identified by dosimetric analysis. The geometric and dosimetric parameters were significantly higher in the high-risk group. Heart V25 showed strong positive correlations with all geometric parameters examined (r > 0.8, p < 0.001). The product of HDmax and HWmax (CRI) revealed the largest area under the curve (AUC) value (0.969) and maintained 100% sensitivity and 88% specificity at the optimal cut-off value of 14.58 cm². Cardiac risk index proposed as a simple geometric indicator to select high-risk patients provides useful guidance for clinicians considering optimal implementation of heart-sparing RT. © 2016 The Royal Australian and New Zealand College of Radiologists.
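As a minimal sketch of the screening rule this record describes, the CRI is simply the product of the maximum in-field heart depth and width, compared against the reported cut-off of 14.58 cm². The patient measurements below are invented for illustration; only the cut-off value comes from the abstract.

```python
CUTOFF_CM2 = 14.58  # optimal cut-off reported in the study


def cardiac_risk_index(hd_max_cm, hw_max_cm):
    """CRI: product of maximum in-field heart depth and width (cm^2)."""
    return hd_max_cm * hw_max_cm


def is_high_risk(hd_max_cm, hw_max_cm, cutoff=CUTOFF_CM2):
    """Flag a patient for heart-sparing RT consideration when CRI >= cut-off."""
    return cardiac_risk_index(hd_max_cm, hw_max_cm) >= cutoff


# Hypothetical patient: HD_max = 2.5 cm, HW_max = 7.0 cm -> CRI = 17.5 cm^2
cri = cardiac_risk_index(2.5, 7.0)
flagged = is_high_risk(2.5, 7.0)
```

The point of such an index is that it can be read off the planning CT at scan time, before any dose calculation is run.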

  8. Large storms: Airglow and related measurements. VLF observations, volume 4

    NASA Technical Reports Server (NTRS)

    1981-01-01

The data presented show the typical values and range of ionospheric and magnetospheric characteristics, as viewed from 1400 km with the ISIS 2 instruments. The definition of each data set depends partly on geophysical parameters and partly on satellite operating mode. Preceding the data set is a description of the organizational parameters and a review of the objectives and general characteristics of the data set. The data are shown as a selection from 12 different data formats. Each data set has a different selection of formats, but uniformity of a given format selection is preserved throughout each data set. Each data set consists of a selected number of passes, each comprising a format combination that is most appropriate for the particular data set. Descriptions of the ISIS 2 instruments are provided.

  9. In vitro susceptibility of four antimicrobials against Riemerella anatipestifer isolates: a comparison of minimum inhibitory concentrations and mutant prevention concentrations for ceftiofur, cefquinome, florfenicol, and tilmicosin.

    PubMed

    Li, Yafei; Zhang, Yanan; Ding, Huanzhong; Mei, Xian; Liu, Wei; Zeng, Jiaxiong; Zeng, Zhenling

    2016-11-09

Mutant prevention concentration (MPC) is an alternative pharmacodynamic parameter that has been used to measure antimicrobial activity and represents the propensities of antimicrobial agents to select resistant mutants. The concentration range between minimum inhibitory concentration (MIC) and MPC is defined as the mutant selection window (MSW). The MPC and MSW parameters represent the ability of antimicrobial agents to inhibit the bacterial mutants selected. This study was conducted to determine the MIC and MPC values of four antimicrobials including ceftiofur, cefquinome, florfenicol and tilmicosin against 105 Riemerella anatipestifer isolates. The MIC50/MIC90 values of clinical isolates tested in our study for ceftiofur, cefquinome, florfenicol and tilmicosin were 0.063/0.5, 0.031/0.5, 1/4, and 1/4 μg/mL, respectively; MPC50/MPC90 values were 4/64, 8/64, 4/32, and 16/256 μg/mL, respectively. These results provided information on the use of these compounds in treating R. anatipestifer infection; however, additional studies are needed to demonstrate their therapeutic efficacy. Based on the MSW theory, the hierarchy of these tested antimicrobial agents with respect to selecting resistant subpopulations was as follows: cefquinome > ceftiofur > tilmicosin > florfenicol. Cefquinome was the drug that presented the highest risk of selecting resistant mutants among the four antimicrobial agents.
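The MIC50/MIC90 summary statistics used in this record follow the standard convention: the lowest tested concentration that inhibits at least 50% (or 90%) of the isolates. A small sketch with invented MIC values (not the study's data):

```python
import math


def mic_percentile(mics, pct):
    """Lowest MIC covering at least pct% of isolates (conventional MIC50/MIC90)."""
    vals = sorted(mics)
    k = math.ceil(pct / 100.0 * len(vals))  # number of isolates that must be inhibited
    return vals[k - 1]


# Hypothetical MICs (ug/mL) for 10 isolates against one drug
mics = [0.031, 0.031, 0.063, 0.063, 0.125, 0.25, 0.25, 0.5, 0.5, 1.0]
mic50 = mic_percentile(mics, 50)  # concentration inhibiting half the isolates
mic90 = mic_percentile(mics, 90)  # concentration inhibiting 90% of the isolates
```

The same percentile computation applies to the MPC50/MPC90 values; the MSW for an isolate is then simply the interval (MIC, MPC).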

  10. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods have assumptions the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
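The core of predictive parameter control as described here is a one-step-ahead forecast of a parameter's measured performance. A minimal sketch using the linear regression variant the paper found best; the performance history values are invented:

```python
import numpy as np

# Invented per-iteration performance measurements for one parameter value
history = np.array([0.42, 0.45, 0.44, 0.48, 0.50, 0.53])
t = np.arange(len(history))

# Fit a least-squares line to the history and project the next iteration
slope, intercept = np.polyfit(t, history, deg=1)
next_utility = slope * len(history) + intercept  # one-step-ahead forecast
```

In an APC loop, forecasts like `next_utility` for each candidate parameter value would be normalized into the selection probabilities used at the next iteration.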

  11. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.

    1994-01-01

The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.

  12. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, N.G.; Veirs, D.K.; Claytor, T.N.

    1994-10-25

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure. 7 figs.
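The signal chain both patent records describe — FFT of the measured resonance response spectrum, then a curve-fit refinement of the dominant FFT peak — can be sketched as follows. The synthetic evenly spaced "resonances" and the parabolic interpolation stand in for real data and for whatever curve-fitting the patent actually uses:

```python
import numpy as np

# Synthetic resonance response spectrum: evenly spaced resonances whose
# spacing encodes the fluid property (37 cycles across the swept band)
freqs = np.linspace(0.0, 1.0, 1024, endpoint=False)   # normalized frequency axis
spectrum = 1.0 + np.cos(2 * np.pi * 37 * freqs)

# FFT of the (mean-removed) spectrum; periodic resonances collapse to one peak
fft_vals = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
k = int(np.argmax(fft_vals))                           # coarse peak bin

# Parabolic interpolation over the peak and its neighbours ("curve fitting")
a, b, c = fft_vals[k - 1], fft_vals[k], fft_vals[k + 1]
delta = 0.5 * (a - c) / (a - 2 * b + c)                # sub-bin offset
peak_bin = k + delta                                   # process parameter
```

`peak_bin` is the process parameter that would then be mapped through the calibration curves to specific gravity or solute concentration.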

  13. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  14. Modelling hydrological extremes under non-stationary conditions using climate covariates

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Galiatsatou, Panagiota; Loukas, Athanasios

    2013-04-01

    Extreme value theory is a probabilistic theory that can interpret the future probabilities of occurrence of extreme events (e.g. extreme precipitation and streamflow) using past observed records. Traditionally, extreme value theory requires the assumption of temporal stationarity. This assumption implies that the historical patterns of recurrence of extreme events are static over time. However, the hydroclimatic system is nonstationary on time scales that are relevant to extreme value analysis, due to human-mediated and natural environmental change. In this study the generalized extreme value (GEV) distribution is used to assess nonstationarity in annual maximum daily rainfall and streamflow timeseries at selected meteorological and hydrometric stations in Greece and Cyprus. The GEV distribution parameters (location, scale, and shape) are specified as functions of time-varying covariates and estimated using the conditional density network (CDN) as proposed by Cannon (2010). The CDN is a probabilistic extension of the multilayer perceptron neural network. Model parameters are estimated via the generalized maximum likelihood (GML) approach using the quasi-Newton BFGS optimization algorithm, and the appropriate GEV-CDN model architecture for the selected meteorological and hydrometric stations is selected by fitting increasingly complicated models and choosing the one that minimizes the Akaike information criterion with small sample size correction. For all case studies in Greece and Cyprus different formulations are tested with combinational cases of stationary and nonstationary parameters of the GEV distribution, linear and non-linear architecture of the CDN and combinations of the input climatic covariates. 
Climatic indices such as the Southern Oscillation Index (SOI), which describes atmospheric circulation in the eastern tropical Pacific related to El Niño Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO) index that varies on an interdecadal rather than interannual time scale and the atmospheric circulation patterns as expressed by the North Atlantic Oscillation (NAO) index are used to express the GEV parameters as functions of the covariates. Results show that the nonstationary GEV model can be an efficient tool to take into account the dependencies between extreme value random variables and the temporal evolution of the climate.
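A heavily simplified numerical illustration of the nonstationary idea: annual maxima drawn from a Gumbel distribution (the GEV with shape 0) whose location mu_t = b0 + b1·x_t depends linearly on a climate covariate x_t such as SOI. The study fits the full GEV by conditional density networks and generalized maximum likelihood; the method-of-moments recovery below, and all numeric values, are invented stand-ins for that machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                      # standardized climate covariate
mu_t = 50.0 + 5.0 * x                       # true time-varying location
scale = 8.0
maxima = mu_t + scale * rng.gumbel(size=n)  # synthetic annual maxima

# Recover the covariate dependence by least squares, then the Gumbel scale
# from the residual spread (Gumbel sd = scale * pi / sqrt(6)). Note the
# fitted intercept absorbs the Gumbel mean offset 0.5772 * scale.
b1, b0 = np.polyfit(x, maxima, deg=1)
resid = maxima - (b0 + b1 * x)
scale_hat = np.std(resid) * np.sqrt(6) / np.pi
```

Replacing the linear link by a neural network (the CDN) and the Gumbel by the three-parameter GEV gives the class of models the study actually compares with AICc.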

  15. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. 
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET and runoff, and thereby identify a highly important source of DGVM uncertainty.
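The search procedure described above can be sketched with a generic simulated-annealing loop: propose bounded random perturbations of the parameter set, always accept improvements, and accept worse sets with a temperature-dependent probability. The toy objective below stands in for the (expensive) DGVM accuracy score; bounds mimic the literature-derived min/max rule:

```python
import math
import random

random.seed(1)


def objective(p):
    # Toy "model error" with minimum at p = (0.3, 0.7); a DGVM run would go here
    return (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2


bounds = [(0.0, 1.0), (0.0, 1.0)]   # literature min/max (or +/-20% of current)
p = [0.9, 0.1]                      # initial parameter set
best, best_err = list(p), objective(p)
temp = 1.0

for step in range(5000):
    # Propose a bounded perturbation of the current parameter set
    q = [min(hi, max(lo, v + random.uniform(-0.05, 0.05)))
         for v, (lo, hi) in zip(p, bounds)]
    d = objective(q) - objective(p)
    if d < 0 or random.random() < math.exp(-d / temp):
        p = q                       # Metropolis acceptance
    if objective(p) < best_err:
        best, best_err = list(p), objective(p)
    temp *= 0.999                   # geometric cooling schedule
```

In the study's setting each `objective` call is a spin-up plus 1961-1990 transient run, which is why a search that tolerates a rugged, multi-modal surface matters.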

  16. DBCG hypo trial validation of radiotherapy parameters from a national data bank versus manual reporting.

    PubMed

    Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long; Westberg, Jonas; Berg, Martin; Jensen, Ingelise; Thomsen, Mette Skovhus; Yates, Esben Svitzer; Offersen, Birgitte Vrou

    2018-01-01

The current study evaluates the data quality achievable using a national data bank for reporting radiotherapy parameters relative to the classical manual reporting method of selected parameters. The data comparison is based on 1522 Danish patients of the DBCG hypo trial with data stored in the Danish national radiotherapy data bank. In line with standard DBCG trial practice, selected parameters were also reported manually to the DBCG database. Categorical variables are compared using contingency tables, and comparison of continuous parameters is presented in scatter plots. For categorical variables, 25 differences between the data bank and manual values were located. Of these, 23 were related to mistakes in the manually reported value, whilst the remaining two were wrong classifications in the data bank. The wrong classifications in the data bank were related to lack of dose information, since the two patients had been treated with an electron boost based on a manual calculation; thus the data were not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, an ambiguity in the parameter definition of the specific field is seen in the data. This was not the case for the data bank, which extracts all data consistently. In terms of data quality the data bank is superior to manually reported values. However, there is a need to allocate resources for checking the validity of the available data as well as ensuring that all relevant data is present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.

  17. Genetic parameters and selection of soybean lines based on selection indexes.

    PubMed

    Teixeira, F G; Hamawaki, O T; Nogueira, A P O; Hamawaki, R L; Jorge, G L; Hamawaki, C L; Machado, B Q V; Santana, A J O

    2017-09-21

Defining selection criteria is important to obtain promising genotypes in a breeding program. The objective of this study was to estimate genetic parameters for agronomic traits and to perform soybean line selection using selection indices. The experiment was conducted at an experimental area located at Capim Branco farm, belonging to the Federal University of Uberlândia. A total of 37 soybean genotypes were evaluated in a randomized complete block design with three replicates, in which twelve agronomic traits were evaluated. Analysis of variance, the Scott-Knott test at the 1 and 5% probability levels, and selection index analyses were performed. There was genetic variability for all agronomic traits, with medium to high levels of genotype determination coefficient. Twelve lines with a total cycle up to 110 days were observed and grouped with the cultivars MSOY 6101 and UFUS 7910. Three lines, UFUS FG 03, UFUS FG 20, and UFUS FG 31, were highlighted regarding grain yield with higher values than the national average of 3072 kg/ha. Direct selection enabled the highest individual gains per trait. The Williams (1962) index and the Smith (1936) and Hazel (1943) index presented the highest selection gain for the grain yield character. The genotype-ideotype distance index and the index of the sum of ranks of Mulamba and Mock (1978) presented higher values of total selection gain. The lines UFUS FG 12, UFUS FG 14, UFUS FG 18, UFUS FG 25, and UFUS FG 31 were distinguished as superior genotypes by direct selection methods and selection indexes.
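The Smith (1936)/Hazel (1943) index mentioned above has a compact closed form: the index coefficients are b = P⁻¹Ga, where P and G are the phenotypic and genetic (co)variance matrices and a the economic weights. A two-trait sketch with invented matrices (not the study's estimates):

```python
import numpy as np

P = np.array([[4.0, 1.0],    # phenotypic (co)variances, invented
              [1.0, 2.0]])
G = np.array([[2.0, 0.5],    # genetic (co)variances, invented
              [0.5, 1.0]])
a = np.array([1.0, 0.5])     # economic weights for the two traits

b = np.linalg.solve(P, G @ a)   # Smith-Hazel index coefficients b = P^{-1} G a

# Rank three hypothetical lines by their index scores (trait deviations, invented)
X = np.array([[3.2, 1.2],
              [2.8, 1.9],
              [3.6, 0.9]])
index = X @ b
best_line = int(np.argmax(index))
```

Selecting the lines with the largest `index` values is what maximizes expected aggregate genetic gain under this index, which is why it competes with direct selection and the rank-sum indices in the comparison above.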

  18. A four parameter optimization and troubleshooting of a RPLC - charged aerosol detection stability indicating method for determination of S-lysophosphatidylcholines in a phospholipid formulation.

    PubMed

    Tam, James; Ahmad, Imad A Haidar; Blasko, Andrei

    2018-06-05

    A four parameter optimization of a stability indicating method for non-chromophoric degradation products of 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), 1-stearoyl-sn-glycero-3-phosphocholine and 2-stearoyl-sn-glycero-3-phosphocholine was achieved using a reverse phase liquid chromatography-charged aerosol detection (RPLC-CAD) technique. Using the hydrophobic subtraction model of selectivity, a core-shell, polar embedded RPLC column was selected followed by gradient-temperature optimization, resulting in ideal relative peak placements for a robust, stability indicating separation. The CAD instrument parameters, power function value (PFV) and evaporator temperature were optimized for lysophosphatidylcholines to give UV absorbance detector-like linearity performance within a defined concentration range. The two lysophosphatidylcholines gave the same response factor in the selected conditions. System specific power function values needed to be set for the two RPLC-CAD instruments used. A custom flow-divert profile, sending only a portion of the column effluent to the detector, was necessary to mitigate detector response drifting effects. The importance of the PFV optimization for each instrument of identical build and how to overcome recovery issues brought on by the matrix effects from the lipid-RP stationary phase interaction is reported. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and using the equation from RSM, particle swarm optimisation (PSO) was applied. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already efficient to give the best combination of parameters and optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
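The RSM-then-PSO pipeline can be sketched as follows: a fitted response-surface polynomial serves as a cheap warpage model, and a basic particle swarm searches it for the minimum. The two-variable surrogate and its coefficients are invented, not the paper's fitted equation:

```python
import random

random.seed(7)


def warpage(x):
    """Stand-in RSM model in two coded variables (e.g. packing pressure, melt temp)."""
    return 0.5 + 0.3 * (x[0] - 0.2) ** 2 + 0.2 * (x[1] + 0.4) ** 2


DIM, N, ITERS = 2, 20, 200
pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # personal bests
gbest = min(pbest, key=warpage)[:]          # global best

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                      # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))     # social pull
            pos[i][d] += vel[i][d]
        if warpage(pos[i]) < warpage(pbest[i]):
            pbest[i] = pos[i][:]
            if warpage(pbest[i]) < warpage(gbest):
                gbest = pbest[i][:]
```

On a smooth quadratic surrogate like this, PSO and the RSM stationary-point solution land in essentially the same place, which is consistent with the 0.01% difference the study reports.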

  20. Line Mixing in Water Vapor and Methane

    NASA Technical Reports Server (NTRS)

    Smith, M. A. H.; Brown, L. R.; Toth, R. A.; Devi, V. Malathy; Benner, Chris

    2006-01-01

A multispectrum fitting algorithm has been used to identify line mixing and determine mixing parameters for infrared transitions of H2O and CH4 in the 5-9 micrometer region. Line mixing parameters at room temperature were determined for two pairs of transitions in the v2 fundamental band of H2O-16, for self-broadening and for broadening by H2, He, CO2, N2, O2 and air. Line mixing parameters have been determined from air-broadened CH4 spectra, recorded at temperatures between 210 K and 314 K, in selected R-branch manifolds of the v4 band. For both H2O and CH4, the inclusion of line mixing was seen to have a greater effect on the retrieved values of the line shifts than on the retrieved values of other parameters.

  1. Minorities in Higher Education: Selected Papers from an Interdisciplinary Conference Held at Hofstra University (Hempstead, New York, March 9-11, 1989).

    ERIC Educational Resources Information Center

    Hofstra Univ., Hempstead, NY.

    This report provides a selection of conference papers which discuss issues concerning minority participation in higher education, beginning with recognition of the many discrepancies between what are expressed as personal and organizational values and what parameters remain hidden. The papers consider the causes for limited minority participation…

  2. Feasibility of pedigree recording and genetic selection in village sheep flocks of smallholder farmers.

    PubMed

    Gizaw, Solomon; Goshme, Shenkute; Getachew, Tesfaye; Haile, Aynalem; Rischkowsky, Barbara; van Arendonk, Johan; Valle-Zárate, Anne; Dessie, Tadelle; Mwai, Ally Okeyo

    2014-06-01

    Pedigree recording and genetic selection in village flocks of smallholder farmers have been deemed infeasible by researchers and development workers. This is mainly due to the difficulty of sire identification under uncontrolled village breeding practices. A cooperative village sheep-breeding scheme was designed to achieve controlled breeding and implemented for Menz sheep of Ethiopia in 2009. In this paper, we evaluated the reliability of pedigree recording in village flocks by comparing genetic parameters estimated from data sets collected in the cooperative village and in a nucleus flock maintained under controlled breeding. Effectiveness of selection in the cooperative village was evaluated based on trends in breeding values over generations. Heritability estimates for 6-month weight recorded in the village and the nucleus flock were very similar. There was an increasing trend over generations in average estimated breeding values for 6-month weight in the village flocks. These results have a number of implications: the pedigree recorded in the village flocks was reliable; genetic parameters, which have so far been estimated based on nucleus data sets, can be estimated based on village recording; and appreciable genetic improvement could be achieved in village sheep selection programs under low-input smallholder farming systems.

  3. Evolution of a plastic quantitative trait in an age-structured population in a fluctuating environment.

    PubMed

    Engen, Steinar; Lande, Russell; Saether, Bernt-Erik

    2011-10-01

    We analyze weak fluctuating selection on a quantitative character in an age-structured population not subject to density regulation. We assume that early in the first year of life before selection, during a critical state of development, environments exert a plastic effect on the phenotype, which remains constant throughout the life of an individual. Age-specific selection on the character affects survival and fecundity, which have intermediate optima subject to temporal environmental fluctuations with directional selection in some age classes as special cases. Weighting individuals by their reproductive value, as suggested by Fisher, we show that the expected response per year in the weighted mean character has the same form as for models with no age structure. Environmental stochasticity generates stochastic fluctuations in the weighted mean character following a first-order autoregressive model with a temporally autocorrelated noise term and stationary variance depending on the amount of phenotypic plasticity. The parameters of the process are simple weighted averages of parameters used to describe age-specific survival and fecundity. The "age-specific selective weights" are related to the stable distribution of reproductive values among age classes. This allows partitioning of the change in the weighted mean character into age-specific components. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
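The central quantitative result above — that the reproductive-value-weighted mean character fluctuates around the optimum as a first-order autoregressive (AR(1)) process with a known stationary variance — can be illustrated numerically. The autoregression coefficient and noise level below are invented, not derived from the paper's demographic weights:

```python
import numpy as np

rng = np.random.default_rng(42)
rho, noise_sd, n = 0.9, 0.2, 20000   # assumed year-to-year persistence and noise

# Simulate z_t = rho * z_{t-1} + eps_t: the weighted mean character's
# deviation from the optimum under fluctuating selection
z = np.empty(n)
z[0] = 0.0
eps = rng.normal(0.0, noise_sd, size=n)
for t in range(1, n):
    z[t] = rho * z[t - 1] + eps[t]

# Theoretical stationary variance of an AR(1) process
stationary_var = noise_sd ** 2 / (1 - rho ** 2)
```

In the paper, `rho` and `noise_sd` would be the weighted averages of the age-specific survival and fecundity parameters, and the stationary variance additionally depends on the amount of phenotypic plasticity.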

  4. Kinetic model for microbial growth and desulphurisation with Enterobacter sp.

    PubMed

    Liu, Long; Guo, Zhiguo; Lu, Jianjiang; Xu, Xiaolin

    2015-02-01

Biodesulphurisation was investigated by using Enterobacter sp. D4, which can selectively desulphurise and convert dibenzothiophene into 2-hydroxybiphenyl (2-HBP). Experimental values of growth, substrate consumption and product generation were fitted, at the 95% confidence level, using three models: the Hinshelwood equation, the Luedeking-Piret equation and a Luedeking-Piret-like equation. The average error between experimental and fitted values was less than 10%. These kinetic models describe all the experimental data with good statistical parameters. The production of 2-HBP in Enterobacter sp. was growth-coupled.
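The Luedeking-Piret model named above couples product formation to growth: dP/dt = α·dX/dt + β·X. A minimal sketch integrating it by Euler stepping, with logistic growth standing in for the paper's Hinshelwood growth equation; all parameter values are invented:

```python
# Invented kinetic parameters: max specific growth rate, carrying capacity,
# growth-associated (alpha) and non-growth-associated (beta) product terms
mu_max, X_max, alpha, beta = 0.4, 5.0, 0.3, 0.02
dt, T = 0.01, 30.0          # time step and horizon (h)

X, P = 0.1, 0.0             # initial biomass and product (e.g. 2-HBP)
for _ in range(int(T / dt)):
    dX = mu_max * X * (1.0 - X / X_max)   # logistic growth rate
    X += dX * dt
    P += (alpha * dX + beta * X) * dt     # Luedeking-Piret product formation
```

With β > 0 the product keeps accumulating after growth plateaus; a purely growth-coupled fit, as reported for 2-HBP here, corresponds to β ≈ 0.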

  5. Inside the Mind of a Medicinal Chemist: The Role of Human Bias in Compound Prioritization during Drug Discovery

    PubMed Central

    Kutchukian, Peter S.; Vasilyeva, Nadya Y.; Xu, Jordan; Lindvall, Mika K.; Dillon, Michael P.; Glick, Meir; Coley, John D.; Brooijmans, Natasja

    2012-01-01

    Medicinal chemists’ “intuition” is critical for success in modern drug discovery. Early in the discovery process, chemists select a subset of compounds for further research, often from many viable candidates. These decisions determine the success of a discovery campaign, and ultimately what kind of drugs are developed and marketed to the public. Surprisingly little is known about the cognitive aspects of chemists’ decision-making when they prioritize compounds. We investigate 1) how and to what extent chemists simplify the problem of identifying promising compounds, 2) whether chemists agree with each other about the criteria used for such decisions, and 3) how accurately chemists report the criteria they use for these decisions. Chemists were surveyed and asked to select chemical fragments that they would be willing to develop into a lead compound from a set of ∼4,000 available fragments. Based on each chemist’s selections, computational classifiers were built to model each chemist’s selection strategy. Results suggest that chemists greatly simplified the problem, typically using only 1–2 of many possible parameters when making their selections. Although chemists tended to use the same parameters to select compounds, differing value preferences for these parameters led to an overall lack of consensus in compound selections. Moreover, what little agreement there was among the chemists was largely in what fragments were undesirable. Furthermore, chemists were often unaware of the parameters (such as compound size) which were statistically significant in their selections, and overestimated the number of parameters they employed. A critical evaluation of the problem space faced by medicinal chemists and cognitive models of categorization were especially useful in understanding the low consensus between chemists. PMID:23185259

  6. Generation of accurate peptide retention data for targeted and data independent quantitative LC-MS analysis: Chromatographic lessons in proteomics.

    PubMed

    Krokhin, Oleg V; Spicer, Vic

    2016-12-01

The emergence of data-independent quantitative LC-MS/MS analysis protocols further highlights the importance of high-quality reproducible chromatographic procedures. Knowing, controlling and being able to predict the effect of multiple factors that alter peptide RP-HPLC separation selectivity is critical for successful data collection for the construction of ion libraries. Proteomic researchers have often regarded RP-HPLC as a "black box", while a vast amount of research on peptide separation is readily available. In addition to obvious parameters, such as the type of ion-pairing modifier, stationary phase and column temperature, we describe the "mysterious" effects of gradient slope, column size and flow rate on peptide separation selectivity. Retention time variations due to these parameters are governed by the linear solvent strength (LSS) theory on a peptide level through the value of its slope S in the basic LSS equation, a parameter that can be accurately predicted. Thus, the application of shallower gradients, higher flow rates, or smaller columns will each increase the relative retention of peptides with higher S-values (long species with multiple positively charged groups). Simultaneous changes to these parameters that each drive shifts in separation selectivity in the same direction should be avoided. The unification of terminology represents another pressing issue in this field of applied proteomics that should be addressed to facilitate further progress. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
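The basic LSS equation referred to above is log k = log k_w − S·φ, with k the retention factor, φ the organic-solvent fraction and S the peptide-specific slope. A small sketch with invented k_w and S values showing why peptides with higher S gain relative retention under weaker eluent conditions (the effective result of a shallower gradient):

```python
def retention_factor(log_kw, S, phi):
    """Isocratic LSS relation: log k = log k_w - S * phi."""
    return 10 ** (log_kw - S * phi)


# Two hypothetical peptides: short/low-S vs long, multiply charged/high-S
short_pep = (6.0, 20.0)   # (log k_w, S) -- invented values
long_pep = (9.0, 45.0)

# Relative retention of the high-S peptide at two solvent strengths
ratio_weak = retention_factor(*long_pep, 0.25) / retention_factor(*short_pep, 0.25)
ratio_strong = retention_factor(*long_pep, 0.30) / retention_factor(*short_pep, 0.30)
```

Because the S values differ, the retention ratio is itself solvent-dependent (here by a factor of 10^(ΔS·Δφ)), which is exactly the selectivity shift the authors warn about when gradient slope, flow rate and column size are all changed in the same direction.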

  7. Multi Response Optimization of Process Parameters Using Grey Relational Analysis for Turning of Al-6061

    NASA Astrophysics Data System (ADS)

    Deepak, Doreswamy; Beedu, Rajendra

    2017-08-01

Al-6061 is among the most widely used materials in product manufacturing. The major qualities of aluminium are reasonably good strength, corrosion resistance and thermal conductivity, which have made it a suitable material for various applications. While manufacturing these products, companies strive to reduce production cost by increasing the material removal rate (MRR); meanwhile, surface quality needs to be kept at an acceptable level. This paper aims at bringing a compromise between the high-MRR and low-surface-roughness requirements by applying grey relational analysis (GRA). This article presents the selection of controllable parameters, namely longitudinal feed, cutting speed and depth of cut, to arrive at optimum values of MRR and surface roughness (Ra). The process parameters for experiments were selected based on Taguchi’s L9 array with two replications. Grey relational analysis, being well suited to multi-response optimization, was adopted for the optimization. The result shows that feed rate is the most significant factor influencing MRR and surface finish.
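The GRA step this record applies can be sketched as: normalize each response (larger-is-better for MRR, smaller-is-better for Ra), compute grey relational coefficients against the ideal sequence, and average them into a grade used to rank the runs. The experimental values below are invented, not the paper's L9 results:

```python
import numpy as np

mrr = np.array([120.0, 150.0, 180.0, 140.0])   # material removal rate per run
ra = np.array([1.8, 2.4, 2.0, 1.5])            # surface roughness per run

mrr_n = (mrr - mrr.min()) / (mrr.max() - mrr.min())   # larger-is-better
ra_n = (ra.max() - ra) / (ra.max() - ra.min())        # smaller-is-better


def grey_coeff(norm, zeta=0.5):
    """Grey relational coefficient vs the ideal sequence (all ones)."""
    delta = 1.0 - norm                    # deviation from the ideal
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())


grade = 0.5 * (grey_coeff(mrr_n) + grey_coeff(ra_n))  # equal response weights
best_run = int(np.argmax(grade))                      # run closest to the ideal
```

Ranking runs by `grade` converts the two conflicting objectives into a single Taguchi-style response, which is what makes GRA convenient for multi-response optimization of the L9 array.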

  8. Modeling multilayer x-ray reflectivity using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.

    2000-06-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual; the population is a collection of individuals. Each generation is built from the parent generation by applying operators (selection, crossover, mutation, etc.) to its members. Selection pressure drives the population to include "good" individuals, and for a large number of generations the best individuals approximate the optimum parameters. Some results of fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. The method can also be applied to design multilayers optimized for a target application.
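
    As a sketch of the generational loop described above, the toy below replaces the reflectivity simulation with a simple two-parameter model; the data, bounds, and operator settings are synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a multilayer reflectivity model; the real simulation code
# would go here. Parameters, bounds, and "experimental" data are synthetic.
def model(x, params):
    thickness, roughness = params
    return np.exp(-x / thickness) * np.cos(roughness * x) ** 2

x = np.linspace(0.1, 5.0, 50)
data = model(x, np.array([2.0, 1.5]))          # pretend measured curve

def fitness(p):
    return -np.mean((model(x, p) - data) ** 2)  # higher is better

lo, hi = np.array([0.5, 0.5]), np.array([4.0, 3.0])
pop = rng.uniform(lo, hi, size=(40, 2))        # initial population
best_p, best_f = None, -np.inf
for gen in range(60):
    f = np.array([fitness(p) for p in pop])
    if f.max() > best_f:                       # track best-ever individual
        best_f, best_p = f.max(), pop[np.argmax(f)].copy()
    parents = pop[np.argsort(f)[-20:]]         # truncation selection
    a = parents[rng.integers(0, 20, 40)]       # mating pairs
    b = parents[rng.integers(0, 20, 40)]
    w = rng.random((40, 1))
    pop = w * a + (1 - w) * b                  # blend crossover
    pop += rng.normal(0, 0.05, pop.shape)      # mutation
    pop = np.clip(pop, lo, hi)

print("best parameters:", best_p, "mse:", -best_f)
```

    The selection/crossover/mutation loop needs no gradient or good starting point, which is the property the abstract exploits for structured fitting landscapes.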

  9. Examining lag effects between industrial land development and regional economic changes: The Netherlands experience.

    PubMed

    Ustaoglu, Eda; Lavalle, Carlo

    2017-01-01

    In most empirical applications, forecasting models for the analysis of industrial land focus on the relationship between current values of economic parameters and industrial land use. This paper tests this assumption by focusing on the dynamic relationship between current and lagged values of the 'economic fundamentals' and industrial land development. Little effort has yet been devoted to developing land forecasting models that predict the demand for industrial land, beyond those applying static regressions or other statistical measures. In this research, we estimated a dynamic panel data model across 40 regions of the Netherlands from 2000 to 2008 to uncover the relationship between current and lagged values of economic parameters and industrial land development. Land-use regulations such as zoning policies, and other restrictions such as nature protection areas and geographical limitations in the form of water bodies or sludge areas, are expected to affect the supply of land, which is in turn reflected in industrial land market outcomes. Our results suggest that gross domestic product (GDP), industrial employment, gross value added (GVA), property price, and other parameters representing demand and supply conditions in the industrial market explain industrial land developments at high significance levels. It is also shown that, in contrast to current values, lagged values of the economic parameters have stronger relationships with industrial developments in the Netherlands. The findings support the use of lags between selected economic parameters and industrial land use in land forecasting applications.
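
    The lag-effect idea can be sketched with ordinary least squares on synthetic data. Note that the paper itself uses a dynamic panel estimator across regions; this single-series toy only gestures at the current-versus-lagged comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: industrial land development responding mostly to
# GDP one period back. The data and coefficients are invented for the sketch.
T = 40
gdp = np.cumsum(rng.normal(1.0, 0.3, T))            # GDP-like random walk
land = 0.2 * gdp + 0.8 * np.roll(gdp, 1) + rng.normal(0, 0.1, T)
gdp_now, gdp_lag, y = gdp[1:], gdp[:-1], land[1:]   # drop the wrapped index

X = np.column_stack([np.ones(T - 1), gdp_now, gdp_lag])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("current coeff:", beta[1], "lagged coeff:", beta[2])
```

    With a built-in lag effect in the data-generating process, the lagged coefficient dominates the current one, the qualitative pattern the abstract reports for the Netherlands.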

  11. Resource allocation to reproduction in animals.

    PubMed

    Kooijman, Sebastiaan A L M; Lika, Konstadia

    2014-11-01

    The standard Dynamic Energy Budget (DEB) model assumes that a fraction κ of mobilised reserve is allocated to somatic maintenance plus growth, while the rest is allocated to maturity maintenance plus maturation (in embryos and juveniles) or reproduction (in adults). All DEB parameters have been estimated for 276 animal species from most large phyla and all chordate classes. The goodness of fit is generally excellent. We compared the estimated values of κ with those that would maximise reproduction in fully grown adults with abundant food. Only 13% of these species show a reproduction rate close to the maximum possible (assuming that κ can be controlled); another 4% have κ lower than the optimal value, and 83% have κ higher than the optimal value. Strong empirical support hence exists for the conclusion that reproduction is generally not maximised. We also compared the parameters of the wild chicken with those of breeds selected for meat and egg production and found that the latter indeed maximise reproduction in terms of κ, while surface-specific assimilation was not affected by selection. We suggest that small values of κ relate to the down-regulation of maximum body size, and large values to the down-regulation of reproduction. We briefly discuss the ecological context for these findings. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.

  12. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The article investigates dependences among construction process parameters: the average integrated qualification of a shift, the number of workers per shift, and the average daily amount of completed work, analyzed via correlation coefficients. The basic data for the study were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). The Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and is well suited to describing the considered dependences. A development function describing the relationship among the considered construction process parameters is derived. It makes it possible to select the optimal quantitative and qualification structure of a brigade link for work in the next period, given a preset amount of work. A function of optimized work amounts, reflecting the interrelation of the key construction process parameters, is also developed; its values should be used as the average standard for scheduling the peak ("storming") periods of construction.
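
    A Cobb-Douglas dependence of the kind described can be recovered from data by log-linear least squares. The crew sizes, qualification grades, and work volumes below are synthetic stand-ins for the site data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged sketch: fit a Cobb-Douglas production function Q = A * L^a * K^b
# relating daily work volume (Q) to crew size (L) and mean qualification (K).
# All data and the true exponents (0.7, 0.4) are invented for illustration.
L = rng.uniform(4, 20, 120)                      # workers per shift
K = rng.uniform(1, 5, 120)                       # mean qualification grade
Q = 1.5 * L**0.7 * K**0.4 * rng.lognormal(0, 0.05, 120)

# Taking logs linearizes the model: log Q = log A + a*log L + b*log K
X = np.column_stack([np.ones(120), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
A, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"A={A:.2f}, a={a:.2f}, b={b:.2f}")        # should be near 1.5, 0.7, 0.4
```

    Once fitted, the function can be inverted to pick the crew size and qualification mix that deliver a preset work volume, which is how the article uses it for scheduling.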

  13. Bubble Entropy: An Entropy Almost Free of Parameters.

    PubMed

    Manis, George; Aktaruzzaman, Md; Sassi, Roberto

    2017-11-01

    Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and instead count the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The proposed definition is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
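
    The swap-counting core of the definition can be sketched as follows. This is a simplified illustration (plain Shannon entropy of the swap-count distribution), not the authors' exact conditional formulation across dimensions m and m+1.

```python
import numpy as np
from collections import Counter

def swap_count(v):
    """Number of swaps bubble sort needs to order v ascending."""
    v = list(v)
    swaps = 0
    for i in range(len(v) - 1):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                swaps += 1
    return swaps

def swap_entropy(x, m):
    """Shannon entropy of the swap-count distribution over m-embeddings.

    Simplified sketch of bubble entropy's swap-counting idea; the published
    definition additionally normalizes across dimensions m and m+1.
    """
    counts = Counter(swap_count(x[i:i + m]) for i in range(len(x) - m + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
noise = rng.normal(size=500)          # irregular series: many swap counts
trend = np.arange(500.0)              # perfectly ordered series: zero swaps
print(swap_entropy(noise, m=4), swap_entropy(trend, m=4))
```

    Because only the swap counts matter, no tolerance r appears anywhere, which is the parameter-elimination point the abstract makes.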

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalafoutas, Ioannis A.; Varsamidis, Athanasios; Thalassinou, Stella

    Purpose: To investigate the utility of the nested polymethyl methacrylate (PMMA) phantom (available in many CT facilities for CTDI measurements) as a tool for presenting and comparing the ways that two different CT automatic exposure control (AEC) systems respond to a phantom when various scan parameters and AEC protocols are modified. Methods: By offsetting the phantom's two components (the head phantom and the body ring) half-way along their longitudinal axis, a phantom with three sections of different x-ray attenuation was created. Scan projection radiographs (SPRs) and helical scans of the three-section phantom were performed on Toshiba Aquilion 64 and Philips Brilliance 64 CT scanners, with different scan parameter selections [scan direction, pitch factor, slice thickness and reconstruction interval (ST/RI), AEC protocol, and tube potential used for the SPRs]. The dose length product (DLP) values of each scan were recorded and the tube current (mA) values of the reconstructed CT images were plotted against the respective Z-axis positions on the phantom. Furthermore, measurements of the noise levels at the center of each phantom section were performed to assess the impact of mA modulation on image quality. Results: The mA modulation patterns of the two CT scanners were very dissimilar. The mA variations were more pronounced for the Aquilion 64, where changes in any of the aforementioned scan parameters affected both the mA modulation curves and the DLP values; however, the noise levels were affected only by changes in pitch, ST/RI, and AEC protocol selections. For the Brilliance 64, changes in pitch affected the mA modulation curves but not the DLP values, whereas only AEC protocol and SPR tube potential selections affected both the mA modulation curves and DLP values. The noise levels increased for smaller ST/RI, larger weight-category AEC protocol, and larger SPR tube potential selections. Conclusions: The nested PMMA dosimetry phantom can be effectively utilized for understanding CT AEC system performance and the ways that different scan conditions affect the mA modulation patterns, DLP values, and image noise. However, an in-depth analysis of why these two systems exhibited such different behaviors in response to the same phantom requires further investigation beyond the scope of this study.

  15. Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, Daniel I.

    The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. Calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise existing geochemical input values used for these calculations. The work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages, and is being conducted as part of the on-going maintenance program of the SRS PA programs, which periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., the recommended input values are biased to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). The document provides 1088 input values for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, the Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd value), the apparent solubility concentration (ks value), and the cementitious leachate impact factor.

  16. Economic selection index development for Beefmaster cattle I: Terminal breeding objective.

    PubMed

    Ochsner, K P; MacNeil, M D; Lewis, R M; Spangler, M L

    2017-03-01

    The objective of this study was to develop an economic selection index for Beefmaster cattle in a terminal production system in which bulls are mated to mature cows and all resulting progeny are harvested. National average prices from 2010 to 2014 were used to establish income and expenses for the system. Phenotypic and genetic parameter values among the selection criteria and goal traits were obtained from the literature. Economic values were estimated by simulating 100,000 animals and approximating the partial derivatives of the profit function, perturbing traits one at a time by 1 unit while holding the other traits constant at their respective means. Relative economic values (REV) for the terminal objective traits HCW, marbling score (MS), ribeye area (REA), 12th-rib fat (FAT), and feed intake (FI) were 91.29, 17.01, 8.38, -7.07, and -29.66, respectively. Consequently, improving the efficiency of beef production is expected to impact profitability more than improving carcass merit alone. The accuracy of the index lies between 0.338 (phenotypic selection) and 0.503 (breeding values known without error). Application of this index would aid Beefmaster breeders in their sire selection decisions, facilitating genetic improvement for a terminal breeding objective.
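
    The final weighted-sum step of applying such an index can be illustrated using the reported REV as weights. The breeding values for the two candidate sires are invented for the example; a real index is derived from (co)variance structures, which this sketch omits.

```python
import numpy as np

# Hedged sketch: rank two hypothetical sires by a linear index built from
# the relative economic values quoted in the abstract. EBVs are made up.
traits = ["HCW", "MS", "REA", "FAT", "FI"]
rev = np.array([91.29, 17.01, 8.38, -7.07, -29.66])

bull_a = np.array([0.8, 0.2, 0.1, 0.3, 0.5])    # heavy but high feed intake
bull_b = np.array([0.4, 0.6, 0.3, -0.2, -0.4])  # lighter, efficient eater

index_a = float(rev @ bull_a)
index_b = float(rev @ bull_b)
print("index A:", round(index_a, 2), "index B:", round(index_b, 2))
```

    Bull B ranks higher despite the smaller carcass-weight EBV because its negative feed-intake EBV is rewarded by the large negative REV on FI, echoing the abstract's point that efficiency outweighs carcass merit alone.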

  17. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Systematic wavelength selection for improved multivariate spectral analysis

    DOEpatents

    Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.

    1995-01-01

    Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g., the concentration of an analyte such as glucose in blood, or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.

  19. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed from past earthquakes, and their predictive ability is evaluated in terms of information gain, leading to the selection of six parameters for prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters, including a feed-forward neural network, a recurrent neural network, random forest, a multilayer perceptron, a radial basis neural network, and a support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78% for northern Pakistan.
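
    Information-gain-based parameter screening of the kind described can be sketched as follows; the feature values, labels, and split threshold below are synthetic illustrations, not the study's seismic data.

```python
import numpy as np
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels, threshold):
    """Information gain of a binary split on a continuous parameter."""
    mask = np.asarray(feature) >= threshold
    left, right = np.asarray(labels)[mask], np.asarray(labels)[~mask]
    n = len(labels)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - cond

# Synthetic illustration: a b-value-like parameter vs. an event/no-event label
feature = [0.8, 0.9, 1.1, 1.2, 0.7, 1.3, 0.85, 1.25]
labels  = [1,   1,   0,   0,   1,   0,   1,    0  ]
gain = info_gain(feature, labels, threshold=1.0)
print(gain)  # this split separates the classes perfectly, so gain = 1 bit
```

    Ranking the eight candidate parameters by such a gain score and keeping the top six is the screening step the abstract describes before model training.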

  20. Genetic variability and heritability of chlorophyll a fluorescence parameters in Scots pine (Pinus sylvestris L.).

    PubMed

    Čepl, Jaroslav; Holá, Dana; Stejskal, Jan; Korecký, Jiří; Kočová, Marie; Lhotáková, Zuzana; Tomášková, Ivana; Palovská, Markéta; Rothová, Olga; Whetten, Ross W; Kaňák, Jan; Albrechtová, Jana; Lstibůrek, Milan

    2016-07-01

    Current knowledge of the genetic mechanisms underlying the inheritance of photosynthetic activity in forest trees is generally limited, yet it is essential both for various practical forestry purposes and for a better understanding of broader evolutionary mechanisms. In this study, we investigated genetic variation underlying selected chlorophyll a fluorescence (ChlF) parameters in structured populations of Scots pine (Pinus sylvestris L.) grown on two sites under non-stress conditions. These parameters were derived from the OJIP part of the ChlF kinetics curve and characterize individual parts of primary photosynthetic processes associated, for example, with exciton trapping by light-harvesting antennae, energy utilization in photosystem II (PSII) reaction centers (RCs), and its transfer further down the photosynthetic electron-transport chain. An additive relationship matrix was estimated based on pedigree reconstruction, utilizing a set of highly polymorphic simple sequence repeat (SSR) markers. Variance decomposition was conducted using the animal genetic evaluation mixed-linear model. The majority of ChlF parameters in the analyzed pine populations showed significant additive genetic variation. Statistically significant heritability estimates were obtained for most ChlF indices, with the exception of the DI0/RC, φD0 and φP0 (Fv/Fm) parameters. Estimated heritabilities varied around 0.15, with a maximum of 0.23 for the ET0/RC parameter, which indicates electron-transport flux from QA to QB per PSII RC. No significant correlation was found between these indices and selected growth traits. Moreover, no genotype × environment interaction (G × E) was detected, i.e., no differences in genotype performance between sites. The absence of significant G × E in our study is interesting, given the relatively low heritability found for the majority of parameters analyzed. We therefore infer that polygenic variability of these indices is selectively neutral.
    © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Using noninvasive NIRS to evaluate the metabolic capability of infant brain

    NASA Astrophysics Data System (ADS)

    Huang, Lan; Ding, Haishu; Hou, Xinling; Zhou, Congle; Lie, Zhiguang; Wang, Guangzhi; Tian, Fenghua

    2005-01-01

    The value of cerebral oxygen saturation is important for optimal treatment and prognosis in neonates during the perinatal period. The purpose of this study was to investigate cerebral oxygenation in newborn infants and to obtain clinically characteristic parameters using steady-state spatially resolved near-infrared spectroscopy. The subjects consisted of 239 infants selected from two hospitals. The results show that the values of regional cerebral oxygen saturation (rSO2) for preterm infants with gestational ages of 27-32 weeks differed from those of term infants, and that the rSO2 values of sick term infants were better after treatment than before. These results suggest that rSO2 may be used as a clinical parameter to assess cerebral oxygenation in preterm and sick infants and to help avoid hypoxia.

  2. Missing-value estimation using linear and non-linear regression with Bayesian gene selection.

    PubMed

    Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R

    2003-11-22

    Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. For various reasons, values are frequently missing. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain the genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods in terms of normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
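
    A minimal sketch of the linear estimation rule, assuming the gene-selection step has already produced a few predictor genes: the data are synthetic, and the QR-based solve mirrors the fast-implementation point made in the abstract (the Bayesian selection step itself is not shown).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hedged sketch of regression-based missing-value estimation: predict a
# target gene's missing entries from 3 (already selected) predictor genes.
n_cond = 30
predictors = rng.normal(size=(n_cond, 3))             # selected genes
target = predictors @ np.array([1.2, -0.7, 0.5]) + rng.normal(0, 0.05, n_cond)

missing = np.zeros(n_cond, dtype=bool)
missing[[5, 12, 21]] = True                           # pretend these are lost

# Fit on the observed conditions via QR: X = QR, so beta = R^{-1} Q^T y
X = np.column_stack([np.ones((~missing).sum()), predictors[~missing]])
Q, R = np.linalg.qr(X)
beta = np.linalg.solve(R, Q.T @ target[~missing])

# Impute the "missing" entries and check against the held-out truth
X_miss = np.column_stack([np.ones(missing.sum()), predictors[missing]])
estimates = X_miss @ beta
rmse = np.sqrt(np.mean((estimates - target[missing]) ** 2))
print("error on held-out entries:", rmse)
```

    The QR factorization avoids forming and inverting the normal equations, which is why it is the standard fast, numerically stable route for repeated least-squares fits of this kind.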

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitez, Juan A.; Sanchez, Morella; Ruette, Fernando

    Simulated annealing (SA) and simplified GSA (SGSA) techniques were applied to parameter optimization of the parametric quantum chemistry method CATIVIC. A set of organic molecules was selected to test these techniques. The algorithms were compared on minimization of an error function with respect to experimental values. Results show that SGSA is more efficient than SA with respect to computer time; accuracy is similar for both methods, although there are important differences in the final sets of parameters.

  4. Optimization of the fiber laser parameters for local high-temperature impact on metal

    NASA Astrophysics Data System (ADS)

    Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.

    2016-11-01

    This paper presents a local laser heating process for the surface layer of a metal sample. The aim is to create a molten pool of the required depth by laser thermal treatment, such that during heating the metal temperature at no point of the molten zone reaches the boiling point of the base material. The laser power, exposure time, and laser beam spot size are selected as the variable parameters. A mathematical model of heat transfer in a semi-infinite body, applicable to a finite slab, is used for a preliminary theoretical estimate of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on scanning the search space (a zero-order method of constrained optimization). The calculated parameter values (the optimal set of laser radiation power, exposure time, and spot radius) are used to conduct a series of physical experiments to obtain a molten pool of the required depth. The two-stage experiment consists of a local laser treatment of a steel plate followed by examination of a microsection of the laser-irradiated region. The experimental results allow us to judge the adequacy of the calculations within the selected models.
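
    The zero-order scanning idea can be sketched as an exhaustive grid search under a no-boiling constraint. The depth and temperature formulas below are made-up monotone surrogates, not the paper's heat-transfer model; the grids and target are likewise illustrative.

```python
import numpy as np

# Hedged sketch of zero-order "scanning" optimization: grid over
# (power, exposure time, spot radius), discard combinations that would boil
# the metal, and keep the one whose predicted melt depth is closest to target.
def melt_depth(power, t, r):
    return 0.02 * power * np.sqrt(t) / (r ** 2)     # illustrative surrogate

def peak_temperature(power, t, r):
    return 300 + 1.5 * power * np.sqrt(t) / r       # illustrative surrogate

target_depth, t_boil = 0.5, 3100.0                  # mm, K (steel-ish)
best, best_err = None, np.inf
for power in np.linspace(50, 1000, 20):             # W
    for t in np.linspace(0.1, 2.0, 20):             # s
        for r in np.linspace(0.2, 1.0, 9):          # mm
            if peak_temperature(power, t, r) >= t_boil:
                continue                            # constraint: no boiling
            err = abs(melt_depth(power, t, r) - target_depth)
            if err < best_err:
                best, best_err = (power, t, r), err
print("best (P, t, r):", best, "depth error:", best_err)
```

    A full scan like this needs no derivatives and cannot miss a feasible region covered by the grid, at the cost of exponentially many evaluations, acceptable here because the search space is only three-dimensional.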

  5. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of ocean carbon dioxide measurements through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH, and partial pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH, and 86 samples for pCO2, collected during the 2014 cruise along the AR7W line in the Labrador Sea. Internal consistency was assessed by calculating each parameter with the CO2SYS software, using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2). Residuals of each parameter were calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest were obtained using different pairs of input parameters. As expected, the pair pH-pCO2 produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; accordingly, only ΔTA from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value exceeded its calculated counterpart, and therefore we could not observe the presence of organic bases in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  6. Image parameters for maturity determination of a composted material containing sewage sludge

    NASA Astrophysics Data System (ADS)

    Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.

    2013-07-01

    Composting is one of the best methods for the management of sewage sludge. In a properly conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of composted-material samples that can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw, and sewage sludge with rapeseed straw. The samples were photographed on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three exposure times were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the samples. Exemplary averaged values of selected parameters obtained from the images of the composted material on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparing the training, validation and test data sets needed to develop neural models for classifying the young compost stage.

  7. Guidelines for Assessment of Gait and Reference Values for Spatiotemporal Gait Parameters in Older Adults: The Biomathics and Canadian Gait Consortiums Initiative

    PubMed Central

    Beauchet, Olivier; Allali, Gilles; Sekhon, Harmehr; Verghese, Joe; Guilain, Sylvie; Steinmetz, Jean-Paul; Kressig, Reto W.; Barden, John M.; Szturm, Tony; Launay, Cyrille P.; Grenier, Sébastien; Bherer, Louis; Liu-Ambrose, Teresa; Chester, Vicky L.; Callisaya, Michele L.; Srikanth, Velandai; Léonard, Guillaume; De Cock, Anne-Marie; Sawa, Ryuichi; Duque, Gustavo; Camicioli, Richard; Helbostad, Jorunn L.

    2017-01-01

    Background: Gait disorders, a highly prevalent condition in older adults, are associated with several adverse health consequences. Gait analysis allows qualitative and quantitative assessments of gait that improve the understanding of mechanisms of gait disorders and the choice of interventions. This manuscript aims (1) to give consensus guidance for clinical and spatiotemporal gait analysis based on the recorded footfalls in older adults aged 65 years and over, and (2) to provide reference values for spatiotemporal gait parameters based on the recorded footfalls in healthy older adults free of cognitive impairment and multi-morbidities. Methods: International experts working in a network of two different consortiums (i.e., Biomathics and Canadian Gait Consortium) participated in this initiative. First, they identified items of standardized information following the usual procedure of formulation of consensus findings. Second, they merged databases including spatiotemporal gait assessments with the GAITRite® system and clinical information from the “Gait, cOgnitiOn & Decline” (GOOD) initiative and the Generation 100 (Gen 100) study. Only healthy—free of cognitive impairment and multi-morbidities (i.e., ≤ 3 therapeutics taken daily)—participants aged 65 and older were selected. Age, sex, body mass index, mean values, and coefficients of variation (CoV) of gait parameters were used for the analyses. Results: A standardized systematic assessment of three categories of items (demographics, clinical information, and gait characteristics, i.e., clinical and spatiotemporal gait analysis based on the recorded footfalls) was selected for the proposed guidelines. Two complementary sets of items were distinguished: a minimal data set and a full data set. In addition, a total of 954 participants (mean age 72.8 ± 4.8 years, 45.8% women) were recruited to establish the reference values. 
Performance of spatiotemporal gait parameters based on the recorded footfalls declined with increasing age (mean values and CoV) and demonstrated sex differences (mean values). Conclusions: Based on an international multicenter collaboration, we propose consensus guidelines for gait assessment and spatiotemporal gait analysis based on the recorded footfalls, and reference values for healthy older adults. PMID:28824393

  8. PERSEUS QC: preparing statistic data sets

    NASA Astrophysics Data System (ADS)

    Belokopytov, Vladimir; Khaliulin, Alexey; Ingerov, Andrey; Zhuk, Elena; Gertman, Isaac; Zodiatis, George; Nikolaidis, Marios; Nikolaidis, Andreas; Stylianou, Stavros

    2017-09-01

    The Desktop Oceanographic Data Processing Module was developed for visual analysis of interdisciplinary cruise measurements. The program supports data selection based on different criteria, map plotting, horizontal sea sections, and vertical depth profiles. Data in the area of interest can be selected according to a set of different physical and chemical parameters, complemented by additional parameters such as the cruise number, ship name, and time period. Visual analysis of a set of vertical profiles in the selected area makes it possible to assess the quality of the data, the location and time of the in-situ measurements, and to exclude any questionable data from the statistical analysis. For each selected set of profiles, the average vertical profile, the minimal and maximal values of the parameter under examination, and the root mean square (r.m.s.) are estimated. These estimates are compared with the parameter ranges set for each sub-region by the MEDAR/MEDATLAS-II and SeaDataNet2 projects. In the framework of the PERSEUS project, certain parameters that lacked a range were calculated from scratch, while some of the previously used ranges were re-defined using more comprehensive data sets based on the SeaDataNet2, SESAME and PERSEUS projects. In some cases additional sub-regions were used to redefine the ranges more precisely. The recalculated ranges are used to improve the PERSEUS Data Quality Control.
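
    The range-based QC step can be sketched as follows; the statistics match those named above (mean, min, max, r.m.s.), while the salinity profile and the accepted range are invented examples rather than MEDAR/MEDATLAS-II values.

```python
# A minimal sketch of range-based quality control: a profile is summarized
# (mean, min, max, r.m.s.) and individual measurements are flagged against
# a sub-region range. The profile and range below are illustrative only.

def profile_stats(values):
    """Summary statistics for one vertical profile."""
    n = len(values)
    mean = sum(values) / n
    rms = (sum(v * v for v in values) / n) ** 0.5
    return {"mean": mean, "min": min(values), "max": max(values), "rms": rms}

def flag_out_of_range(values, lo, hi):
    """Indices of measurements outside the accepted [lo, hi] range."""
    return [i for i, v in enumerate(values) if not lo <= v <= hi]

salinity = [38.1, 38.2, 39.9, 38.0]               # one profile (made up)
stats = profile_stats(salinity)
suspect = flag_out_of_range(salinity, 36.0, 39.5)  # the 39.9 value is flagged
```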

  9. Coordinated ionospheric and magnetospheric observations from the ISIS 2 satellite by the ISIS 2 experimenters. Volume 1: Optical auroral images and related direct measurements

    NASA Technical Reports Server (NTRS)

    Murphree, J. S.

    1980-01-01

    A representative set of data from ISIS 2 covering a range of operating modes and geophysical conditions is presented. The data show the typical values and range of ionospheric and magnetospheric characteristics, as viewed from 1400 km with the ISIS 2 instruments. The definition of each data set depends partly on geophysical parameters and partly on satellite operating mode. Preceding the data set is a description of the organizational parameters and a review of the objectives and general characteristics of the data set. The data are shown as a selection from 12 different data formats. Each data set has a different selection of formats, but uniformity of a given format selection is preserved throughout each data set.

  10. Intelligent Weather Agent

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor)

    2006-01-01

    Method and system for automatically displaying, visually and/or audibly and/or by an audible alarm signal, relevant weather data for an identified aircraft pilot, when each of a selected subset of measured or estimated aviation situation parameters, corresponding to a given aviation situation, has a value lying in a selected range. Each range for a particular pilot may be a default range, may be entered by the pilot and/or may be automatically determined from experience and may be subsequently edited by the pilot to change a range and to add or delete parameters describing a situation for which a display should be provided. The pilot can also verbally activate an audible display or visual display of selected information by verbal entry of a first command or a second command, respectively, that specifies the information required.

  11. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
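
    A simplified univariate sketch of the multiscale entropy machinery involved (the study itself uses a multivariate estimator): coarse-grain the signal at each scale and compute sample entropy with embedding dimension m and tolerance r, the kind of input parameters whose selection the study investigates. Parameter values here are generic, not the study's.

```python
# Univariate multiscale sample entropy, heavily simplified for illustration.
import math

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """-ln of the conditional probability that sequences matching for m
    points (within tolerance r) also match for m+1 points."""
    def matches(dim):
        templates = [x[i:i + dim] for i in range(len(x) - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Sample entropy of the coarse-grained signal at each scale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```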

  12. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for any combination of the input parameters. After calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied to minimize the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.

  13. Discriminative detection of deposited radon daughters on CR-39 track detectors using TRIAC II code

    NASA Astrophysics Data System (ADS)

    Patiris, D. L.; Ioannides, K. G.

    2009-07-01

    A method for detecting deposited 218Po and 214Po by a spectrometric study of CR-39 solid state nuclear track detectors is described. The method is based on the application of software-imposed selection criteria concerning the geometrical and optical properties of the tracks, which correspond to tracks created by alpha particles of specific energy falling on the detector at given angles of incidence. The selection criteria were based on a preliminary study of track parameters (major and minor axes and mean value of brightness) using the TRIAC II code. Since no linear relation was found between the energy and the geometric characteristics of the tracks (major and minor axes), an additional parameter was needed to classify the tracks according to the particles' energy. Because the brightness of a track is associated with its depth, the mean value of brightness was chosen for this purpose. To reduce the energy of the particles emitted by deposited 218Po and 214Po to a quantifiable range, the detectors were covered with an aluminum absorber. In this way, the discrimination of radon daughters was finally accomplished by proper selection among all registered tracks. This method could be applied as a low-cost tool for studying the behavior of radon daughters in air.

  14. New tools for evaluating LQAS survey designs

    PubMed Central

    2014-01-01

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the ‘grey region’ are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions. PMID:24528928

  15. New tools for evaluating LQAS survey designs.

    PubMed

    Hund, Lauren

    2014-02-15

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the 'grey region' are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions.
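
    The predictive-value calculation described in the abstract can be sketched with a binomial model and a discrete prior over true coverage; the design numbers and prior grid below are illustrative, and the functions are not the paper's software tools.

```python
# Sketch of LQAS classification accuracy: a design (sample size n, decision
# rule d) classifies an area as "high coverage" when at least d of n sampled
# individuals are covered; PPV/NPV follow from a binomial model plus a prior
# over true coverage. All numbers below are made up for illustration.
from math import comb

def p_classify_high(p, n, d):
    """P(at least d successes | true coverage p) under Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

def predictive_values(prior, n, d, benchmark):
    """prior: list of (coverage, weight) pairs. Returns (PPV, NPV) of the
    design relative to the benchmark defining 'truly high' coverage."""
    total = sum(w for _, w in prior)
    p_pos = sum(w * p_classify_high(p, n, d) for p, w in prior)
    true_pos = sum(w * p_classify_high(p, n, d) for p, w in prior if p >= benchmark)
    true_neg = sum(w * (1 - p_classify_high(p, n, d)) for p, w in prior if p < benchmark)
    ppv = true_pos / p_pos
    npv = true_neg / (total - p_pos)
    return ppv, npv

# Uniform prior over a coarse coverage grid; n = 19, d = 13, benchmark 70%
prior = [(p / 100, 1.0) for p in range(5, 100, 10)]
ppv, npv = predictive_values(prior, 19, 13, 0.70)
```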

  16. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
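
    The partitioning logic in the claim can be sketched as follows; the mode labels, the per-mode mean "submodel", and the tolerance are invented stand-ins for the patented estimator, not its actual implementation.

```python
# Minimal sketch of mode-partitioned surveillance: training data are split
# by operating mode, one submodel is fit per mode, and the submodel matching
# the current mode produces the estimate used to judge asset status.

def fit_submodels(training):
    """training: list of (mode, value). The per-mode mean stands in here
    for whatever estimator the real system would train per partition."""
    by_mode = {}
    for mode, value in training:
        by_mode.setdefault(mode, []).append(value)
    return {m: sum(v) / len(v) for m, v in by_mode.items()}

def surveil(observed, mode, submodels, tolerance):
    """Select the submodel for the current mode and compare its estimate
    with the observed signal value."""
    estimate = submodels[mode]
    status = "ok" if abs(observed - estimate) <= tolerance else "alarm"
    return estimate, status

models = fit_submodels([("startup", 10.0), ("startup", 12.0),
                        ("steady", 50.0), ("steady", 52.0)])
est, status = surveil(49.0, "steady", models, tolerance=5.0)
```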

  17. Estimating historical atmospheric mercury concentrations from silver mining and their legacies in present-day surface soil in Potosí, Bolivia

    NASA Astrophysics Data System (ADS)

    Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John

    2011-12-01

    Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. 
The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.

  18. Functional assessment of three wetlands constructed by the West Virginia Division of Highways

    DOT National Transportation Integrated Search

    2000-11-01

    This study focused on soil nutrients, wildlife usage, diversity of vascular plants and major wildlife groups, and productivity as indicators. To provide a comparison to baseline values for these parameters, we selected three natural wetlands to serve...

  19. Genetic parameters and prediction of genotypic values for root quality traits in cassava using REML/BLUP.

    PubMed

    Oliveira, E J; Santana, F A; Oliveira, L A; Santos, V S

    2014-08-28

    The aim of this study was to estimate the genetic parameters and predict the genotypic values of root quality traits in cassava (Manihot esculenta Crantz) using restricted maximum likelihood (REML) and best linear unbiased prediction (BLUP). A total of 471 cassava accessions were evaluated over two years of cultivation. The evaluated traits included amylose content (AML), root dry matter content (DMC), cyanogenic compounds (CyC), and starch yield (StYi). Estimates of individual broad-sense heritability were low for AML (hg(2) = 0.07 ± 0.02), medium for StYi and DMC, and high for CyC. The heritability of AML was substantially improved when based on the mean of accessions (hm(2) = 0.28), indicating that strategies such as increasing the number of repetitions can be used to increase selective efficiency. In general, the observed genotypic values were very close to the predicted average of the improved population, most likely due to the high accuracy (>0.90), especially for DMC, CyC, and StYi. Gains via selection of the 30 best genotypes for each trait were 4.8 and 3.2% for an increase and decrease in AML, respectively, increases of 10.75 and 74.62% for DMC and StYi, respectively, and a decrease of 89.60% for CyC, in relation to the overall mean of the genotypic values. Genotypic correlations between the quality traits of the cassava roots were generally favorable, although low in magnitude. The REML/BLUP method was adequate for estimating genetic parameters and predicting genotypic values, making it useful for cassava breeding.
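
    The shrinkage logic behind BLUP can be illustrated for a balanced design, where the predicted genotypic value pulls each accession mean toward the grand mean in proportion to the heritability of accession means; the h2m = 0.28 value echoes the AML figure above, and the accession means are invented.

```python
# A highly simplified sketch of BLUP-style shrinkage for a balanced design;
# this is not the REML/BLUP machinery of the study, only its core intuition.

def blup_balanced(accession_means, h2m):
    """Predicted genotypic values: grand mean + h2m * (deviation).
    Low h2m shrinks predictions strongly toward the grand mean."""
    grand = sum(accession_means) / len(accession_means)
    return [grand + h2m * (m - grand) for m in accession_means]

# Invented accession means with grand mean 24.0; h2m as for AML above
preds = blup_balanced([20.0, 24.0, 28.0], h2m=0.28)
```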

  20. Perceived benefits of a radiology resident mentoring program: comparison of residents with self-selected vs assigned mentors.

    PubMed

    Yamada, Kei; Slanetz, Priscilla J; Boiselle, Phillip M

    2014-05-01

    It has been suggested that assigned mentoring relationships are less successful than those that develop by free choice. This study evaluates radiology residents' overall experience with a mentoring program and compares the responses of those who self-selected mentors with those who were assigned mentors. A voluntary Web-based survey was sent to 27 radiology residents in postgraduate years 3-5. Data collected included the following: year in residency, method of mentor assignment, duration of relationship, frequency and types of communication, perceived value of mentoring, overall satisfaction with the program, and the perceived impact of mentoring. Twenty-five of 27 residents (93%) responded, with 14 having self-selected mentors (56%) and 11 having assigned mentors (44%). Both groups unanimously agreed that mentoring is beneficial or critical to their training; however, those residents with self-selected mentors were significantly more satisfied with the mentoring program (4 vs 3.3; P = .04) and more likely to consider their mentor as their primary mentor compared with those with assigned mentors (11 [79%] vs 4 [36%]; P = .049). Although all residents perceived a benefit, residents with self-selected mentors rated almost all mentoring parameters more positively than those with assigned mentors, although most of these parameters did not reach statistical significance. Residents highly value the importance of mentoring. However, residents who self-select their mentors are more likely to be satisfied with a mentoring program. Copyright © 2014 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  1. User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2

    NASA Technical Reports Server (NTRS)

    Lee, F. C.

    1980-01-01

    A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices, for buck, boost, and buck/boost switching regulators which are relevant to SCM design considerations are included to facilitate frequent references.

  2. Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator

    DOEpatents

    Johnstone, Carol J [Warrenville, IL

    2011-02-01

    A FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, constrains the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. The seven equations with seven unknowns can then be solved to yield the values of all the parameters and thereby fully specify a FFAG.

  3. IN718 Additive Manufacturing Properties and Influences

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.

    2015-01-01

    The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "sanitized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of controlled build parameters used in the SLM process, a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range that was centered about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.

  4. IN718 Additive Manufacturing Properties and Influences

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.

    2015-01-01

    The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "generalized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of controlled build parameters used in the SLM process, a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of-experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.

  5. The influence of uncertainty and location-specific conditions on the environmental prioritisation of human pharmaceuticals in Europe.

    PubMed

    Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J

    2016-05-01

    The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach to the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100 × 100 km² environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with an EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. A new functional method to choose the target lobe for lung volume reduction in emphysema - comparison with the conventional densitometric method.

    PubMed

    Hetzel, Juergen; Boeckeler, Michael; Horger, Marius; Ehab, Ahmed; Kloth, Christopher; Wagner, Robert; Freitag, Lutz; Slebos, Dirk-Jan; Lewis, Richard Alexander; Haentschel, Maik

    2017-01-01

    Lung volume reduction (LVR) improves breathing mechanics by reducing hyperinflation. Lobar selection usually focuses on choosing the most destroyed emphysematous lobes as seen on an inspiratory CT scan. However, it has never been shown to what extent these densitometric CT parameters predict the least deflation of an individual lobe during expiration. The addition of expiratory CT analysis allows measurement of the extent of lobar air trapping and could therefore provide additional functional information for choice of potential treatment targets. To determine lobar vital capacity/lobar total capacity (LVC/LTC) as a functional parameter for lobar air trapping using on an inspiratory and expiratory CT scan. To compare lobar selection by LVC/LTC with the established morphological CT density parameters. 36 patients referred for endoscopic LVR were studied. LVC/LTC, defined as delta volume over maximum volume of a lobe, was calculated using inspiratory and expiratory CT scans. The CT morphological parameters of mean lung density (MLD), low attenuation volume (LAV), and 15th percentile of Hounsfield units (15%P) were determined on an inspiratory CT scan for each lobe. We compared and correlated LVC/LTC with MLD, LAV, and 15%P. There was a weak correlation between the functional parameter LVC/LTC and all inspiratory densitometric parameters. Target lobe selection using lowest lobar deflation (lowest LVC/LTC) correlated with target lobe selection based on lowest MLD in 18 patients (50.0%), with the highest LAV in 13 patients (36.1%), and with the lowest 15%P in 12 patients (33.3%). CT-based measurement of deflation (LVC/LTC) as a functional parameter correlates weakly with all densitometric CT parameters on a lobar level. Therefore, morphological criteria based on inspiratory CT densitometry partially reflect the deflation of particular lung lobes, and may be of limited value as a sole predictor for target lobe selection in LVR.

  7. The Utility of Selection for Military and Civilian Jobs

    DTIC Science & Technology

    1989-07-01

    parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not

  8. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
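    The damping behaviour described above (steepest-descent-like far from the minimum, Newton-like near it) can be sketched independently of the rotorcraft model. Below is a minimal two-parameter Levenberg-Marquardt with a numerical Jacobian, applied to a toy linear fit; this is an illustrative sketch, not the NLSCIDNT implementation:

```python
def levenberg_marquardt_2p(residuals, p, n_iter=200, lam=1e-2):
    """Minimal two-parameter Levenberg-Marquardt. A large damping lam makes the
    step behave like steepest descent (far from the minimum); a small lam makes
    it behave like Gauss-Newton / modified Newton-Raphson (near the minimum)."""
    def cost(q):
        return sum(r * r for r in residuals(q))
    for _ in range(n_iter):
        r = residuals(p)
        h = 1e-7
        # Forward-difference Jacobian columns for the two parameters.
        j0 = [(r2 - r1) / h for r1, r2 in zip(r, residuals([p[0] + h, p[1]]))]
        j1 = [(r2 - r1) / h for r1, r2 in zip(r, residuals([p[0], p[1] + h]))]
        # Damped normal equations (J^T J + lam I) step = -J^T r, solved as 2x2.
        a00 = sum(x * x for x in j0) + lam
        a11 = sum(x * x for x in j1) + lam
        a01 = sum(x * y for x, y in zip(j0, j1))
        g0 = -sum(x * ri for x, ri in zip(j0, r))
        g1 = -sum(x * ri for x, ri in zip(j1, r))
        det = a00 * a11 - a01 * a01
        trial = [p[0] + (a11 * g0 - a01 * g1) / det,
                 p[1] + (a00 * g1 - a01 * g0) / det]
        if cost(trial) < cost(p):
            p, lam = trial, lam * 0.5   # improving: move toward Newton behavior
        else:
            lam *= 2.0                  # worsening: move toward steepest descent
    return p

# Toy identification problem: recover slope and intercept of y = 2x + 1.
xs = [i / 19.0 for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]
a, b = levenberg_marquardt_2p(
    lambda p: [p[0] * x + p[1] - y for x, y in zip(xs, ys)], [0.0, 0.0])
```
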

  9. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

    High-precision laser ranging over one hundred kilometers requires a detector with very strong sensitivity to extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used, owing to its high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system specifications is of great importance for improving photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson distribution model and take into account important detector parameters such as dark count rate, dead time, and quantum efficiency. We improve the optimization of the detection model and select appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out in Matlab and compared with actual test results, verifying the rationality of the model. The model has certain reference value in engineering applications.
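    The Poisson model underlying such analyses can be illustrated with a minimal sketch: the probability of at least one avalanche-triggering event per range gate, from signal primaries scaled by quantum efficiency plus expected dark events. All numbers are assumptions for illustration, not parameters from the paper:

```python
import math

def detection_probability(n_signal, quantum_efficiency, dark_events):
    """Poisson model: P(detect) = 1 - exp(-(eta * n_s + n_dark)), the chance
    of at least one avalanche-triggering event in the range gate."""
    mean_events = quantum_efficiency * n_signal + dark_events
    return 1.0 - math.exp(-mean_events)

# Illustrative sweep: mean signal return of 2 photons per gate, 0.01 expected
# dark events per gate (assumed numbers).
probs = {eta: detection_probability(2.0, eta, 0.01) for eta in (0.2, 0.4, 0.6)}
```
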

  10. The selection criteria elements of X-ray optics system

    NASA Astrophysics Data System (ADS)

    Plotnikova, I. V.; Chicherina, N. V.; Bays, S. S.; Bildanov, R. G.; Stary, O.

    2018-01-01

    When designing new modifications of X-ray tomography systems, difficulties arise in the right choice of the elements of the X-ray optical system. At present this problem is solved by practical trial: selecting values of the corresponding parameters, such as the voltage on the X-ray tube, taking into account the thickness and type of the studied material. To reduce the time and labor of design it is necessary to create selection criteria and to determine the key parameters and characteristics of the elements. In this article two main elements of the X-ray optical system, the X-ray tube and the X-ray detector, are considered. Criteria for the choice of elements, their key characteristics, the main parameter dependences, quality indicators, and recommendations for the choice of elements of X-ray systems are presented.

  11. Selective Laser Melting of Pure Copper

    NASA Astrophysics Data System (ADS)

    Ikeshoji, Toshi-Taka; Nakamura, Kazuya; Yonehara, Makiko; Imai, Ken; Kyogoku, Hideki

    2017-12-01

    Appropriate building parameters for selective laser melting of 99.9% pure copper powder were investigated at relatively high laser power of 800 W for hatch pitch in the range from 0.025 mm to 0.12 mm. The highest relative density of the built material was 99.6%, obtained at hatch pitch of 0.10 mm. Building conditions were also studied using transient heat analysis in finite element modeling of the melting and solidification of the powder layer. The estimated melt pool length and width were comparable to values obtained by observations using a thermoviewer. The trend for the melt pool width versus the hatch pitch agreed with experimental values.

  12. Selective Laser Melting of Pure Copper

    NASA Astrophysics Data System (ADS)

    Ikeshoji, Toshi-Taka; Nakamura, Kazuya; Yonehara, Makiko; Imai, Ken; Kyogoku, Hideki

    2018-03-01

    Appropriate building parameters for selective laser melting of 99.9% pure copper powder were investigated at relatively high laser power of 800 W for hatch pitch in the range from 0.025 mm to 0.12 mm. The highest relative density of the built material was 99.6%, obtained at hatch pitch of 0.10 mm. Building conditions were also studied using transient heat analysis in finite element modeling of the melting and solidification of the powder layer. The estimated melt pool length and width were comparable to values obtained by observations using a thermoviewer. The trend for the melt pool width versus the hatch pitch agreed with experimental values.

  13. Deleterious Mutations, Apparent Stabilizing Selection and the Maintenance of Quantitative Variation

    PubMed Central

    Kondrashov, A. S.; Turelli, M.

    1992-01-01

    Apparent stabilizing selection on a quantitative trait that is not causally connected to fitness can result from the pleiotropic effects of unconditionally deleterious mutations, because as N. Barton noted, "... individuals with extreme values of the trait will tend to carry more deleterious alleles ...." We use a simple model to investigate the dependence of this apparent selection on the genomic deleterious mutation rate, U; the equilibrium distribution of K, the number of deleterious mutations per genome; and the parameters describing directional selection against deleterious mutations. Unlike previous analyses, we allow for epistatic selection against deleterious alleles. For various selection functions and realistic parameter values, the distribution of K, the distribution of breeding values for a pleiotropically affected trait, and the apparent stabilizing selection function are all nearly Gaussian. The additive genetic variance for the quantitative trait is kQa², where k is the average number of deleterious mutations per genome, Q is the proportion of deleterious mutations that affect the trait, and a² is the variance of pleiotropic effects for individual mutations that do affect the trait. In contrast, when the trait is measured in units of its additive standard deviation, the apparent fitness function is essentially independent of Q and a²; and β, the intensity of selection, measured as the ratio of additive genetic variance to the "variance" of the fitness curve, is very close to s = U/k, the selection coefficient against individual deleterious mutations at equilibrium. Therefore, this model predicts appreciable apparent stabilizing selection if s exceeds about 0.03, which is consistent with various data. However, the model also predicts that β must equal Vm/VG, the ratio of new additive variance for the trait introduced each generation by mutation to the standing additive variance. Most, although not all, estimates of this ratio imply apparent stabilizing selection weaker than generally observed. A qualitative argument suggests that even when direct selection is responsible for most of the selection observed on a character, it may be essentially irrelevant to the maintenance of variation for the character by mutation-selection balance. Simple experiments can indicate the fraction of observed stabilizing selection attributable to the pleiotropic effects of deleterious mutations. PMID:1427047
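    The quoted relations are easy to evaluate numerically. A small worked example with assumed parameter values (not estimates from the paper):

```python
# Assumed illustrative values for the quantities defined in the abstract.
U = 1.0    # genomic deleterious mutation rate
s = 0.04   # selection coefficient against an individual deleterious mutation
Q = 0.1    # proportion of deleterious mutations affecting the trait
a2 = 0.05  # variance of pleiotropic effects of mutations that affect the trait

k = U / s         # mean number of deleterious mutations per genome at equilibrium
V_A = k * Q * a2  # additive genetic variance of the trait: k * Q * a^2
beta = U / k      # intensity of apparent stabilizing selection, equal to s
```

With these numbers, s = 0.04 exceeds the 0.03 threshold quoted above, so the model would predict appreciable apparent stabilizing selection.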

  14. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920
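    The iterate-to-equilibrium protocol described above can be sketched for the simplest possible case, a single additive locus under quadratic stabilizing selection; the multilocus dynamics studied in the paper are richer, so this is only an illustration of the recursion step:

```python
def iterate_to_equilibrium(p0, effect, s, opt=0.0, tol=1e-12, max_iter=100000):
    """Iterate the allele-frequency recursion under quadratic stabilizing
    selection, w(G) = 1 - s * (G - opt)**2, until an equilibrium is reached.
    Genotypic values are +effect, 0, -effect (one additive locus)."""
    p = p0
    w_AA = 1.0 - s * (effect - opt) ** 2
    w_Aa = 1.0 - s * (0.0 - opt) ** 2
    w_aa = 1.0 - s * (-effect - opt) ** 2
    for _ in range(max_iter):
        q = 1.0 - p
        w_bar = p * p * w_AA + 2.0 * p * q * w_Aa + q * q * w_aa
        p_next = (p * p * w_AA + p * q * w_Aa) / w_bar
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# With the optimum at the heterozygote's value, this one-locus recursion keeps
# the polymorphism: the allele frequency converges to 1/2 from either side.
p_eq = iterate_to_equilibrium(p0=0.6, effect=1.0, s=0.05)
```

In this one-locus case the quadratic optimum at the heterozygote value acts like overdominance and preserves variation; the abstract's point is that as more loci are added, the variation maintained shrinks rapidly.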

  15. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models with different types of excitation. The observed quantities in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.

  16. Noise and Dynamical Pattern Selection in Solidification

    NASA Technical Reports Server (NTRS)

    Kurtze, Douglas A.

    1997-01-01

    The overall goal of this project was to understand in more detail how a pattern-forming system can adjust its spacing. "Pattern-forming systems," in this context, are nonequilibrium continua whose state is determined by an experimentally adjustable control parameter. Above some critical value of the control parameter, the system has available to it a range of linearly stable, spatially periodic steady states, each characterized by a spacing which can lie anywhere within some band of values. These systems include directional solidification, where the solidification front is planar when the ratio of growth velocity to thermal gradient is below its critical value, but takes on a cellular shape above critical. They also include systems without interfaces, such as Benard convection, where it is the fluid velocity field which changes from zero to something spatially periodic as the control parameter is increased through its critical value. The basic question to be addressed was that of how the system chooses one of its myriad possible spacings when the control parameter is above critical, and in particular the role of noise in the selection process. Previous work on explosive crystallization had suggested that one spacing in the range should be preferred, in the sense that weak noise should eventually drive the system to that spacing. That work had also suggested a heuristic argument for identifying the preferred spacing. The project had three main objectives: to understand in more detail how a pattern-forming system can adjust its spacing; to investigate how noise drives a system to its preferred spacing; and to extend the heuristic argument for a preferred spacing in explosive crystallization to other pattern-forming systems.

  17. The predictive value of quantitative DCE metrics for immediate therapeutic response of high-intensity focused ultrasound ablation (HIFU) of symptomatic uterine fibroids.

    PubMed

    Wei, Chao; Fang, Xin; Wang, Chuan-Bin; Chen, Yu; Xu, Xiao; Dong, Jiang-Ning

    2017-12-04

    The aim of this study was to investigate the value of quantitative DCE-MRI parameters for predicting the immediate non-perfused volume ratio (NPVR) of HIFU therapy in the treatment of symptomatic uterine fibroids. A total of 78 symptomatic uterine fibroids in 65 female patients were treated with US-HIFU therapy. All patients underwent conventional MRI and DCE-MRI scans 1 day before and 3 days after HIFU treatment. Permeability parameters Ktrans, Kep, Ve, and Vp and T1 perfusion parameters BF and BV of pretreatment were measured as a baseline, while NPVR was used to assess immediate ablation efficiency. Data were assigned to NPVR ≥ 70% and NPVR < 70% groups. Then, the predictive performances of different parameters for ablation efficacy were studied to seek the optimal cut-off value, and the length of time to calculate the variable parameters in each case was recorded. (1) It was observed that the pretreatment Ktrans, Kep, Ve, and BF values of the NPVR ≥ 70% group were significantly lower compared to the NPVR < 70% group (p < 0.05). (2) The immediate NPVR was negatively correlated with the Ktrans, BF, and BV values before HIFU treatment (r = -0.561, -0.712, and -0.528, respectively; p < 0.05 for all). (3) The AUCs of pretreatment Ktrans, BF, and BV values, and of Ktrans combined with BF, used to predict the immediate NPVR were 0.810, 0.909, 0.795, and 0.922, respectively (p < 0.05 for all). (4) The mean time to calculate the variable parameters in each case was 7.5 min. Higher Ktrans, BF, and BV values at baseline DCE-MRI suggested a poor ablation efficacy of HIFU therapy for symptomatic uterine fibroids, while the pretreatment DCE-MRI parameters could be useful biomarkers for predicting the ablation efficacy in select cases. The software used to calculate the DCE-MRI parameters was simple, quick, and easy to incorporate into clinical practice.

  18. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed, and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved values for the cold-case OLR for high inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.

  19. UNDERSTANDING VARIATION IN PARTITION COEFFICIENT, KD, VALUES, VOLUME III: AMERICIUM, ARSENIC, CURIUM, IODINE, NEPTUNIUM, RADIUM, AND TECHNETIUM

    EPA Science Inventory

    This report describes the conceptualization, measurement, and use of the partition (or distribution) coefficient, Kd, parameter, and the geochemical aqueous solution and sorbent properties that are most important in controlling adsorption/retardation behavior of selected contamin...

  20. Agronomic, chemical and genetic profiles of hot peppers (Capsicum annuum ssp.).

    PubMed

    De Masi, Luigi; Siviero, Pietro; Castaldo, Domenico; Cautela, Domenico; Esposito, Castrese; Laratta, Bruna

    2007-08-01

    A study on the morphology, productive yield, main quality parameters and genetic variability of eight landraces of hot pepper (Capsicum annuum ssp.) from Southern Italy was performed. Morphological characters of berries and productivity values were evaluated by agronomic analyses. Chemical and genetic investigations were performed by HPLC and random amplified polymorphic DNA (RAPD)-PCR, respectively. In particular, carotenoid and capsaicinoid (pungency) contents were considered as main quality parameters of hot pepper. For the eight selected samples, genetic similarity values were calculated from the generated RAPD fragments and a dendrogram of genetic similarity was constructed. All eight landraces exhibited characteristic RAPD patterns that allowed their characterization. Agro-morphological and chemical determinations were found to be adequate for selection, but they proved useful only for plants grown under the same environmental conditions. RAPD application may provide a more reliable way based on DNA identification. The results of our study led to the identification of three noteworthy populations, suitable for processing, which fitted into different clusters of the dendrogram.

  1. Suitability of spring wheat varieties for the production of best quality pizza.

    PubMed

    Tehseen, Saima; Anjum, Faqir Muhammad; Pasha, Imran; Khan, Muhammad Issa; Saeed, Farhan

    2014-08-01

    The selection of appropriate wheat cultivars is an imperative issue in product development and realization. The nutritional profiling of plants and their cultivars, along with their suitability for the development of specific products, is of considerable interest for multi-national food chains. In this project, Pizza-Hut Pakistan provided funds for the selection of a suitable newly developed Pakistani spring variety for pizza production. In this regard, the recent varieties were selected and evaluated for nutritional and functional properties for pizza production. Additionally, emphasis was paid to assessing all varieties for their physico-chemical attributes, rheological parameters and mineral content. Furthermore, pizzas prepared from the respective flour samples were evaluated for sensory attributes. Results showed that Anmool, Abadgar, Imdad, SKD-1, Shafaq and Moomal had higher values for protein, gluten content, Pelshenke value and SDS sedimentation, and were relatively better in the studied parameters than the other varieties; these were therefore considered best for good-quality pizza production. TD-1 obtained the significantly highest score for pizza flavor, and the lowest score was observed for the wheat variety Kiran. Moreover, it is concluded from the current study that all wheat varieties except TJ-83 and Kiran exhibited better results for flavor.

  2. The effect of the combination of acids and tannin in diet on the performance and selected biochemical, haematological and antioxidant enzyme parameters in grower pigs.

    PubMed

    Stukelj, Marina; Valencak, Zdravko; Krsnik, Mladen; Svete, Alenka Nemec

    2010-03-06

    The abolition of in-feed antibiotics and chemotherapeutics as growth promoters has stimulated the swine industry to look for alternatives such as organic acids, botanicals, probiotics and tannin. The objective of the present study was to compare the effects of a diet with a combination of acids and tannin, a diet with organic acids, and a diet without growth promoters on the growth performance and selected biochemical, haematological and antioxidant enzyme parameters in grower pigs. Tannin is more natural and cheaper than organic acids, but possibly equally effective with regard to growth performance. Thirty-six 7 week old grower pigs, divided into three equal groups, were used in a three week feeding trial. Group I was fed basal diet, group II basal diet with added organic acids and group III basal diet with added organic and inorganic acids and tannin. Pigs were weighed before and after feeding and observed daily. Blood was collected before and after the feeding trial for the determination of selected biochemical, haematological and antioxidant enzyme parameters. One-way ANOVA was used to assess any diet related changes of all the parameters. Paired t-test was used to evaluate changes of blood parameters individually in each group of growers before and after feeding. No clinical health problems related to diet were noted during the three week feeding trial. The average daily gain (ADG) and selected blood parameters were not affected by the addition to basal diet of either acids and tannin or of organic acids alone. Selected blood parameters remained within the reference range before and after the feeding trial, with the exception of total serum proteins that were below the lower value of reference range at both times. The significant changes (paired t-test) observed in individual groups before and after the feeding trial are related to the growth of pigs.
Diet with acids and tannin did not improve the growth performance of grower pigs but had no deleterious effects on selected blood parameters. The possibility of beneficial effects of adding acids and tannin in diets on growth performance over a longer period, however, could not be excluded.

  3. Monte Carlo simulation of elongating metallic nanowires in the presence of surfactants

    NASA Astrophysics Data System (ADS)

    Gimenez, M. Cecilia; Reinaudi, Luis; Leiva, Ezequiel P. M.

    2015-12-01

    Nanowires of different metals undergoing elongation were studied by means of canonical Monte Carlo simulations and the embedded atom method representing the interatomic potentials. The presence of a surfactant medium was emulated by the introduction of an additional stabilization energy, represented by a parameter Q. Several values of the parameter Q and temperatures were analyzed. In general, it was observed for all studied metals that, as Q increases, there is a greater elongation before the nanowire breaks. In the case of silver, linear monatomic chains several atoms long formed at intermediate values of Q and low temperatures. Similar observations were made for the case of silver-gold alloys when the medium interacted selectively with Ag.
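    Canonical Monte Carlo simulations of this kind rest on the Metropolis acceptance rule, and a surfactant stabilization term like Q simply shifts the energy change of a move. A toy sketch with made-up energies (not EAM values):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng):
    """Canonical Monte Carlo acceptance: always accept a move that lowers the
    energy, otherwise accept with probability exp(-delta_e / kT)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

# A stabilization energy Q lowers the cost of moves that expose more surface,
# so elongation-type moves are accepted more often (illustrative numbers).
delta_e, q_term, kT, n = 0.30, 0.25, 0.05, 10000
rng = random.Random(0)
accepted_without_q = sum(metropolis_accept(delta_e, kT, rng) for _ in range(n))
rng = random.Random(0)
accepted_with_q = sum(metropolis_accept(delta_e - q_term, kT, rng) for _ in range(n))
```

Because both runs consume the same seeded random stream, every move accepted without the Q term is also accepted with it, so the comparison isolates the effect of the stabilization energy.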

  4. Analysis of glottal source parameters in Parkinsonian speech.

    PubMed

    Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry

    2016-08-01

    Diagnosis and monitoring of Parkinson's disease has a number of challenges as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's or act as an objective decision support tool. Recent research on speech-based measures has demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results, and the area under the ROC curve (AUC) values were used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 out of the 7 glottal parameters tested produced AUC values of over 0.9.
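    The AUC used above has a direct rank interpretation: the probability that a randomly chosen sample from one group scores higher than a randomly chosen sample from the other. A minimal sketch with made-up parameter values (not data from the study):

```python
def auc(healthy_scores, parkinsonian_scores):
    """Probability that a random Parkinsonian sample scores higher than a
    random healthy one (equal to the area under the ROC curve); ties count 0.5."""
    pairs = 0.0
    for h in healthy_scores:
        for p in parkinsonian_scores:
            if p > h:
                pairs += 1.0
            elif p == h:
                pairs += 0.5
    return pairs / (len(healthy_scores) * len(parkinsonian_scores))

# Illustrative glottal-parameter values for two small groups (made-up numbers).
healthy = [0.10, 0.15, 0.12, 0.20, 0.18]
parkinsonian = [0.22, 0.30, 0.19, 0.28]
auc_value = auc(healthy, parkinsonian)
```
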

  5. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2003-01-01

    A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.

  6. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim, and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. A MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.

  7. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the adoption of the Taguchi method in the selective laser melting process of a combustion chamber sector, using numerical and physical experiments, to achieve minimum temperature deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors for compensating the effect of the residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
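    Grey relational analysis reduces multiple quality criteria to a single grade per parameter set. A minimal sketch with made-up normalized scores (zeta is the usual distinguishing coefficient of 0.5):

```python
def grey_relational_grades(alternatives, ideal, zeta=0.5):
    """Compare each alternative's normalized quality vector with the ideal
    sequence and average the per-criterion grey relational coefficients
    into a single grade per alternative."""
    deltas = [[abs(a - i) for a, i in zip(alt, ideal)] for alt in alternatives]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Normalized (0..1, larger-is-better) made-up scores for three parameter sets
# on two quality criteria (e.g. low distortion, high strength).
alts = [[0.9, 0.6], [0.7, 0.8], [0.5, 0.5]]
grades = grey_relational_grades(alts, ideal=[1.0, 1.0])
best = grades.index(max(grades))
```

The parameter set with the highest grade is the multi-criteria optimum, which is how the L9 experiments would be ranked.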

  8. Just noticeable differences of open quotient and asymmetry coefficient in singing voice.

    PubMed

    Henrich, Nathalie; Sundin, Gunilla; Ambroise, Daniel; d'Alessandro, Christophe; Castellengo, Michèle; Doval, Boris

    2003-12-01

    This study aims to explore the perceptual relevance of the variations of glottal flow parameters and to what extent a small variation can be detected. Just Noticeable Differences (JNDs) have been measured for three values of open quotient (0.4, 0.6, and 0.8) and two values of asymmetry coefficient (2/3 and 0.8), and the effect of changes of vowel, pitch, vibrato, and amplitude parameters has been tested. Two main groups of subjects have been analyzed: a group of 20 untrained subjects and a group of 10 trained subjects. The results show that the JND for open quotient is highly dependent on the target value: an increase of the JND is noticed when the open quotient target value is increased. The relative JND is constant: deltaOq/Oq = 14% for the untrained and 10% for the trained. In the same way, the JND for asymmetry coefficient is also slightly dependent on the target value--an increase of the asymmetry coefficient value leads to a decrease of the JND. The results show that there is no effect from the selected vowel or frequency (two values have been tested), but that the addition of a vibrato has a small effect on the JND of open quotient. The choice of an amplitude parameter also has a great effect on the JND of open quotient.

  9. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters with statistical methods were performed in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes of femurs were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class. Thereafter, the average anatomical plate suitable for that new femur was selected from the three available sizes of plates. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed. Meanwhile, 20 new femurs were assigned to the classes to which they belong. Thereafter, suitable condylar buttress plates were determined and selected.
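    The final assignment step can be sketched in simplified form: with equal priors and a common identity covariance, Bayes discriminant analysis reduces to assigning a femur to the nearest class mean. All parameter vectors below are made-up numbers, not measurements from the study:

```python
def assign_class(femur_params, class_means):
    """Nearest-class-mean assignment (Bayes discriminant analysis with equal
    priors and identity covariance reduces to this rule)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: sq_dist(femur_params, class_means[c]))

# Hypothetical class means for three femur parameters
# (head diameter, length, shaft width, in mm; illustrative values only).
class_means = {
    "small":  [38.0, 410.0, 26.0],
    "medium": [43.0, 445.0, 29.0],
    "large":  [48.0, 480.0, 32.0],
}
new_femur = [44.0, 450.0, 30.0]
plate_size = assign_class(new_femur, class_means)
```

The returned class then determines which of the three average plate sizes is selected for the new femur.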

  10. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions were computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and the crossing level. Using values for the mean and standard deviation predicted by the method of moments, the distribution parameters were estimated. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
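The empirical procedure can be sketched in a few lines; here a first-order autoregressive process stands in for the simulated stationary Gaussian processes, and the autocorrelation value and crossing level are illustrative choices, not those of the report:

```python
import random
import statistics

def simulate_ar1(n, rho, seed=0):
    """Stationary Gaussian AR(1): x[t] = rho*x[t-1] + sqrt(1-rho^2)*e[t],
    started from its stationary N(0, 1) distribution."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    s = (1.0 - rho ** 2) ** 0.5
    for _ in range(n - 1):
        x.append(rho * x[-1] + s * rng.gauss(0.0, 1.0))
    return x

def overshoot_durations(x, level):
    """Lengths (in samples) of consecutive runs above the crossing level."""
    runs, count = [], 0
    for v in x:
        if v > level:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

x = simulate_ar1(100_000, rho=0.9, seed=42)
runs = overshoot_durations(x, level=1.0)
print(len(runs), statistics.mean(runs))
```

The empirical distribution of `runs` is what would then be fitted by the method of moments for each autocorrelation parameter and crossing level.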

  11. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process.

    PubMed

    Dhandapani, N V; Thangarasu, V S; Sureshkannan, G

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of cutting tool coating for the required quality and quantity of production. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision machined part. The suggested prediction method is based on experimental analyses of parameters in different compositions of input conditions, which would benefit industry through the standardization of high speed CNC end milling processes. The results provide a basis for selecting parameters to obtain better surface roughness values, as predicted by the case study results.

  12. Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations

    NASA Technical Reports Server (NTRS)

    Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.

    2005-01-01

    A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.

  13. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process

    PubMed Central

    Dhandapani, N. V.; Thangarasu, V. S.; Sureshkannan, G.

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of cutting tool coating for the required quality and quantity of production. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision machined part. The suggested prediction method is based on experimental analyses of parameters in different compositions of input conditions, which would benefit industry through the standardization of high speed CNC end milling processes. The results provide a basis for selecting parameters to obtain better surface roughness values, as predicted by the case study results. PMID:26881267

  14. Influence of the power supply parameters on the projectile energy in the permanent magnet electrodynamic accelerator

    NASA Astrophysics Data System (ADS)

    Waindok, Andrzej; Piekielny, Paweł

    2017-10-01

    The main objective of this research is to investigate how the power supply parameters influence the kinetic energy of the movable element, commonly called a projectile or bullet. Calculation and measurement results of transient characteristics for an electrodynamic accelerator with permanent magnet support are presented in the paper. The calculations were made using a field-circuit model, which includes the parameters of the power supply, the mass of the bullet, and the friction phenomenon. Characteristics of energy and muzzle velocity versus supply voltage (50 V to 350 V) and capacitance (60 mF to 340.5 mF) were determined as well. A measurement verification of selected points of the calculated characteristics was carried out for the investigated values of muzzle velocity. Good conformity between calculation and measurement results was obtained. In conclusion, the presented characteristics of the muzzle velocity and energy of the projectile versus the power supply parameters indicate that such accelerators could be used for fatigue testing of materials.
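The energies involved are simple quadratic relations; a minimal sketch, with the capacitance taken from the upper end of the studied range and the projectile mass a purely hypothetical value:

```python
def capacitor_energy(c_farads, v_volts):
    """Energy stored in the supply capacitor bank: E = C*V^2/2 (joules)."""
    return 0.5 * c_farads * v_volts ** 2

def muzzle_energy(mass_kg, velocity_ms):
    """Kinetic energy of the projectile: E = m*v^2/2 (joules)."""
    return 0.5 * mass_kg * velocity_ms ** 2

# sweep the supply voltage over the studied range at C = 340.5 mF
for v in (50, 200, 350):
    print(v, capacitor_energy(0.3405, v))

# hypothetical 10 g projectile at 300 m/s
print(muzzle_energy(0.01, 300.0))
```

The ratio of muzzle energy to stored energy would give the conversion efficiency at each operating point of the characteristics.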

  15. Parameter optimization of parenchymal texture analysis for prediction of false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2016-03-01

    This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCM). The following parameters were systematically varied: mammographic views used, upper limit of the ROI window size used for adaptive ROI selection, GLCM distance offsets, and gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance of the mediolateral oblique (MLO) view, upper ROI size limit of 143.36 mm (2048 pixels²), GLCM distance offset combination range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels was set. The highest ROC performance value of AUC = 0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, upper ROI window equal to 17.92 mm (256 pixels²), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p < 0.05) predictor and associated with higher odds of FP recall in 12 out of 14 total instances.
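The GLCM construction and a few Haralick features (including the sum average highlighted above) can be sketched in pure Python; the tiny image and the single-pixel offset are illustrative, not the study's configuration:

```python
from collections import Counter

def glcm(img, dx, dy):
    """Symmetric, normalized gray-level co-occurrence matrix for one
    (dx, dy) distance offset, returned as {(i, j): probability}."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[(img[y][x], img[y2][x2])] += 1
                counts[(img[y2][x2], img[y][x])] += 1  # symmetry
    total = sum(counts.values())
    return {ij: c / total for ij, c in counts.items()}

def haralick(p):
    """Three Haralick features from a normalized GLCM."""
    contrast = sum(pr * (i - j) ** 2 for (i, j), pr in p.items())
    homogeneity = sum(pr / (1 + abs(i - j)) for (i, j), pr in p.items())
    sum_average = sum(pr * (i + j) for (i, j), pr in p.items())
    return contrast, homogeneity, sum_average

# toy 4-gray-level image patch
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, dx=1, dy=0)
print(haralick(p))
```

Varying the offset `(dx, dy)` and the number of gray levels is exactly the parameter sweep whose optimum the methodology searches for.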

  16. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis, denominated IMMAN (an acronym for Information theory-based CheMoMetrics ANalysis), are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface, for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches each. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. In addition, a generalization scheme for the previously defined differential Shannon's entropy is discussed, and Jeffreys information measure is introduced for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing value processing, dataset partitioning, and browsing. Moreover, single-parameter and ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. A graphical representation of the Shannon's entropy distribution for MD-calculating software is also provided.
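Two of the supervised parameters mentioned, Shannon's entropy and information gain, can be sketched for an already-discretized feature; the toy data are invented for illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(Y) in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Information gain IG(Y; X) = H(Y) - H(Y|X) for a discretized
    feature X against class labels Y."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        sub = [y for x, y in zip(feature, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

y = ["a", "a", "b", "b"]
x_good = [0, 0, 1, 1]   # perfectly separates the classes
x_bad = [0, 1, 0, 1]    # uninformative
print(information_gain(x_good, y), information_gain(x_bad, y))
```

Ranking features by such a score, per parameter or by a multi-criteria ensemble, is the core of the rank-based selection the software performs.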

  17. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever performance can be measured via simulation or experiment. The results of two experiments, transient regulation and command following, are presented.
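A minimal sketch of the basic textbook form of the Hooke and Jeeves pattern search (the paper uses a variation of it); the quadratic objective stands in for a measured performance index, and all step sizes are illustrative:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    """Derivative-free pattern search (Hooke & Jeeves, 1961), minimizing f."""
    def explore(base, fbase, h):
        # try +/- h along each coordinate, keeping any improvement
        x, fx = list(base), fbase
        for i in range(len(x)):
            for d in (h, -h):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x, fx, h = list(x0), f(x0), step
    for _ in range(max_iter):
        if h < tol:
            break
        xn, fn = explore(x, fx, h)
        if fn < fx:
            # pattern move: keep jumping along the successful direction
            while True:
                xp = [2 * a - b for a, b in zip(xn, x)]
                x, fx = xn, fn
                xn, fn = explore(xp, f(xp), h)
                if fn >= fx:
                    break
        else:
            h *= shrink  # no improvement: refine the mesh
    return x, fx

# toy performance index with its minimum at (3, -1)
best, val = hooke_jeeves(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0])
print(best, val)
```

Because only function values are used, `f` can just as well be a simulation run or a hardware experiment, which is the property the learning system exploits.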

  18. T2 values of articular cartilage in clinically relevant subregions of the asymptomatic knee.

    PubMed

    Surowiec, Rachel K; Lucas, Erin P; Fitzcharles, Eric K; Petre, Benjamin M; Dornan, Grant J; Giphart, J Erik; LaPrade, Robert F; Ho, Charles P

    2014-06-01

    In order for T2 mapping to become more clinically applicable, reproducible subregions and standardized T2 parameters must be defined. This study sought to: (1) define clinically relevant subregions of knee cartilage using bone landmarks identifiable on both MR images and during arthroscopy and (2) determine healthy T2 values and T2 texture parameters within these subregions. Twenty-five asymptomatic volunteers (age 18-35) were evaluated with a sagittal T2 mapping sequence. Manual segmentation was performed by three raters, and cartilage was divided into twenty-one subregions modified from the International Cartilage Repair Society Articular Cartilage Mapping System. Mean T2 values and texture parameters (entropy, variance, contrast, homogeneity) were recorded for each subregion, and inter-rater and intra-rater reliability was assessed. The central regions of the condyles had significantly higher T2 values than the posterior regions (P < 0.05) and higher variance than the posterior region on the medial side (P < 0.001). The central trochlea had significantly greater T2 values than the anterior and posterior condyles. The central lateral plateau had lower T2 values, lower variance, higher homogeneity, and lower contrast than nearly all subregions in the tibia. The central patellar regions had higher entropy than the superior and inferior regions (each P ≤ 0.001). Repeatability was good to excellent for all subregions. Significant differences in mean T2 values and texture parameters were found between subregions in this carefully selected asymptomatic population, which suggest that there is normal variation of T2 values within the knee joint. The clinically relevant subregions were found to be robust as demonstrated by the overall high repeatability.

  19. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. Based on a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method under three strategies: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (without interaction effect); and 3) simultaneous selection for grain yield, stability, and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and, consequently, safety and credibility in the selection of superior progenies for grain yield. The gain from selecting the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, progenies 4, 11, and 3 (selected in all environments and the mean environment, and by adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  20. The effect of pulsed IR-light on the rheological parameters of blood in vitro.

    PubMed

    Nawrocka-Bogusz, Honorata; Marcinkowska-Gapińska, Anna

    2014-01-01

    In this study we attempted to assess the effect of light of 855 nm wavelength (IR light) on the rheological parameters of blood in vitro. Heparin was used as the anticoagulant. The source of IR light was an applicator connected to a special generator (Viofor JPS®). The blood samples were irradiated for 30 min. During the irradiation the energy density grew at twelve-second intervals from 1.06 J/cm² to 8.46 J/cm², then dropped back to the initial value; the process was repeated cyclically. The study of blood viscosity was carried out with a Contraves LS40 oscillatory-rotational rheometer, with the shear rate decreasing from 100 to 0.01 s⁻¹ over 5 min (flow curve) and with constant-frequency oscillations at f = 0.5 Hz and decreasing shear amplitude γ̇₀ (viscoelasticity measurements). The analysis of the rotational measurements was based on the hematocrit, plasma viscosity, and whole blood viscosity at four selected shear rates, and on the numerical values of the parameters of Quemada's rheological model: k₀ (indicating red cell aggregability), k∞ (indicating red cell rigidity), and γ̇c (the shear rate at which rouleaux formation begins). In oscillatory experiments we estimated the viscous and elastic components of the complex blood viscosity in the same groups of patients. We observed a decrease of the viscous component of complex viscosity (η') at γ̇₀ = 0.2 s⁻¹, while other rheological parameters, k₀, k∞, and relative blood viscosity at selected shear rates, showed only a weak tendency towards smaller values after irradiation. The effect of IR light on the rheological properties of blood in vitro turned out to be rather neutral in the studied group of patients.
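A hedged sketch of one common formulation of Quemada's model relating relative blood viscosity to the shear rate through the parameters k₀, k∞, and γ̇c; the numeric parameter values below are illustrative, not the measured ones from this study:

```python
def quemada_relative_viscosity(shear_rate, hct, k0, k_inf, gamma_c):
    """Relative blood viscosity eta/eta_plasma in a common form of
    Quemada's model: eta_rel = (1 - k*Hct/2)**-2 with a shear-dependent
    intrinsic viscosity k = (k0 + k_inf*sqrt(g/gc)) / (1 + sqrt(g/gc))."""
    r = (shear_rate / gamma_c) ** 0.5
    k = (k0 + k_inf * r) / (1 + r)
    return (1 - 0.5 * k * hct) ** -2

# illustrative parameters: hematocrit 0.45, k0 = 4.0, k_inf = 1.8, gamma_c = 2.0 1/s
for g in (0.1, 1.0, 10.0, 100.0):
    print(g, round(quemada_relative_viscosity(g, 0.45, 4.0, 1.8, 2.0), 2))
```

With k₀ > k∞ the model reproduces shear thinning: viscosity is high at low shear (aggregation) and falls toward a plateau at high shear.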

  1. Investigation into the influence of laser energy input on selective laser melted thin-walled parts by response surface method

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui

    2018-04-01

    Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, derived from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness, and hardness, to the process parameters considered in this research (laser power, scanning speed, and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) was used, implementing a central composite design, to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses, and the effects of the process parameters on each response were determined. Then, a numerical optimization was performed to find the optimal process set at which the quality features are at their desired values. Based on this study, the relationship between process parameters and the SLM thin-walled structure was revealed, and the corresponding optimal process parameters can be used to manufacture thin-walled parts with high quality.

  2. [Predictive value of postural and dynamic walking parameters after high-volume lumbar puncture in normal pressure hydrocephalus].

    PubMed

    Mary, P; Gallisa, J-M; Laroque, S; Bedou, G; Maillard, A; Bousquet, C; Negre, C; Gaillard, N; Dutray, A; Fadat, B; Jurici, S; Olivier, N; Cisse, B; Sablot, D

    2013-04-01

    Normal pressure hydrocephalus (NPH) was described by Adams et al. (1965). The common clinical presentation is the triad of gait disturbance, cognitive decline, and urinary incontinence. Although these symptoms are suggestive, they are not specific for diagnosis. The improvement of symptoms after high-volume lumbar puncture (hVLP) could be a strong criterion for diagnosis. We tried to determine a specific pattern of dynamic walking and posture parameters in NPH, to specify the evolution of these criteria after hVLP, and to determine predictive values for ventriculoperitoneal shunting (VPS) efficiency. Sixty-four patients were followed over seven years, from January 2002 to June 2009. We identified three periods: before hVLP (S1), after hVLP (S2), and after VPS (S3). The following walking and posture criteria were assessed: walking parameters were speed, step length, and step rhythm; posture parameters were statokinesigram total length and surface, length according to the surface (LFS), average value of equilibration for lateral movements (Xmoyen) and anteroposterior movements (Ymoyen), and total movement length on the lateral axis (longX) and anteroposterior axis (longY). Among the 64 patients included, 22 had VPS and 16 were investigated in S3. All kinematic criteria were decreased in S1 compared with normal values. hVLP improved these criteria significantly (S2). Among posture parameters, only the total length and surface of the statokinesigram were abnormal in S1, with no improvement in S2. A gain in speed greater than or equal to 0.15 m/s between S1 and S2 predicted the efficacy of VPS with a positive predictive value (PPV) of 87.1% and a negative predictive value (NPV) of 69.7% (area under the ROC curve [AUC]: 0.86). Kinematic walking parameters are the most disrupted and are partially improved after hVLP. These parameters could be an interesting test for selecting candidates for VPS. These data have to be confirmed in a larger cohort.
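The reported predictive values follow directly from a 2x2 confusion table; a minimal sketch with invented counts (not the study's data) chosen only to show the arithmetic:

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from a 2x2 confusion table:
    PPV = TP/(TP+FP), NPV = TN/(TN+FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# hypothetical counts for a test of VPS responders vs non-responders
ppv, npv = ppv_npv(tp=87, fp=13, tn=70, fn=30)
print(ppv, npv)
```

Sweeping the speed-gain threshold (here 0.15 m/s) and recomputing these rates at each cutoff is what traces out the ROC curve whose AUC is reported.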

  3. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target rate. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
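The Lagrangian selection rule described above can be sketched directly; the candidate QP values and their distortion/rate pairs are invented for illustration (in the real encoder they come from re-quantizing the same residuals):

```python
def select_qp(candidates, lam):
    """Pick the quantization parameter minimizing the rate-distortion
    Lagrangian J = D + lambda * R.
    `candidates` maps QP -> (distortion, rate_in_bits)."""
    return min(candidates, key=lambda qp: candidates[qp][0] + lam * candidates[qp][1])

# hypothetical measurements for one block: lower QP = less distortion, more bits
rd = {22: (10.0, 900.0), 27: (25.0, 520.0), 32: (60.0, 300.0), 37: (140.0, 170.0)}
for lam in (0.02, 0.2, 0.8):
    print(lam, select_qp(rd, lam))
```

As the multiplier grows, rate is penalized more heavily and the rule drifts toward coarser quantization, which is exactly the trade-off the encoder tunes against the target bit rate.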

  4. Expected value based fuzzy programming approach to solve integrated supplier selection and inventory control problem with fuzzy demand

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Sunarsih; Kartono

    2018-01-01

    In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected value based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to be purchased from each supplier in each time period and the optimal amount of each product to be stored in inventory in each time period were determined with minimum total cost, and the inventory level was sufficiently close to the reference level.

  5. Variable anodic thermal control coating

    NASA Technical Reports Server (NTRS)

    Gilliland, C. S.; Duckett, J. (Inventor)

    1983-01-01

    A process for providing a thermal control, solar stable surface coating for aluminum surfaces adapted to be exposed to solar radiation, wherein selected values within the range of 0.10 to 0.72 thermal emittance (ε_τ) and 0.2 to 0.4 solar absorptance (α_s) are reproducibly obtained by anodizing the surface area in a chromic acid solution for a selected period of time. The rate, voltage, and time, along with the initial ε_τ and α_s, the temperature of the chromic acid solution, the acid concentration of the solution, and the material anodized, determine the final values of ε_τ and α_s. 9 Claims, 5 Drawing Figures.

  6. TIM Version 3.0 beta Technical Description and User Guide - Appendix A - User's Guidance for TIM v.3.0(beta)

    EPA Pesticide Factsheets

    Provides detailed guidance to the user on how to select input parameters for running the Terrestrial Investigation Model (TIM) and recommendations for default values that can be used when no chemical-specific or species-specific information is available.

  7. Analytic theory for the selection of 2-D needle crystal at arbitrary Peclet number

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh

    1989-01-01

    An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.

  8. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.

  9. Identification of dominant interactions between climatic seasonality, catchment characteristics and agricultural activities on Budyko-type equation parameter estimation

    NASA Astrophysics Data System (ADS)

    Xing, Wanqiu; Wang, Weiguang; Shao, Quanxi; Yong, Bin

    2018-01-01

    Quantifying the partition of precipitation (P) into evapotranspiration (E) and runoff (Q) is of great importance for global and regional water availability assessment. The Budyko framework serves as a powerful tool to make a simple and transparent estimation of the partition, using a single parameter to characterize the shape of the Budyko curve for a "specific basin", where the single parameter reflects the overall effect of climatic seasonality, catchment characteristics (e.g., soil, topography, and vegetation), and agricultural activities (e.g., cultivation and irrigation). At the regional scale, these influencing factors are interconnected, and the interactions between them can also affect the estimation of the single parameter of Budyko-type equations. Here we employ the multivariate adaptive regression splines (MARS) model to estimate the Budyko curve shape parameter (n in Choudhury's equation, one form of the Budyko framework) of 96 selected catchments across China, using a data set of long-term averages for climatic seasonality, catchment characteristics, and agricultural activities. Results show that average storm depth (ASD), vegetation coverage (M), and the seasonality index of precipitation (SI) are three statistically significant factors affecting the Budyko parameter. More importantly, four pairs of interactions are recognized by the MARS model: the interaction between CA (percentage of cultivated land area to total catchment area) and ASD shows that cultivation can weaken the reducing effect of high ASD (>46.78 mm) on the estimated Budyko parameter. Drought (represented by a value of the Palmer drought severity index < -0.74) and uneven distribution of annual rainfall (represented by a coefficient of variation of precipitation > 0.23) tend to enhance the reduction of the Budyko parameter by large SI (>0.797). Low vegetation coverage (<34.56%) is likely to intensify the rising effect on the evapotranspiration ratio of IA (percentage of irrigation area to total catchment area). The Budyko n values estimated by the MARS model reproduce well the values calculated from observations for the selected 96 catchments (R = 0.817, MAE = 4.09). Compared to a multiple stepwise regression model estimating the parameter n with the influencing factors as independent inputs, the MARS model enhances the capability of the Budyko framework for assessing water availability at the regional scale using readily available data.
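Choudhury's equation mentioned above has a closed form; a minimal sketch with illustrative climate values (not data from the 96 catchments):

```python
def choudhury_evaporation(p, e0, n):
    """Long-term mean evapotranspiration from Choudhury's (1999) form of
    the Budyko framework: E = P*E0 / (P**n + E0**n)**(1/n),
    where P is precipitation and E0 potential evapotranspiration."""
    return p * e0 / (p ** n + e0 ** n) ** (1.0 / n)

# illustrative values: P = 800 mm/yr, E0 = 1200 mm/yr
for n in (1.0, 1.8, 2.6):
    e = choudhury_evaporation(800.0, 1200.0, n)
    print(n, round(e, 1), round(e / 800.0, 3))
```

A larger shape parameter n pushes the evaporative ratio E/P upward toward its water-and-energy limits, which is why estimating n well (here via MARS) controls the whole partition.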

  10. Aerodynamic configuration design using response surface methodology analysis

    NASA Technical Reports Server (NTRS)

    Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit

    1993-01-01

    An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.

  11. Hybrid approach of selecting hyperparameters of support vector machine for regression.

    PubMed

    Jeng, Jin-Tsong

    2006-06-01

    To select the hyperparameters of the support vector machine for regression (SVR), a hybrid approach is proposed to determine the kernel parameter of the Gaussian kernel function and the epsilon value of Vapnik's epsilon-insensitive loss function. The proposed hybrid approach combines a competitive agglomeration (CA) clustering algorithm and a repeated SVR (RSVR) approach. Since the CA clustering algorithm finds the nearly "optimal" number of clusters and the cluster centers during the clustering process, it is applied to select the Gaussian kernel parameter. Additionally, an RSVR approach that relies on the standard deviation of the training error is proposed to obtain the epsilon of the loss function. Finally, two functions, one real data set (a time series of the quarterly unemployment rate for West Germany), and the identification of a nonlinear plant are used to verify the usefulness of the hybrid approach.
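The epsilon-setting idea, tying epsilon to the standard deviation of the training residuals, can be sketched as follows; the scaling constant and the toy predictions are assumptions for illustration, and the repeated (RSVR) refit loop is omitted:

```python
import statistics

def epsilon_from_residuals(y_true, y_pred, c=1.0):
    """Set the epsilon of the insensitive loss from the spread of training
    residuals; the scaling constant c is an illustrative choice."""
    residuals = [yt - yp for yt, yp in zip(y_true, y_pred)]
    return c * statistics.stdev(residuals)

# toy targets and the predictions of some provisional SVR fit
y_true = [1.0, 2.0, 3.0, 4.0, 5.0]
y_pred = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(epsilon_from_residuals(y_true, y_pred), 3))
```

In the full hybrid approach this estimate would be fed back into a refit, repeating until the epsilon tube is consistent with the residual spread, while the CA clustering fixes the Gaussian kernel width.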

  12. Influence plots for LASSO

    DOE PAGES

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    2016-11-22

With many predictors in regression, fitting the full model can induce multicollinearity problems. The Least Absolute Shrinkage and Selection Operator (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. This paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results, serving as a complement to other regression diagnostic techniques in the LASSO setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and on model selection. Lastly, we provide two examples to illustrate the methods.
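The influence of individual observations on LASSO estimates can be probed with a simple leave-one-out calculation, in the spirit of (but not identical to) the influence plots described above; the data, penalty value, and influence measure here are synthetic and illustrative.

```python
# Leave-one-out influence on LASSO coefficients: how far the coefficient
# vector moves when each observation is deleted in turn.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 60, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]              # sparse true effects
y = X @ beta + rng.normal(0, 0.5, n)

full = Lasso(alpha=0.1).fit(X, y)

influence = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    fit_i = Lasso(alpha=0.1).fit(X[keep], y[keep])
    influence[i] = np.linalg.norm(fit_i.coef_ - full.coef_)

most_influential = int(np.argmax(influence))
```

Plotting `influence` against observation index (or against the change in the set of selected variables) gives a rough analogue of the diagnostic plot the paper proposes.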

  13. Influence plots for LASSO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

With many predictors in regression, fitting the full model can induce multicollinearity problems. The Least Absolute Shrinkage and Selection Operator (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. This paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results, serving as a complement to other regression diagnostic techniques in the LASSO setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and on model selection. Lastly, we provide two examples to illustrate the methods.

  14. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends toward a decreased b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.
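The link between mean event size and the Gutenberg-Richter b value that the abstract relies on can be illustrated with the standard maximum-likelihood b estimate (Aki's formula), which decreases as the mean magnitude above the cutoff increases. The catalogue below is synthetic, not the Kanto data.

```python
# Aki's maximum-likelihood b-value estimate from magnitudes above a cutoff:
# b_hat = log10(e) / (mean(M) - M_min). Larger mean event size -> smaller b.
import numpy as np

rng = np.random.default_rng(2)
m_min = 2.0
b_true = 1.0
beta = b_true * np.log(10)
mags = m_min + rng.exponential(1 / beta, 5000)   # Gutenberg-Richter magnitudes

b_hat = np.log10(np.e) / (mags.mean() - m_min)
```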

  15. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

Identifiability analysis offers concepts and approaches to diagnose why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. It typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within the existing identifiability literature, and enumerates many of the approaches discussed in that literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results, which are too often overlooked because identifiability analysis is neglected.
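A toy example of one symptom listed above (different optimiser initialisations returning wildly different parameter values): in the model y = a·b·x only the product a·b is identifiable, so two starts converge to near-identical fits with very different (a, b) pairs.

```python
# Structural non-identifiability: only the product a*b enters the model,
# so least squares recovers a*b but not a and b individually.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 1, 50)
y = 6.0 * x                        # generated with a*b = 6

def residuals(theta):
    a, b = theta
    return a * b * x - y

fit1 = least_squares(residuals, x0=[1.0, 1.0])
fit2 = least_squares(residuals, x0=[10.0, 0.1])

prod1 = fit1.x[0] * fit1.x[1]      # both products match the data...
prod2 = fit2.x[0] * fit2.x[1]      # ...but the (a, b) pairs differ
```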

  16. Assessing factors that influence deviations between measured and calculated reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Rodny, Marek; Nolz, Reinhard

    2017-04-01

Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to quantify. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) from meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies have documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET values. However, elaborating the causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0 values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content at different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005).
Deviations between both datasets were calculated as ET0_lys-ET0_ref and separated into positive and negative values. For further interpretation, we calculated daily sums of these values. The respective daily difference (positive or negative) served as independent variable (x) in linear correlation with a selected parameter as dependent variable (y). Quality of correlation was evaluated by means of coefficients of determination (R2). When ET0_lys > ET0_ref, the differences were only weakly correlated with the selected parameters. Hence, the evaluation of the causal processes leading to underestimation of measured hourly ET0 seems to require a more rigorous approach. On the other hand, when ET0_lys < ET0_ref, the differences correlated considerably with the meteorological parameters and related system parameters. Interpreting the particular correlations in detail indicated different (or varying) surface characteristics between the irrigated lysimeter and the nearby (non-irrigated) meteorological station.
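The correlation step described above, regressing daily deviations on a candidate meteorological driver and scoring with R2, can be sketched as follows; the wind and deviation series are synthetic stand-ins for the study's data.

```python
# Linear correlation of daily ET0 deviations with one candidate driver,
# scored by the coefficient of determination R^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
wind = rng.uniform(0.5, 5.0, 120)                     # daily mean wind speed, m/s
deviation = 0.08 * wind + rng.normal(0, 0.05, 120)    # ET0_lys - ET0_ref, mm/day

res = stats.linregress(wind, deviation)
r_squared = res.rvalue ** 2
```

Repeating this for each pre-selected parameter, separately for positive and negative deviations, reproduces the screening procedure the abstract describes.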

  17. Comparison of three models to estimate breeding values for percentage of loin intramuscular fat in Duroc swine.

    PubMed

    Newcom, D W; Baas, T J; Stalder, K J; Schwab, C R

    2005-04-01

    Three selection models were evaluated to compare selection candidate rankings based on EBV and to evaluate subsequent effects of model-derived EBV on the selection differential and expected genetic response in the population. Data were collected from carcass- and ultrasound-derived estimates of loin i.m. fat percent (IMF) in a population of Duroc swine under selection to increase IMF. The models compared were Model 1, a two-trait animal model used in the selection experiment that included ultrasound IMF from all pigs scanned and carcass IMF from pigs slaughtered to estimate breeding values for both carcass (C1) and ultrasound IMF (U1); Model 2, a single-trait animal model that included ultrasound IMF values on all pigs scanned to estimate breeding values for ultrasound IMF (U2); and Model 3, a multiple-trait animal model including carcass IMF from slaughtered pigs and the first three principal components from a total of 10 image parameters averaged across four longitudinal ultrasound images to estimate breeding values for carcass IMF (C3). Rank correlations between breeding value estimates for U1 and C1, U1 and U2, and C1 and C3 were 0.95, 0.97, and 0.92, respectively. Other rank correlations were 0.86 or less. In the selection experiment, approximately the top 10% of boars and 50% of gilts were selected. Selection differentials for pigs in Generation 3 were greatest when ranking pigs based on C1, followed by U1, U2, and C3. In addition, selection differential and estimated response were evaluated when simulating selection of the top 1, 5, and 10% of sires and 50% of dams. Results of this analysis indicated the greatest selection differential was for selection based on C1. The greatest loss in selection differential was found for selection based on C3 when selecting the top 10 and 1% of boars and 50% of gilts. 
The loss in estimated response when selecting varying percentages of boars and the top 50% of gilts was greatest when selection was based on C3 (16.0 to 25.8%) and least for selection based on U1 (1.3 to 10.9%). Estimated genetic change from selection based on carcass IMF was greater than selection based on ultrasound IMF. Results show that selection based on a combination of ultrasonically predicted IMF and sib carcass IMF produced the greatest selection differentials and should lead to the greatest genetic change.
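The ranking comparisons above rest on rank correlations between model-derived EBV and on selection differentials for a selected fraction; both computations can be sketched with simulated breeding values (not the Duroc data).

```python
# Spearman rank correlation between EBV from two models, plus the selection
# differential for the top 10% of animals under model 1.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
ebv_model1 = rng.normal(size=100)
ebv_model2 = ebv_model1 + rng.normal(0, 0.3, 100)   # correlated second model

rho, pval = spearmanr(ebv_model1, ebv_model2)

# Selection differential: mean EBV of the selected top 10% minus the
# population mean.
cutoff = np.quantile(ebv_model1, 0.9)
sel_diff = ebv_model1[ebv_model1 >= cutoff].mean() - ebv_model1.mean()
```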

  18. Spatial parameters of walking gait and footedness.

    PubMed

    Zverev, Y P

    2006-01-01

The present study was undertaken to assess whether footedness affects selected spatial and angular parameters of able-bodied gait by evaluating footprints of young adults. A total of 112 males and 93 females were selected from among students and staff members of the University of Malawi using a simple random sampling method. Footedness of subjects was assessed by the Waterloo Footedness Questionnaire Revised. Gait at natural speed was recorded using the footprint method. The following spatial parameters of gait were derived from the inked footprint sequences of subjects: step and stride lengths, gait angle and base of gait. The anthropometric measurements taken were weight, height, leg and foot length, foot breadth, shoulder width, and hip and waist circumferences. The prevalence of right-, left- and mixed-footedness in the whole sample of young Malawian adults was 81%, 8.3% and 10.7%, respectively. One-way analysis of variance did not reveal a statistically significant difference between footedness categories in the mean values of anthropometric measurements (p > 0.05 for all variables). Gender differences in step and stride length values were not statistically significant, and correcting these variables for stature did not change this trend. Males had significantly broader steps than females, and normalized values of base of gait showed a similar gender difference. The group means of step length and normalized step length of the right and left feet were similar for both males and females. There was a significant side difference in the gait angle in both gender groups, with higher mean values on the left side than on the right (t = 2.64, p < 0.05 for males, and t = 2.78, p < 0.05 for females).
One-way analysis of variance did not demonstrate significant difference between footedness categories in the mean values of step length, gait angle, bilateral differences in step length and gait angle, stride length, gait base and normalized gait variables of male and female volunteers (p > 0.05 for all variables). The present study demonstrated that footedness does not affect spatial and angular parameters of walking gait.
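The one-way ANOVA used above can be reproduced with scipy's `f_oneway`; group sizes below loosely mirror the reported footedness prevalences, and step lengths are simulated under the null hypothesis of no footedness effect.

```python
# One-way ANOVA of step length across footedness categories.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
right = rng.normal(65, 5, 81)    # step length (cm), right-footed (simulated)
left = rng.normal(65, 5, 8)      # left-footed
mixed = rng.normal(65, 5, 11)    # mixed-footed

f_stat, p_value = f_oneway(right, left, mixed)
```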

  19. Summary of water-quality data for City of Albuquerque drinking-water supply wells, 1988-97

    USGS Publications Warehouse

    Bexfield, Laura M.; Lindberg, William E.; Anderholm, Scott K.

    1999-01-01

The City of Albuquerque has collected and analyzed more than 5,000 water-quality samples from 113 water-supply wells in the Albuquerque area, including many drinking-water supply wells, since May of 1988. As a result, a large water-quality database has been compiled that includes data for major ions, nutrients, trace elements, carbon, volatile organic compounds, radiological constituents, and bacteria. These data are intended to improve the understanding and management of the ground-water resources of the region, rather than to demonstrate compliance with Federal and State drinking-water standards. This report gives summary statistics for selected physical properties and chemical constituents for ground water from wells used by the City of Albuquerque for drinking-water supply between 1988 and 1997. Maps are provided to show the general spatial distribution of selected parameters and water types around the region. Although the values of some parameters vary substantially across the city, median values for all parameters included in this report are less than their respective maximum contaminant levels in each drinking-water supply well. The dominant water types are sodium plus potassium / carbonate plus bicarbonate in the western part of the city and calcium / carbonate plus bicarbonate in the eastern part of the city.
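Summary statistics of the kind tabulated in the report reduce to per-well descriptive statistics compared against a contaminant limit; a pandas sketch with invented concentrations and an illustrative (not official) maximum contaminant level.

```python
# Per-well summary statistics for one parameter, compared to an
# illustrative maximum contaminant level (MCL). All values are made up.
import pandas as pd

data = pd.DataFrame({
    "well": ["A", "A", "B", "B", "C", "C"],
    "parameter": ["arsenic"] * 6,
    "value_ug_per_L": [2.1, 2.4, 7.8, 8.1, 4.0, 3.9],
})

summary = data.groupby("well")["value_ug_per_L"].agg(["median", "min", "max"])
mcl_arsenic = 10.0   # illustrative MCL, ug/L (hypothetical value)
all_below_mcl = bool((summary["median"] < mcl_arsenic).all())
```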

  20. Manufacturing Feasibility and Forming Properties of Cu-4Sn in Selective Laser Melting.

    PubMed

    Mao, Zhongfa; Zhang, David Z; Wei, Peitang; Zhang, Kaifei

    2017-03-24

Copper alloys, combined with selective laser melting (SLM) technology, have attracted increasing attention in the aerospace engineering, automobile, and medical fields. However, copper alloys are difficult to form by SLM owing to their low laser absorption and excellent thermal conductivity. It is, therefore, necessary to explore a suitable copper alloy for SLM. In this research, the manufacturing feasibility and forming properties of Cu-4Sn in SLM were investigated through a systematic experimental approach. Single-track experiments were used to narrow down the processing parameter windows. A Greco-Latin square design with orthogonal parameter arrays was employed to control the forming quality of specimens. Analysis of variance was applied to establish statistical relationships describing the effects of different processing parameters (i.e., laser power, scanning speed, and hatch space) on the relative density (RD) and Vickers hardness of specimens. Cu-4Sn specimens were successfully manufactured by SLM for the first time, and both RD and Vickers hardness were mainly determined by the laser power. The maximum RD exceeded 93% of theoretical density and the maximum Vickers hardness reached 118 HV 0.3/5. The best tensile strength of 316-320 MPa is inferior to that of pressure-processed Cu-4Sn and can be improved further by reducing defects.

  1. Global optimization and reflectivity data fitting for x-ray multilayer mirrors by means of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Pareschi, Giovanni

    2001-01-01

The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thicknesses, densities, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured, as in our case. The application of global optimization methods to fit multilayer reflectivity data is presented. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual, and the population is a collection of individuals. Each generation is built from the parent generation by applying operators (e.g. selection, crossover, mutation) to its members. Selection pressure drives the population to include 'good' individuals. After a large number of generations, the best individuals approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C multilayers recorded at ESRF BM5 are presented. This method could also be applied to help design multilayers optimized for a target application, such as astronomical grazing-incidence hard x-ray telescopes.
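A bare-bones genetic algorithm of the kind described (selection, blend crossover, Gaussian mutation over a population of parameter sets), minimising a toy quadratic misfit in place of the reflectivity chi-square; population size, mutation scale, and the target parameters are all illustrative.

```python
# Elitist genetic algorithm: keep the best half, breed children by blend
# crossover between random pairs, and perturb children by Gaussian mutation.
import numpy as np

rng = np.random.default_rng(6)

def misfit(theta):                        # stand-in for a reflectivity chi^2
    return np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2, axis=-1)

pop = rng.uniform(-5, 5, (40, 3))         # population of candidate parameter sets
for generation in range(200):
    order = np.argsort(misfit(pop))
    parents = pop[order[:20]]             # selection: keep the best half
    mates = parents[rng.permutation(20)]
    alpha = rng.uniform(size=(20, 1))
    children = alpha * parents + (1 - alpha) * mates   # blend crossover
    children += rng.normal(0, 0.05, children.shape)    # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(misfit(pop))]
```

Because parents are carried over unchanged, the best-so-far misfit never worsens, and after enough generations the population clusters around the optimum.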

  2. Directional selection in temporally replicated studies is remarkably consistent.

    PubMed

    Morrissey, Michael B; Hadfield, Jarrod D

    2012-02-01

Temporal variation in selection is a fundamental determinant of evolutionary outcomes. A recent paper presented a synthetic analysis of temporal variation in selection in natural populations. The authors concluded that there is substantial variation in the strength and direction of selection over time, but acknowledged that sampling error would result in estimates of selection that were more variable than the true values. We reanalyze their dataset using techniques that account for the necessary effect of sampling error in inflating apparent levels of variation, and show that directional selection is remarkably constant over time, both in magnitude and direction. Thus it cannot be claimed that the available data support the existence of substantial temporal heterogeneity in selection. Nonetheless, we conjecture that temporal variation in selection could be important, but that there are good reasons why it may not appear in the available data. These new analyses highlight the importance of applying techniques that estimate parameters of the distribution of selection, rather than parameters of the distribution of estimated selection (which will reflect both sampling error and "real" variation in selection); indeed, despite the availability of methods for the former, focus on the latter has been common in synthetic reviews of selection in nature, and can lead to serious misinterpretations.
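The central statistical point, that the variance of estimated selection gradients equals the variance of the true gradients plus the sampling-error variance, can be seen numerically with simulated gradients (values are illustrative):

```python
# Estimated gradients scatter far more than the true ones when each
# estimate carries sampling error: Var(est) = Var(true) + error variance.
import numpy as np

rng = np.random.default_rng(7)
true_gradients = rng.normal(0.2, 0.05, 500)             # modest real variation
estimates = true_gradients + rng.normal(0, 0.15, 500)   # large sampling error

var_true = true_gradients.var()
var_est = estimates.var()
```

Summarising the spread of `estimates` directly, as the criticised reviews do, badly overstates temporal heterogeneity; the reanalysis instead estimates the distribution of `true_gradients` after subtracting the error component.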

  3. Assessing the relative importance of parameter and forcing uncertainty and their interactions in conceptual hydrological model simulations

    NASA Astrophysics Data System (ADS)

    Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.

    2016-11-01

    Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make including model and criteria selection can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
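The variance decomposition described above can be sketched for a synthetic forcing × parameter grid; with one simulation per cell, the classical two-way sums of squares (forcing, parameter, interaction) add up exactly to the total. Effect sizes below are invented.

```python
# Two-way ANOVA decomposition of simulated streamflow into forcing,
# parameter-identification, and interaction contributions.
import numpy as np

rng = np.random.default_rng(8)
nf, np_ = 20, 15                            # forcing ensembles x parameter sets
forcing_eff = rng.normal(0, 1.0, (nf, 1))
param_eff = rng.normal(0, 0.5, (1, np_))
interaction = rng.normal(0, 0.2, (nf, np_))
flow = 10.0 + forcing_eff + param_eff + interaction

grand = flow.mean()
ss_forcing = np_ * ((flow.mean(axis=1) - grand) ** 2).sum()
ss_param = nf * ((flow.mean(axis=0) - grand) ** 2).sum()
ss_inter = ((flow - flow.mean(axis=1, keepdims=True)
                  - flow.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ss_total = ((flow - grand) ** 2).sum()
```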

  4. Establishment of baseline haematology and biochemistry parameters in wild adult African penguins (Spheniscus demersus).

    PubMed

    Parsons, Nola J; Schaefer, Adam M; van der Spuy, Stephen D; Gous, Tertius A

    2015-03-25

    There are few publications on the clinical haematology and biochemistry of African penguins (Spheniscus demersus) and these are based on captive populations. Baseline haematology and serum biochemistry parameters were analysed from 108 blood samples from wild, adult African penguins. Samples were collected from the breeding range of the African penguin in South Africa and the results were compared between breeding region and sex. The haematological parameters that were measured were: haematocrit, haemoglobin, red cell count and white cell count. The biochemical parameters that were measured were: sodium, potassium, chloride, calcium, inorganic phosphate, creatinine, cholesterol, serum glucose, uric acid, bile acid, total serum protein, albumin, aspartate transaminase and creatine kinase. All samples were serologically negative for selected avian diseases and no blood parasites were detected. No haemolysis was present in any of the analysed samples. Male African penguins were larger and heavier than females, with higher haematocrit, haemoglobin and red cell count values, but lower calcium and phosphate values. African penguins in the Eastern Cape were heavier than those in the Western Cape, with lower white cell count and globulin values and a higher albumin/globulin ratio, possibly indicating that birds are in a poorer condition in the Western Cape. Results were also compared between multiple penguin species and with African penguins in captivity. These values for healthy, wild, adult penguins can be used for future health and disease assessments.

  5. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

Reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, which makes regional analysis of this parameter important. The ET0 process is affected by several meteorological parameters, such as wind speed, solar radiation, temperature and relative humidity, so the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution type of the parameters affecting ET0 values can affect the distribution of reference evapotranspiration.
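The PPCC statistic is the correlation coefficient from a probability plot, and scipy's `probplot` returns it directly for a chosen distribution. The annual ET0 values below are drawn from a gamma law as a rough stand-in for Pearson type III behaviour; shape, scale, and sample size are illustrative.

```python
# PPCC-style comparison: correlation of ordered data with theoretical
# quantiles under two candidate distributions (higher r = better fit).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
et0 = rng.gamma(shape=8.0, scale=150.0, size=55)   # annual ET0 (mm), synthetic

(_, _), (_, _, r_norm) = stats.probplot(et0, dist="norm", fit=True)
(_, _), (_, _, r_gamma) = stats.probplot(et0, dist=stats.gamma(8.0), fit=True)
```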

  6. Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner

    PubMed Central

    Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars

    2012-01-01

Because surface fitting algorithms exert a considerable fuzzy influence on the mathematical features of kinetic energy distribution, the mechanism relating them under different external parameter conditions must be quantitatively analyzed. The kinetic energy value at each selected representative coordinate point is determined by calculating kinetic energy parameters, and several typical complicated-surface fitting algorithms are then applied to construct micro kinetic energy distribution surface models of the objective turbulence runner from the obtained values. Based on newly proposed mathematical features, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method. The value change tendencies of the kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism relating the surface fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be analyzed quantitatively in detail. This yields conclusions on the inherent turbulence kinetic energy distribution mechanism and its mathematical relations, and provides a basis for further quantitative study of turbulence energy. PMID:23213287

  7. Real Time Oil Reservoir Evaluation Using Nanotechnology

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor)

    2011-01-01

    A method and system for evaluating status and response of a mineral-producing field (e.g., oil and/or gas) by monitoring selected chemical and physical properties in or adjacent to a wellsite headspace. Nanotechnology sensors and other sensors are provided for one or more underground (fluid) mineral-producing wellsites to determine presence/absence of each of two or more target molecules in the fluid, relative humidity, temperature and/or fluid pressure adjacent to the wellsite and flow direction and flow velocity for the fluid. A nanosensor measures an electrical parameter value and estimates a corresponding environmental parameter value, such as water content or hydrocarbon content. The system is small enough to be located down-hole in each mineral-producing horizon for the wellsite.

  8. Monte Carlo simulation of elongating metallic nanowires in the presence of surfactants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gimenez, M. Cecilia; Reinaudi, Luis, E-mail: luis.reinaudi@unc.edu.ar; Leiva, Ezequiel P. M.

    2015-12-28

Nanowires of different metals undergoing elongation were studied by means of canonical Monte Carlo simulations, with the embedded atom method representing the interatomic potentials. The presence of a surfactant medium was emulated by the introduction of an additional stabilization energy, represented by a parameter Q. Several values of the parameter Q and several temperatures were analyzed. In general, it was observed for all studied metals that, as Q increases, there is greater elongation before the nanowire breaks. In the case of silver, linear monatomic chains several atoms long formed at intermediate values of Q and low temperatures. Similar observations were made for silver-gold alloys when the medium interacted selectively with Ag.
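A minimal canonical (Metropolis) Monte Carlo loop of the kind used in the study, with a Lennard-Jones toy potential in place of the embedded atom method and the surfactant parameter Q modelled as a constant per-atom stabilisation energy; all units and values are illustrative.

```python
# Metropolis Monte Carlo in the canonical ensemble for a small atomic
# cluster; trial moves displace one atom, accepted with the Boltzmann rule.
import numpy as np

rng = np.random.default_rng(10)
kT = 0.1             # temperature in reduced units (assumed)
Q = 0.05             # surfactant stabilisation energy per atom (assumed)
pos = rng.uniform(0, 5, (20, 3))

def energy(p):
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), k=1)
    r = np.clip(d[iu], 0.8, None)             # avoid singular overlaps
    lj = 4 * ((1 / r) ** 12 - (1 / r) ** 6)   # Lennard-Jones pair energy
    return lj.sum() - Q * len(p)

e = energy(pos)
accepted = 0
for step in range(2000):
    trial = pos.copy()
    i = rng.integers(len(pos))
    trial[i] += rng.normal(0, 0.1, 3)
    e_trial = energy(trial)
    # Metropolis criterion: always accept downhill, uphill with prob e^{-dE/kT}.
    if e_trial < e or rng.random() < np.exp(-(e_trial - e) / kT):
        pos, e = trial, e_trial
        accepted += 1
```

In the actual simulations, Q enters only for surface atoms exposed to the medium, which is what biases the wire toward longer elongation as Q grows.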

  9. Breeding of Acrocomia aculeata using genetic diversity parameters and correlations to select accessions based on vegetative, phenological, and reproductive characteristics.

    PubMed

    Coser, S M; Motoike, S Y; Corrêa, T R; Pires, T P; Resende, M D V

    2016-10-17

    Macaw palm (Acrocomia aculeata) is a promising species for use in biofuel production, and establishing breeding programs is important for the development of commercial plantations. The aim of the present study was to analyze genetic diversity, verify correlations between traits, estimate genetic parameters, and select different accessions of A. aculeata in the Macaw Palm Germplasm Bank located in Universidade Federal de Viçosa, to develop a breeding program for this species. Accessions were selected based on precocity (PREC), total spathe (TS), diameter at breast height (DBH), height of the first spathe (HFS), and canopy area (CA). The traits were evaluated in 52 accessions during the 2012/2013 season and analyzed by restricted estimation maximum likelihood/best linear unbiased predictor procedures. Genetic diversity resulted in the formation of four groups by Tocher's clustering method. The correlation analysis showed it was possible to have indirect and early selection for the traits PREC and DBH. Estimated genetic parameters strengthened the genetic variability verified by cluster analysis. Narrow-sense heritability was classified as moderate (PREC, TS, and CA) to high (HFS and DBH), resulting in strong genetic control of the traits and success in obtaining genetic gains by selection. Accuracy values were classified as moderate (PREC and CA) to high (TS, HFS, and DBH), reinforcing the success of the selection process. Selection of accessions for PREC, TS, and HFS by the rank-average method permits selection gains of over 100%, emphasizing the successful use of the accessions in breeding programs and obtaining superior genotypes for commercial plantations.

  10. Clinical trial allocation in multinational pharmaceutical companies - a qualitative study on influential factors.

    PubMed

    Dombernowsky, Tilde; Haedersdal, Merete; Lassen, Ulrik; Thomsen, Simon F

    2017-06-01

    Clinical trial allocation in multinational pharmaceutical companies includes country selection and site selection. With emphasis on site selection, the overall aim of this study was to examine which factors pharmaceutical companies value most when allocating clinical trials. The specific aims were (1) to identify key decision makers during country and site selection, respectively, (2) to evaluate by which parameters subsidiaries are primarily assessed by headquarters with regard to conducting clinical trials, and (3) to evaluate which site-related qualities companies value most when selecting trial sites. Eleven semistructured interviews were conducted among employees engaged in trial allocation at 11 pharmaceutical companies. The interviews were analyzed by deductive content analysis, which included coding of data to a categorization matrix containing categories of site-related qualities. The results suggest that headquarters and regional departments are key decision makers during country selection, whereas subsidiaries decide on site selection. Study participants argued that headquarters primarily value timely patient recruitment and quality of data when assessing subsidiaries. The site-related qualities most commonly emphasized during interviews were study population availability, timely patient recruitment, resources at the site, and site personnel's interest and commitment. Costs of running the trials were described as less important. Site personnel experience in conducting trials was described as valuable but not imperative. In conclusion, multinational pharmaceutical companies consider recruitment-related factors as crucial when allocating clinical trials. Quality of data and site personnel's interest and commitment are also essential, whereas costs seem less important. While valued, site personnel experience in conducting clinical trials is not imperative.

  11. Modeling, Dynamics, Bifurcation Behavior and Stability Analysis of a DC-DC Boost Converter in Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Zhioua, M.; El Aroudi, A.; Belghith, S.; Bosque-Moncusí, J. M.; Giral, R.; Al Hosani, K.; Al-Numay, M.

A study of a DC-DC boost converter fed by a photovoltaic (PV) generator and supplying a constant voltage load is presented. The input port of the converter is controlled using fixed-frequency pulse width modulation (PWM) based on the loss-free resistor (LFR) concept, whose parameter is selected to force the PV generator to work at its maximum power point. Under this control strategy, it is shown that the system can exhibit complex nonlinear behaviors for certain ranges of parameter values. First, using the nonlinear models of the converter and the PV source, the dynamics of the system are explored in terms of some of its parameters, such as the proportional gain of the controller and the output DC bus voltage. To present a comprehensive approach to the overall system behavior under parameter changes, a series of bifurcation diagrams are computed from the circuit-level switched model and from a simplified model, both implemented in PSIM© software, showing remarkable agreement. These diagrams show that the first instability of the system's period-1 orbit, as a primary parameter is varied, is a smooth period-doubling bifurcation, and that the nonlinearity of the PV generator is irrelevant for predicting this phenomenon. Different bifurcation scenarios can take place for the resulting period-2 subharmonic regime depending on a secondary bifurcation parameter. The boundary in parameter space between the desired period-1 orbit and the subharmonic oscillation resulting from period doubling is obtained by calculating the eigenvalues of the monodromy matrix of the simplified model. The results from this model have been validated with time-domain numerical simulation using the circuit-level switched model and also experimentally on a laboratory prototype. This study can help in selecting circuit parameter values that delimit the region of period-1 operation of the converter, which is of practical interest in PV systems.
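The period-doubling scenario described in this abstract can be illustrated with a generic discrete-time map. The sketch below is only an analogy, using the logistic map rather than the converter's switched model: it sweeps a parameter, discards the transient, and measures the period of the steady-state orbit, which is how bifurcation diagrams of this kind are built.

```python
# Illustrative sketch of detecting period-doubling by parameter sweep.
# The logistic map stands in for a generic discrete-time system here;
# it is NOT the converter model, which requires circuit-level simulation.

def orbit_points(r, x0=0.5, transient=500, keep=8):
    """Iterate x -> r*x*(1-x), discard the transient, return distinct orbit points."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    pts = []
    for _ in range(keep):
        x = r * x * (1 - x)
        pts.append(round(x, 6))
    return sorted(set(pts))

def period(r):
    """Estimated period of the steady-state orbit at parameter r."""
    return len(orbit_points(r))

# Period-1 before the first period-doubling, period-2 after it, period-4 later.
print(period(2.8), period(3.2), period(3.5))
```

Scanning `period(r)` over a fine grid of `r` values is exactly the structure of a one-parameter bifurcation diagram: the parameter at which the period first jumps from 1 to 2 marks the smooth period-doubling boundary.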

  12. The effects of intraspecific competition and stabilizing selection on a polygenic trait.

    PubMed Central

    Bürger, Reinhard; Gimelfarb, Alexander

    2004-01-01

    The equilibrium properties of an additive multilocus model of a quantitative trait under frequency- and density-dependent selection are investigated. Two opposing evolutionary forces are assumed to act: (i) stabilizing selection on the trait, which favors genotypes with an intermediate phenotype, and (ii) intraspecific competition mediated by that trait, which favors genotypes whose effect on the trait deviates most from that of the prevailing genotypes. Accordingly, fitnesses of genotypes have a frequency-independent component describing stabilizing selection and a frequency- and density-dependent component modeling competition. We study how the equilibrium structure, in particular, number, degree of polymorphism, and genetic variance of stable equilibria, is affected by the strength of frequency dependence, and what role the number of loci, the amount of recombination, and the demographic parameters play. To this end, we employ a statistical and numerical approach, complemented by analytical results, and explore how the equilibrium properties averaged over a large number of genetic systems with a given number of loci and average amount of recombination depend on the ecological and demographic parameters. We identify two parameter regions with a transitory region in between, in which the equilibrium properties of genetic systems are distinctively different. These regions depend on the strength of frequency dependence relative to pure stabilizing selection and on the demographic parameters, but not on the number of loci or the amount of recombination. We further study the shape of the fitness function observed at equilibrium and the extent to which the dynamics in this model are adaptive, and we present examples of equilibrium distributions of genotypic values under strong frequency dependence. Consequences for the maintenance of genetic variation, the detection of disruptive selection, and models of sympatric speciation are discussed. PMID:15280253

  13. Articular cartilage degeneration classification by means of high-frequency ultrasound.

    PubMed

    Männicke, N; Schöne, M; Oelze, M; Raum, K

    2014-10-01

To date, only single ultrasound parameters have been considered in statistical analyses characterizing osteoarthritic changes in articular cartilage, and the potential benefit of using parameter combinations remains unclear. Therefore, the aim of this work was to perform feature selection and classification of a Mankin subset score (i.e., cartilage surface and cell sub-scores) using ultrasound-based parameter pairs, and to investigate both classification accuracy and sensitivity towards different degeneration stages. 40 punch biopsies of human cartilage were previously scanned ex vivo with a 40-MHz transducer. Ultrasound-based surface parameters, as well as backscatter and envelope statistics parameters, were available. Logistic regression was performed with each unique US parameter pair as predictor and different degeneration stages as response variables. The best ultrasound-based parameter pair for each Mankin subset score value was assessed by highest classification accuracy and utilized in receiver operating characteristic (ROC) analysis. The classifications discriminating between early degenerations yielded area under the ROC curve (AUC) values of 0.94-0.99 (mean ± SD: 0.97 ± 0.03). In contrast, classifications among higher Mankin subset scores resulted in lower AUC values: 0.75-0.91 (mean ± SD: 0.84 ± 0.08). Variable sensitivities of the different ultrasound features were observed with respect to different degeneration stages. Our results strongly suggest that combinations of high-frequency ultrasound-based parameters have the potential to characterize different, particularly very early, degeneration stages of hyaline cartilage. Variable sensitivities towards different degeneration stages suggest that a concurrent estimation of multiple ultrasound-based parameters is diagnostically valuable. In-vivo application of the present findings is conceivable in both minimally invasive arthroscopic ultrasound and high-frequency transcutaneous ultrasound.
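The ROC step in this abstract can be sketched in a few lines: given classifier scores for degenerated (1) versus healthy (0) samples, the AUC equals the Mann-Whitney probability that a positive sample outranks a negative one. The scores and labels below are synthetic stand-ins, not the cartilage biopsy measurements.

```python
# Minimal AUC computation via the Mann-Whitney rank formulation.

def roc_auc(scores, labels):
    """AUC = P(score_pos > score_neg), counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: higher scores tend to indicate degenerated cartilage.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,   0,   0]
print(roc_auc(scores, labels))
```

Running this for every unique parameter pair and keeping the pair with the highest AUC mirrors the feature-pair selection described above.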

  14. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE PAGES

    Rosewater, David; Ferreira, Summer; Schoenwald, David; ...

    2018-01-25

Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
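As a hedged sketch of the idea (the paper's actual model formulations and optimizer are not given here), a minimal SoC forecasting model is plain coulomb counting with a single efficiency parameter, and that parameter can be fit in closed form from operational data:

```python
# Toy SoC forecaster plus closed-form least-squares fit of its one parameter.
# A single efficiency applied to both charge and discharge is a simplifying
# assumption; real BESS models separate the two and add self-discharge terms.

def forecast_soc(soc0, powers_kw, dt_h, capacity_kwh, eta):
    """Propagate SoC over a power schedule (charge > 0, discharge < 0)."""
    soc = soc0
    trace = [soc]
    for p in powers_kw:
        soc += eta * p * dt_h / capacity_kwh
        trace.append(soc)
    return trace

def fit_eta(soc_meas, powers_kw, dt_h, capacity_kwh):
    """Least-squares eta from measured SoC steps: d_soc = eta * (p * dt / E)."""
    xs = [p * dt_h / capacity_kwh for p in powers_kw]
    ys = [b - a for a, b in zip(soc_meas, soc_meas[1:])]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

powers = [10.0, -5.0, 8.0]                      # kW over 15-minute intervals
trace = forecast_soc(0.5, powers, 0.25, 100.0, eta=0.95)
print(fit_eta(trace, powers, 0.25, 100.0))      # recovers eta close to 0.95
```

The same structure scales to the paper's theme: reformulate the model so its parameters enter linearly (or at least smoothly), then select parameter values by fitting against recorded operational data.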

  15. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosewater, David; Ferreira, Summer; Schoenwald, David

Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.

  16. Clinical evaluation of selected Yogic procedures in individuals with low back pain

    PubMed Central

    Pushpika Attanayake, A. M.; Somarathna, K. I. W. K.; Vyas, G. H.; Dash, S. C.

    2010-01-01

The present study was conducted to evaluate selected yogic procedures in individuals with low back pain. The study was motivated by the observation that back pain is one of the commonest presentations in clinical practice; it has been estimated that more than three-quarters of the world's population experience back pain at some time in their lives. Twelve patients were selected and randomly divided into two groups: group A (yogic group) and group B (control group). Advice on lifestyle and diet was given to all patients. The effect of the therapy was assessed subjectively and objectively. Scores for the yogic group and the control group were analyzed individually before and after treatment, and the values were compared using standard statistical protocols. Yogic intervention revealed 79% relief in both subjective and objective parameters (7 out of 14 parameters showed statistically highly significant results, P < 0.01, while 4 showed significant results, P < 0.05). Comparison of the yogic group with the control group showed 79% relief in both subjective and objective parameters (a total of 6 out of 14 parameters showed statistically highly significant results, P < 0.01, while 5 showed significant results, P < 0.05). PMID:22131719

  17. Pareto-Zipf law in growing systems with multiplicative interactions

    NASA Astrophysics Data System (ADS)

    Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi

    2018-06-01

    Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.

  18. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    PubMed

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

Model selection for the support vector machine (SVM), involving the selection of kernel and margin parameter values, is usually time-consuming; it greatly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM (CSSVM) kernel parameter a and margin parameter C, to improve the training efficiency of the SVM model. An experiment classifying an AVIRIS image of the Indian Pines site in the USA was then performed to test the novel CSSVM against a traditional SVM classifier tuned by general grid-search cross-validation (GSSVM) for comparison. Evaluation indexes of both CSSVM and GSSVM, including SVM model training time, classification overall accuracy (OA), and Kappa index, were analyzed quantitatively. It is demonstrated that the OA of CSSVM on the test samples and the whole image is 85.1% and 81.58%, respectively, with differences from GSSVM within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with differences from GSSVM within 0.001; and the ratio of model training time of CSSVM to GSSVM is between 1/6 and 1/10. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
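The grid-search baseline (GSSVM) that CSSVM is compared against can be sketched as follows. The scoring function below is a synthetic stand-in for a real cross-validated SVM accuracy (which would come from an SVM library); only the exhaustive search over the (C, kernel-parameter) grid is the point.

```python
# Exhaustive grid search over SVM-style hyperparameters C and gamma.
# cv_score is a synthetic surrogate, peaked near C = 10, gamma = 0.1;
# in practice it would be a k-fold cross-validated classification accuracy.

import math
import itertools

def cv_score(C, gamma):
    """Stand-in for cross-validated accuracy on a log-scaled parameter grid."""
    return math.exp(-((math.log10(C) - 1) ** 2 + (math.log10(gamma) + 1) ** 2))

def grid_search(cs, gammas, score):
    """Return the (C, gamma) pair with the highest score."""
    return max(itertools.product(cs, gammas), key=lambda p: score(*p))

cs = [0.1, 1, 10, 100]
gammas = [0.001, 0.01, 0.1, 1]
print(grid_search(cs, gammas, cv_score))
```

The cost of this baseline is one full cross-validation per grid point, which is why population-based searches such as the clonal selection algorithm in this record can cut training time substantially: they evaluate far fewer parameter combinations.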

  19. Effect of insecticide resistance on development, longevity and reproduction of field or laboratory selected Aedes aegypti populations.

    PubMed

    Martins, Ademir Jesus; Ribeiro, Camila Dutra e Mello; Bellinato, Diogo Fernandes; Peixoto, Alexandre Afranio; Valle, Denise; Lima, José Bento Pereira

    2012-01-01

Aedes aegypti dispersion is the major reason for the increase in dengue transmission in South America. In Brazil, control of this mosquito relies strongly on the use of pyrethroids and organophosphates against adults and larvae, respectively. In consequence, many Ae. aegypti field populations are resistant to these compounds. Resistance has a significant adaptive value in the presence of insecticide treatment. However, some selected mechanisms can influence important biological processes, leading to a high fitness cost in the absence of insecticide pressure. We investigated the dynamics of insecticide resistance and its potential fitness cost in five field populations and in a lineage selected for deltamethrin resistance in the laboratory for nine generations. For all populations the life-trait parameters investigated were larval development, sex ratio, adult longevity, relative amount of ingested blood, rate of ovipositing females, size of egg laying, and egg viability. In the five natural populations, the effects on the life-trait parameters were discrete but directly proportional to resistance level. In addition, several viability parameters were strongly affected in the laboratory-selected population compared to its unselected control. Our results suggest that mechanisms selected for organophosphate and pyrethroid resistance caused the accumulation of alleles with negative effects on different life-traits and corroborate the hypothesis that insecticide resistance is associated with a high fitness cost.

  20. Effect of Insecticide Resistance on Development, Longevity and Reproduction of Field or Laboratory Selected Aedes aegypti Populations

    PubMed Central

    Bellinato, Diogo Fernandes; Peixoto, Alexandre Afranio; Valle, Denise; Lima, José Bento Pereira

    2012-01-01

Aedes aegypti dispersion is the major reason for the increase in dengue transmission in South America. In Brazil, control of this mosquito relies strongly on the use of pyrethroids and organophosphates against adults and larvae, respectively. In consequence, many Ae. aegypti field populations are resistant to these compounds. Resistance has a significant adaptive value in the presence of insecticide treatment. However, some selected mechanisms can influence important biological processes, leading to a high fitness cost in the absence of insecticide pressure. We investigated the dynamics of insecticide resistance and its potential fitness cost in five field populations and in a lineage selected for deltamethrin resistance in the laboratory for nine generations. For all populations the life-trait parameters investigated were larval development, sex ratio, adult longevity, relative amount of ingested blood, rate of ovipositing females, size of egg laying, and egg viability. In the five natural populations, the effects on the life-trait parameters were discrete but directly proportional to resistance level. In addition, several viability parameters were strongly affected in the laboratory-selected population compared to its unselected control. Our results suggest that mechanisms selected for organophosphate and pyrethroid resistance caused the accumulation of alleles with negative effects on different life-traits and corroborate the hypothesis that insecticide resistance is associated with a high fitness cost. PMID:22431967

  1. Rapid optimization of MRM-MS instrument parameters by subtle alteration of precursor and product m/z targets.

    PubMed

    Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B

    2009-07-01

    Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
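The core trick of this workflow, encoding each collision-energy variant as a distinct hundredth-decimal m/z offset so that all variants fit in a single acquisition, can be sketched directly. The m/z and collision-energy values below are illustrative, not taken from the paper.

```python
# Build an MRM "ladder": one target per candidate collision energy, each
# distinguished by a unique 0.01 m/z offset on both precursor and product.
# Instrument vendors accept such target lists as (precursor, product, CE) rows.

def make_ce_ladder(precursor_mz, product_mz, ce_values):
    """Return (precursor, product, CE) rows, each with a unique 0.01 offset."""
    rows = []
    for i, ce in enumerate(ce_values):
        offset = i * 0.01
        rows.append((round(precursor_mz + offset, 2),
                     round(product_mz + offset, 2),
                     ce))
    return rows

for row in make_ce_ladder(523.77, 685.38, [15, 20, 25, 30]):
    print(row)
```

Because the offsets are far smaller than a quadrupole's isolation window, every variant still transmits the same transition; after the run, the variant with the strongest product-ion signal identifies the optimal collision energy.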

  2. Rapid Optimization of MRM-MS Instrument Parameters by Subtle Alteration of Precursor and Product m/z Targets

    PubMed Central

    Sherwood, Carly A.; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A.; Martin, Daniel B.

    2009-01-01

    Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition. PMID:19405522

  3. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
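A minimal sketch of the first step, joint state/parameter estimation with an extended Kalman filter, is shown below on a toy scalar decay model x' = -k·x (not the heat-shock or gene-regulation models of the paper): the unknown rate k is appended to the state, so the filter estimates it from noisy measurements of x.

```python
# Joint-EKF sketch: augment the state with the unknown parameter k and let
# the filter correct both from noisy measurements. Pure-Python 2x2 algebra.

import random

def ekf_estimate_k(ys, dt=0.1, q=1e-6, r=0.01):
    x, k = ys[0], 0.5                 # initial guesses (k unknown)
    P = [[1.0, 0.0], [0.0, 1.0]]      # covariance of the augmented state
    for y in ys[1:]:
        # Predict: x <- x - k*x*dt, k <- k; F is the Jacobian of this map.
        F = [[1 - k * dt, -x * dt], [0.0, 1.0]]
        x = x - k * x * dt
        P = [[sum(F[i][a] * P[a][b] * F[j][b]
                  for a in range(2) for b in range(2))
              for j in range(2)] for i in range(2)]
        P[0][0] += q
        P[1][1] += q
        # Update with measurement y of x (observation matrix H = [1, 0]).
        s = P[0][0] + r
        kg = [P[0][0] / s, P[1][0] / s]      # Kalman gain
        innov = y - x
        x += kg[0] * innov
        k += kg[1] * innov
        P = [[P[i][j] - kg[i] * P[0][j] for j in range(2)] for i in range(2)]
    return k

random.seed(0)
true_k = 1.3
xs = [2.0]
for _ in range(400):
    xs.append(xs[-1] - true_k * xs[-1] * 0.1)
ys = [v + random.gauss(0, 0.01) for v in xs]
print(ekf_estimate_k(ys))   # should move toward true_k = 1.3
```

This covers only the "first guess" stage of the paper's pipeline; the identifiability check and the refinement optimization would follow on top of such an estimate.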

  4. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  5. Development of Software to Digitize Historic Hardcopy Seismograms from Nuclear Explosions

    DTIC Science & Technology

    2010-09-01

    portion. As will be discussed below, this complicates the preparation of the image for subsequent digitization because background threshold values are...is the output image and  −1 < β ≤ 0 is a user selectable parameter. Global contrast enhancement uses a whitening transform to make a given image

  6. How much complexity is warranted in a rainfall-runoff model?

    Treesearch

    A.J. Jakeman; G.M. Hornberger

    1993-01-01

Development of mathematical models relating the precipitation incident upon a catchment to the streamflow emanating from the catchment has been a major focus of surface water hydrology for decades. Generally, values for parameters in such models must be selected so that runoff calculated from the model "matches" recorded runoff from some historical period....

  7. Mixture optimization for mixed gas Joule-Thomson cycle

    NASA Astrophysics Data System (ADS)

    Detlor, J.; Pfotenhauer, J.; Nellis, G.

    2017-12-01

An appropriate gas mixture can provide lower temperatures and higher cooling power when used in a Joule-Thomson (JT) cycle than is possible with a pure fluid. However, selecting gas mixtures to meet specific cooling loads and cycle parameters is a challenging design problem. This study focuses on the development of a computational tool to optimize gas mixture compositions for specific operating parameters, expanding on prior research by exploring higher heat rejection temperatures and lower pressure ratios. A mixture optimization model has been developed which determines an optimal three-component mixture by maximizing the minimum isothermal enthalpy change, ΔhT, that occurs over the temperature range. This allows optimal mixture compositions to be determined for a mixed-gas JT system with load temperatures down to 110 K and supply temperatures above room temperature, for pressure ratios as small as 3:1. The mixture optimization model has been paired with a separate evaluation of the fraction of the heat exchanger operating in the two-phase region, in order to begin the process of selecting a mixture for experimental investigation.
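The selection criterion described above, maximizing the minimum isothermal enthalpy change over the temperature range, reduces to a maximin search over candidate compositions. In the sketch below the ΔhT values are synthetic stand-ins; a real implementation would query an equation-of-state property package for each candidate mixture.

```python
# Maximin selection over candidate three-component mixtures: keep the
# composition whose WORST-CASE isothermal enthalpy change over the operating
# temperature range is largest. dh_synthetic is a made-up surrogate property.

def maximin_mixture(candidates, dh_fn, temps):
    """Return the composition whose minimum dh over temps is largest."""
    return max(candidates, key=lambda c: min(dh_fn(c, t) for t in temps))

def dh_synthetic(comp, t):
    """Synthetic Delta-h_T: peaked when the three mole fractions are balanced."""
    balance = 1.0 - sum((x - 1.0 / 3.0) ** 2 for x in comp)
    return balance * (1.0 + 0.001 * t)

candidates = [(0.6, 0.2, 0.2), (0.34, 0.33, 0.33), (0.2, 0.2, 0.6)]
temps = range(110, 300, 10)       # K, spanning load to supply temperature
print(maximin_mixture(candidates, dh_synthetic, temps))
```

Maximizing the worst-case ΔhT is a natural choice for a JT recuperator because the pinch point, the temperature where the enthalpy difference between streams is smallest, limits the whole cycle's cooling capacity.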

  8. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  9. The effect of the combination of acids and tannin in diet on the performance and selected biochemical, haematological and antioxidant enzyme parameters in grower pigs

    PubMed Central

    2010-01-01

Background The abolition of in-feed antibiotics or chemotherapeutics as growth promoters has stimulated the swine industry to look for alternatives such as organic acids, botanicals, probiotics and tannin. The objective of the present study was to compare the effects of a diet with a combination of acids and tannin against a diet with organic acids and a diet without growth promoters on the growth performance and selected biochemical, haematological and antioxidant enzyme parameters in grower pigs. Tannin is more natural and cheaper, but possibly as effective as organic acids with regard to growth performance. Methods Thirty-six 7-week-old grower pigs, divided into three equal groups, were used in a three-week feeding trial. Group I was fed a basal diet, group II the basal diet with added organic acids, and group III the basal diet with added organic and inorganic acids and tannin. Pigs were weighed before and after feeding and observed daily. Blood was collected before and after the feeding trial for the determination of selected biochemical, haematological and antioxidant enzyme parameters. One-way ANOVA was used to assess any diet-related changes in all the parameters. A paired t-test was used to evaluate changes in blood parameters individually in each group of growers before and after feeding. Results No clinical health problems related to diet were noted during the three-week feeding trial. The average daily gain (ADG) and selected blood parameters were not affected by the addition to the basal diet of either acids and tannin or organic acids alone. Selected blood parameters remained within the reference range before and after the feeding trial, with the exception of total serum proteins, which were below the lower value of the reference range at both times. The significant changes (paired t-test) observed in individual groups before and after the feeding trial are related to the growth of the pigs. Conclusion The diet with acids and tannin did not improve the growth performance of grower pigs but had no deleterious effects on selected blood parameters. The possibility of beneficial effects of adding acids and tannin to diets on growth performance over a longer period, however, could not be excluded. PMID:20205921

  10. Intratumoral heterogeneity characterized by pretreatment PET in non-small cell lung cancer patients predicts progression-free survival on EGFR tyrosine kinase inhibitor

    PubMed Central

    Paeng, Jin Chul; Keam, Bhumsuk; Kim, Tae Min; Kim, Dong-Wan; Heo, Dae Seog

    2018-01-01

    Intratumoral heterogeneity has been suggested to be an important resistance mechanism leading to treatment failure. We hypothesized that radiologic images could be an alternative method for identifying tumor heterogeneity. We tested heterogeneity textural parameters on pretreatment FDG-PET/CT in order to assess their value for predicting the response to targeted therapy. Recurrent or metastatic non-small cell lung cancer (NSCLC) subjects with an activating EGFR mutation treated with either gefitinib or erlotinib were reviewed. An exploratory data set (n = 161) and a validation data set (n = 21) were evaluated, and eight parameters were selected for survival analysis. The optimal cutoff value was determined by the recursive partitioning method, and the predictive value was calculated using Harrell's C-index. Univariate analysis revealed that all eight parameters showed an increased hazard ratio (HR) for progression-free survival (PFS). The highest HR was 6.41 (P < 0.01), with co-occurrence (Co) entropy. Increased risk remained present after adjusting for initial stage, performance status (PS), and metabolic volume (MV) (aHR: 4.86, P < 0.01). Textural parameters were found to have an incremental predictive value for early EGFR tyrosine kinase inhibitor (TKI) failure compared to that of the base model of stage and PS (C-index 0.596 vs. 0.662, P = 0.02, by Co entropy). Heterogeneity textural parameters acquired from pretreatment FDG-PET/CT are highly predictive factors for PFS on EGFR TKI in EGFR-mutated NSCLC patients. These parameters are easily applicable to the identification of a subpopulation at increased risk of early EGFR TKI failure. Correlation with genomic alterations should be determined in future studies. PMID:29385152
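    The co-occurrence (Co) entropy named above is a standard gray-level co-occurrence matrix (GLCM) texture feature. A minimal sketch follows; the toy 4-level images and the single pixel offset are illustrative assumptions, not the study's PET processing pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalised."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return m / m.sum()

def co_entropy(p):
    """Co-occurrence entropy: higher values indicate more heterogeneous texture."""
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

# Toy 4-level "uptake maps": uniform vs. heterogeneous.
homogeneous = np.zeros((8, 8), dtype=int)
heterogeneous = np.random.default_rng(0).integers(0, 4, (8, 8))
e_homo = co_entropy(glcm(homogeneous, 4))
e_hetero = co_entropy(glcm(heterogeneous, 4))
```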

  11. A supplier selection and order allocation problem with stochastic demands

    NASA Astrophysics Data System (ADS)

    Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua

    2011-08-01

    We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
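    The two-level structure (an outer search over the replenishment quantity and an inner selection/allocation subproblem) can be illustrated with a toy sketch. The supplier data, demand distribution, brute-force inner solver and grid over quantities are all hypothetical stand-ins for the paper's polynomial algorithm and heuristic:

```python
import itertools

# All numbers below are hypothetical: (fixed ordering cost, unit cost, capacity).
suppliers = [(20.0, 2.0, 30), (5.0, 3.0, 50), (12.0, 2.5, 40)]
demand_pmf = {20: 0.25, 40: 0.5, 60: 0.25}   # one-period stochastic demand
holding, shortage = 0.5, 4.0                  # per-unit mismatch costs

def inner_cost(q):
    """Cheapest supplier selection and allocation for q units.

    Brute force over supplier subsets; within a subset the cheapest unit
    cost is filled first. This stands in for the paper's polynomial
    inner-level algorithm."""
    if q == 0:
        return 0.0
    best = float("inf")
    for r in range(1, len(suppliers) + 1):
        for subset in itertools.combinations(suppliers, r):
            if sum(cap for _, _, cap in subset) < q:
                continue
            cost, left = sum(f for f, _, _ in subset), q
            for _, c, cap in sorted(subset, key=lambda s: s[1]):
                take = min(cap, left)
                cost += take * c
                left -= take
            best = min(best, cost)
    return best

def expected_cost(q):
    """Ordering cost plus expected holding/shortage cost for order size q."""
    mismatch = sum(p * (holding * max(q - d, 0) + shortage * max(d - q, 0))
                   for d, p in demand_pmf.items())
    return inner_cost(q) + mismatch

# Outer level: here simply a grid search over candidate order quantities.
best_q = min(range(0, 61, 10), key=expected_cost)
```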

  12. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  13. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-04-29

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, considering the mean performance of the genotypes in all environments (no interaction effect); the performance in each environment (with an interaction effect); and simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection.

  14. Indices estimated using REML/BLUP and introduction of a super-trait for the selection of progenies in popcorn.

    PubMed

    Vittorazzi, C; Amaral Junior, A T; Guimarães, A G; Viana, A P; Silva, F H L; Pena, G F; Daher, R F; Gerhardt, I F S; Oliveira, G H F; Pereira, M G

    2017-09-27

    Selection indices commonly utilize economic weights, which can make the resulting genetic gains arbitrary. In popcorn, this is even more evident due to the negative correlation between the main characteristics of economic importance - grain yield and popping expansion. As an alternative to classical biometric selection indices, the optimal procedure restricted maximum likelihood/best linear unbiased predictor (REML/BLUP) allows the simultaneous estimation of genetic parameters and the prediction of genotypic values. Based on the mixed model methodology, the objective of this study was to investigate the comparative efficiency of eight selection indices estimated by REML/BLUP for the effective selection of superior popcorn families in the eighth intrapopulation recurrent selection cycle. We also investigated the efficiency of including the variable "expanded popcorn volume per hectare" in the selection of superior progenies. In total, 200 full-sib families were evaluated in two different areas in the North and Northwest regions of the State of Rio de Janeiro, Brazil. The REML/BLUP procedure resulted in higher estimated gains than those obtained with classical biometric selection index methodologies and should be incorporated into the selection of progenies. The following indices resulted in higher gains in the characteristics of greatest economic importance: the classical selection index/values attributed by trial, via REML/BLUP, and the greatest genotypic values/expanded popcorn volume per hectare, via REML. The expanded popcorn volume per hectare characteristic enabled satisfactory gains in grain yield and popping expansion; this characteristic should be considered a super-trait in popcorn breeding programs.

  15. Geometric parameter analysis to predetermine optimal radiosurgery technique for the treatment of arteriovenous malformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia

    2005-11-01

    Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.

  16. Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation

    NASA Astrophysics Data System (ADS)

    Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric

    2014-08-01

    In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of the parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability and makes an online-learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and is often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online, by using specifically selected and online-recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing the online-recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
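    A minimal sketch of the two key quantities: the minimum singular value of the recorded-data stack, and a gradient-style parameter update driven by recorded data alone. The basis, data points and gain are illustrative assumptions, not the paper's flight-test setup:

```python
import numpy as np

# Hypothetical scalar system: uncertainty modelled with known nonlinear bases.
true_w = np.array([1.5, -2.0, 0.7])          # ideal parameters (unknown to the law)
phi = lambda x: np.array([1.0, x, x * x])    # known basis functions

# "History stack": regressors recorded online at a few exciting instants.
xs = [0.2, 0.9, 1.7, 2.4]
stack = np.array([phi(x) for x in xs])
ys = stack @ true_w                          # recorded measurements

# The convergence rate scales with the stack's minimum singular value.
sigma_min = np.linalg.svd(stack, compute_uv=False).min()

# Gradient-style concurrent-learning update driven by recorded data only;
# no persistent excitation of the instantaneous state is needed.
w, gamma = np.zeros(3), 0.01
for _ in range(50_000):
    w = w + gamma * stack.T @ (ys - stack @ w)
```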

  17. Determination of remodeling parameters for a strain-adaptive finite element model of the distal ulna.

    PubMed

    Neuert, Mark A C; Dunning, Cynthia E

    2013-09-01

    Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution was compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
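    Updates of this kind are commonly written with a homeostatic set point K and a "lazy zone" of half-width s·K in which no remodelling occurs. The sketch below assumes that standard Huiskes-style formulation with illustrative constants, not the study's exact implementation:

```python
def remodel_step(rho, strain_energy, K, s, B=1.0, dt=1.0,
                 rho_min=0.01, rho_max=1.73):
    """One step of a Huiskes-style strain-adaptive density update.

    The stimulus is strain energy per unit density; K is the homeostatic
    set point and s defines a 'lazy zone' of half-width s*K in which no
    remodelling occurs. All constants here are illustrative."""
    stimulus = strain_energy / rho
    if stimulus > (1 + s) * K:               # overloaded: deposit bone
        rho += dt * B * (stimulus - (1 + s) * K)
    elif stimulus < (1 - s) * K:             # underloaded: resorb bone
        rho += dt * B * (stimulus - (1 - s) * K)
    return min(max(rho, rho_min), rho_max)   # clamp to physical densities
```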

  18. Determination of Air Enthalpy Based on Meteorological Data as an Indicator for Heat Stress Assessment in Occupational Outdoor Environments, a Field Study in IRAN.

    PubMed

    Heidari, Hamidreza; Golbabaei, Farideh; Shamsipour, Aliakbar; Rahimi Forushani, Abbas; Gaeini, Abbasali

    2016-01-01

    Heat stress evaluation and timely notification, especially using meteorological data, is an important issue that has attracted attention in recent years. This study therefore aimed to answer the following research questions: 1) can enthalpy, as a common environmental parameter reported by meteorological agencies, be applied accurately to evaluate the thermal conditions of outdoor settings, and 2) if so, what is the best criterion for classifying areas as heat-stressed or stress-free? Nine climatic regions were selected throughout Iran, covering a wide variety of climatic conditions similar to those found around the world. Three types of parameters, including measured (ta, RH, Pa and WBGT), estimated (metabolic rate and clothing thermal insulation), and calculated parameters (enthalpy and effective WBGT), were recorded for 1452 different situations. Enthalpy, as a new indicator in this research, was compared to WBGT in the selected regions. Altogether, good consistency was obtained between enthalpy and WBGT in the selected regions (Kappa value: 0.815). Based on the ROC curve obtained using MedCalc software, a criterion of values above 74.24 was determined for the new index to indicate heat stress in outdoor environments. Because of its simplicity of measurement, its applicability for weather agencies, the consistency observed between enthalpy and a valid and accurate index (WBGT), and sensor requirements that take only a few seconds to reach equilibrium, the enthalpy indicator can be introduced and applied as a good substitute for WBGT in outdoor settings.
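    Moist-air enthalpy can be computed from routinely reported meteorological data. The sketch below uses a standard psychrometric formulation with a Magnus-type saturation vapour pressure; the coefficients are common textbook values and may differ in detail from the study's calculation:

```python
import math

def air_enthalpy(t_c, rh, p_kpa=101.325):
    """Specific enthalpy of moist air (kJ per kg of dry air).

    t_c: dry-bulb temperature (deg C); rh: relative humidity (0-100 %).
    Uses a Magnus-type saturation vapour pressure approximation."""
    p_ws = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))   # saturation, kPa
    p_w = rh / 100.0 * p_ws                                  # partial pressure
    w = 0.622 * p_w / (p_kpa - p_w)                          # humidity ratio
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)
```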

  19. Chemical and toxicological evaluation of underground coal gasification (UCG) effluents. The coal rank effect.

    PubMed

    Kapusta, Krzysztof; Stańczyk, Krzysztof

    2015-02-01

    The effect of coal rank on the composition and toxicity of water effluents resulting from two underground coal gasification experiments with distinct coal samples (lignite and hard coal) was investigated. A broad range of organic and inorganic parameters was determined in the sampled condensates. The physicochemical tests were supplemented by toxicity bioassays based on the luminescent bacterium Vibrio fischeri as the test organism. Principal component analysis and Pearson correlation analysis were adopted to assist in the interpretation of the raw experimental data, and the multiple regression statistical method was subsequently employed to enable predictions of the toxicity based on the values of the selected parameters. Significant differences in the qualitative and quantitative description of the contamination profiles were identified for both types of coal under study. Independent of the coal rank, the most characteristic organic components of the studied condensates were phenols, naphthalene and benzene. In the inorganic array, ammonia, sulphates and selected heavy metals and metalloids were identified as the dominant constituents. Except for benzene with its alkyl homologues (BTEX), selected polycyclic aromatic hydrocarbons (PAHs), zinc and selenium, the values of the remaining parameters were considerably greater for the hard coal condensates. The studies revealed that all of the tested UCG condensates were extremely toxic to V. fischeri; however, the average toxicity level for the hard coal condensates was approximately 56% higher than that obtained for the lignite. The statistical analysis indicated that the toxicity of the condensates was most strongly correlated with the concentrations of free ammonia, phenols and certain heavy metals. Copyright © 2014 Elsevier Inc. All rights reserved.
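    The multiple-regression step (predicting toxicity from selected condensate parameters) can be sketched with a least-squares fit; the concentrations and toxicity values below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Invented condensate measurements: columns are [free ammonia, phenols] (mg/L),
# and a toxicity response (e.g. % inhibition of V. fischeri luminescence).
X = np.array([[120, 80], [300, 150], [450, 260], [610, 340], [820, 500]], float)
tox = np.array([22.0, 41.0, 60.0, 78.0, 99.0])

# Multiple linear regression by least squares, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, tox, rcond=None)
pred = A @ coef
r2 = 1 - ((tox - pred) ** 2).sum() / ((tox - tox.mean()) ** 2).sum()
```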

  20. Estimating the parameters of background selection and selective sweeps in Drosophila in the presence of gene conversion

    PubMed Central

    Campos, José Luis; Charlesworth, Brian

    2017-01-01

    We used whole-genome resequencing data from a population of Drosophila melanogaster to investigate the causes of the negative correlation between the within-population synonymous nucleotide site diversity (πS) of a gene and its degree of divergence from related species at nonsynonymous nucleotide sites (KA). By using the estimated distributions of mutational effects on fitness at nonsynonymous and UTR sites, we predicted the effects of background selection at sites within a gene on πS and found that these could account for only part of the observed correlation between πS and KA. We developed a model of the effects of selective sweeps that included gene conversion as well as crossing over. We used this model to estimate the average strength of selection on positively selected mutations in coding sequences and in UTRs, as well as the proportions of new mutations that are selectively advantageous. Genes with high levels of selective constraint on nonsynonymous sites were found to have lower strengths of positive selection and lower proportions of advantageous mutations than genes with low levels of constraint. Overall, background selection and selective sweeps within a typical gene reduce its synonymous diversity to ∼75% of its value in the absence of selection, with larger reductions for genes with high KA. Gene conversion has a major effect on the estimates of the parameters of positive selection, such that the estimated strength of selection on favorable mutations is greatly reduced if it is ignored. PMID:28559322

  1. Dependence of intravoxel incoherent motion diffusion MR threshold b-value selection for separating perfusion and diffusion compartments and liver fibrosis diagnostic performance.

    PubMed

    Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan

    2018-01-01

    Background Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm² showed the largest mean distance between healthy-liver and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239 × 10⁻³ mm²/s), and Dfast (Dfast < 20.85 × 10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
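    Segmented-unconstrained IVIM analysis can be sketched as a two-step fit: a mono-exponential fit above the threshold b-value for Dslow and the tissue intercept, then a fit to the residual perfusion signal below it for Dfast. The synthetic signal and fitting details below are illustrative; the study's routine may differ:

```python
import numpy as np

def segmented_ivim(b, s, b_thresh=60.0):
    """Segmented-unconstrained IVIM fit (illustrative implementation).

    Step 1: mono-exponential fit over b >= b_thresh gives Dslow and the
    perfusion-free intercept; PF follows from the b = 0 signal.
    Step 2: Dfast comes from the residual perfusion signal at low b."""
    hi = b >= b_thresh
    slope, log_s0 = np.polyfit(b[hi], np.log(s[hi]), 1)
    d_slow, s0_tissue = -slope, np.exp(log_s0)
    pf = 1.0 - s0_tissue / s[0]              # assumes s[0] is the b = 0 sample
    lo = (b > 0) & (b < b_thresh)
    perf = np.clip(s[lo] - s0_tissue * np.exp(-b[lo] * d_slow), 1e-9, None)
    slope2, _ = np.polyfit(b[lo], np.log(perf), 1)
    return pf, d_slow, -slope2

# Synthetic bi-exponential signal with liver-like parameters.
b = np.array([0, 10, 20, 40, 60, 80, 100, 150, 200, 400, 800.0])
pf_t, d_t, dstar_t = 0.15, 1.1e-3, 60e-3
s = (1 - pf_t) * np.exp(-b * d_t) + pf_t * np.exp(-b * dstar_t)
pf, d_slow, d_fast = segmented_ivim(b, s)
```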

  2. Optimum Strategies for Selecting Descent Flight-Path Angles

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)

    2016-01-01

    An information processing system and method for adaptively selecting an aircraft descent flight path are provided. The system receives flight adaptation parameters, including the aircraft flight descent time period, aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and an airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory, and a flyability constraints metric for the calculated trajectory. The system selects the best candidate descent profile having the least fuel consumption value while the flyability constraints metric remains within the aircraft flight descent flyability constraints.
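    The selection step reduces to filtering candidates by the flyability constraint and taking the minimum-fuel profile. The candidate values and normalised limit below are invented for illustration, not taken from the patent:

```python
# Invented candidate descent profiles: flight-path angle (deg), aggregated
# fuel consumption (kg) and a flyability-constraints metric per trajectory.
candidates = [
    {"fpa": -2.0, "fuel": 180.0, "flyability": 0.4},
    {"fpa": -2.5, "fuel": 150.0, "flyability": 0.7},
    {"fpa": -3.0, "fuel": 135.0, "flyability": 1.2},  # violates the limit
]
FLYABILITY_LIMIT = 1.0   # assumed normalised constraint threshold

feasible = [c for c in candidates if c["flyability"] <= FLYABILITY_LIMIT]
best = min(feasible, key=lambda c: c["fuel"])   # least fuel among feasible
```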

  3. Prediction of bioavailability of selected bisphosphonates using in silico methods towards categorization into a biopharmaceutical classification system.

    PubMed

    Biernacka, Joanna; Betlejewska-Kielak, Katarzyna; Kłosińska-Szmurło, Ewa; Pluciński, Franciszek A; Mazurek, Aleksander P

    2013-01-01

    The physicochemical properties relevant to the biological activity of selected bisphosphonates, namely clodronate disodium salt, etidronate disodium salt, pamidronate disodium salt, alendronate sodium salt, ibandronate sodium salt, risedronate sodium salt and zoledronate disodium salt, were determined using in silico methods. The main aim of our research was to investigate and propose molecular determinants that affect the bioavailability of the above-mentioned compounds. These determinants are: stabilization energy (deltaE), free energy of solvation (deltaG(solv)), electrostatic potential, dipole moment, as well as partition and distribution coefficients estimated by the log P and log D values. The presented values indicate that the selected bisphosphonates are characterized by high solubility and low permeability. The calculated parameters describing both solubility and permeability through biological membranes seem to be good bioavailability indicators for the bisphosphonates examined and can be a useful tool for inclusion in Biopharmaceutical Classification System (BCS) development.

  4. Selection of latent variables for multiple mixed-outcome models

    PubMed Central

    ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI

    2014-01-01

    Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219

  5. High throughput screening: an in silico solubility parameter approach for lipids and solvents in SLN preparations.

    PubMed

    Shah, Malay; Agrawal, Yadvendra

    2013-01-01

    The present paper describes the in silico solubility behavior of a drug and lipids, an essential screening study in the preparation of solid lipid nanoparticles (SLN). Ciprofloxacin HCl was selected as a model drug along with 11 lipids and 5 organic solvents. An in silico miscibility study of drug/lipid/solvent was performed using the Hansen solubility parameter approach, calculated by the group contribution method of Van Krevelen and Hoftyzer. The predicted solubility was validated by determining the solubility of the lipids in various solvents over a range of temperatures, while the miscibility of the drug in the lipids was determined by an apparent solubility study and a partition experiment. The presence of oxygen and OH functionality increases the polarity and hydrogen-bonding possibilities of a compound, which is reflected in the highest solubility parameter values for Geleol and Capmul MCM C8. Ethyl acetate, Geleol and Capmul MCM C8 were identified as the suitable organic solvent, solid lipid and liquid lipid, respectively, based on the solubility parameter approach, in agreement with the results of the apparent solubility study and partition coefficient. This work demonstrates the validity of the solubility parameter approach and provides a feasible predictor for the rational selection of excipients in designing SLN formulations.
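    The Hansen approach combines dispersion, polar and hydrogen-bonding components; the total parameter and the conventional Ra distance in Hansen space can be sketched as below. The HSP triples are common literature values, and the group-contribution estimation itself is not reproduced:

```python
import math

def hansen_total(d_d, d_p, d_h):
    """Total Hansen solubility parameter (MPa^0.5) from the dispersion,
    polar and hydrogen-bonding components."""
    return math.sqrt(d_d**2 + d_p**2 + d_h**2)

def hansen_distance(a, b):
    """Ra, the conventional distance in Hansen space; a smaller value
    suggests the two materials are more likely to be miscible."""
    return math.sqrt(4 * (a[0] - b[0])**2 + (a[1] - b[1])**2 + (a[2] - b[2])**2)

# Literature HSP values (dispersion, polar, hydrogen bonding), MPa^0.5.
ethyl_acetate = (15.8, 5.3, 7.2)
water = (15.5, 16.0, 42.3)
```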

  6. Practical identifiability analysis of a minimal cardiovascular system model.

    PubMed

    Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas

    2017-01-17

    Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess the practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased arbitrarily without degrading the fit. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction, and use of larger values for the reference range of venous pressure, made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
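    Profile-likelihood curves of the kind used here fix one parameter on a grid and re-optimise the others at each grid point; a parameter is practically identifiable when the curve has a well-defined minimum. A toy two-parameter example (not the cardiovascular model):

```python
import numpy as np

# Toy model y = a * exp(-b * t) with a small amount of measurement noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.8 * t) + rng.normal(0.0, 0.02, t.size)

def profile_ssr(b):
    """Residual sum of squares with b fixed and a re-optimised.

    For this model the nuisance parameter a has a closed-form
    least-squares optimum at each fixed b."""
    basis = np.exp(-b * t)
    a_hat = basis @ y / (basis @ basis)
    r = y - a_hat * basis
    return r @ r

bs = np.linspace(0.2, 2.0, 181)
curve = np.array([profile_ssr(v) for v in bs])
b_hat = bs[curve.argmin()]   # identifiable: the profile has a clear minimum
```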

  7. Pre-selection and assessment of green organic solvents by clustering chemometric tools.

    PubMed

    Tobiszewski, Marek; Nedyalkova, Miroslava; Madurga, Sergio; Pena-Pereira, Francisco; Namieśnik, Jacek; Simeonov, Vasil

    2018-01-01

    The study presents the results of applying chemometric tools to the selection of physicochemical parameters of solvents for predicting missing variables - bioconcentration factors and water-octanol and octanol-air partitioning constants. EPI Suite software was successfully applied to predict missing values for solvents commonly considered "green". Values for logBCF, logKOW and logKOA were modelled for 43 rather nonpolar solvents and 69 polar ones. The application of multivariate statistics also proved useful in the assessment of the obtained modelling results. The presented approach can be one of the first steps and support tools in the assessment of chemicals in terms of their greenness. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The predicted influence of climate change on lesser prairie-chicken reproductive parameters

    USGS Publications Warehouse

    Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, D.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.
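    The simulation step (propagating a projected weather value through each parameter's fitted linear equation, 1,000 times) can be sketched with invented coefficients and projections; none of the numbers below are the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented regression fit and climate projection -- not the study's numbers:
# nest survival = intercept + slope * (mean winter temperature).
intercept, slope, resid_se = 0.65, -0.04, 0.05
temp_mean_2050, temp_se_2050 = 6.5, 1.0      # projected winter temp (deg C)

# 1,000 Monte Carlo draws of the projected temperature, each propagated
# through the linear equation with residual noise, then clipped to [0, 1].
temps = rng.normal(temp_mean_2050, temp_se_2050, 1000)
survival = np.clip(intercept + slope * temps + rng.normal(0, resid_se, 1000),
                   0.0, 1.0)
predicted_survival = survival.mean()
```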

  9. The predicted influence of climate change on lesser prairie-chicken reproductive parameters.

    PubMed

    Grisham, Blake A; Boal, Clint W; Haukos, David A; Davis, Dawn M; Boydston, Kathy K; Dixon, Charles; Heck, Willard R

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001-2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter's linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival.
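
    The simulation step described in the abstract (1,000 draws of a predicted weather variable pushed through each reproductive parameter's regression equation) can be sketched as follows. The regression coefficients and the temperature distribution below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

# Hypothetical stand-ins: a linear regression of nest survival on mean winter
# temperature, and a predicted winter-temperature distribution for 2050.
rng = np.random.default_rng(3)
beta0, beta1 = 0.62, -0.04                    # intercept, slope (assumed)
temp_2050 = rng.normal(6.5, 1.2, 1000)        # 1,000 simulated winter temps (deg C)

survival = beta0 + beta1 * temp_2050          # push each draw through the equation
mean = survival.mean()
lo, hi = np.percentile(survival, [2.5, 97.5])
print(f"predicted nest survival: {mean:.3f} (95% interval {lo:.3f}-{hi:.3f})")
```

    Repeating this for each reproductive parameter yields a predicted distribution rather than a single point estimate, which is how the abstract's uncertainty statements arise.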

  10. The structural, morphological and thermal properties of grafted pH-sensitive interpenetrating highly porous polymeric composites of sodium alginate/acrylic acid copolymers for controlled delivery of diclofenac potassium.

    PubMed

    Jalil, Aamir; Khan, Samiullah; Naeem, Fahad; Haider, Malik Suleman; Sarwar, Shoaib; Riaz, Amna; Ranjha, Nazar Muhammad

    2017-01-01

    In the present investigation, new formulations of sodium alginate/acrylic acid hydrogels with a highly porous structure were synthesized by the free-radical polymerization technique for controlled delivery of an analgesic agent to the colon. Structural parameters such as the molecular weight between crosslinks (Mc), crosslink density (Mr), volume interaction parameter (v2,s), Flory-Huggins water interaction parameter and diffusion coefficient (Q) were calculated. Water uptake studies were conducted in different USP phosphate buffer solutions. All samples showed higher swelling ratios with increasing pH because of the ionization of carboxylic groups at higher pH values. The porosity and gel fraction of all the samples were calculated. Selected samples were then loaded with the model drug (diclofenac potassium). The amounts of drug loaded and released were determined, and all the samples showed higher drug release at higher pH values. The release of diclofenac potassium was found to depend on the sodium alginate/acrylic acid ratio, EGDMA, and the pH of the medium. The experimental data were fitted to various model equations and the corresponding parameters were calculated to study the release mechanism. The structural, morphological and thermal properties of the interpenetrating hydrogels were studied by FTIR, XRD, DSC and SEM.

  11. An Analysis of Control Requirements and Control Parameters for Direct-Coupled Turbojet Engines

    NASA Technical Reports Server (NTRS)

    Novik, David; Otto, Edward W.

    1947-01-01

    Requirements of an automatic engine control, as affected by engine characteristics, have been analyzed for a direct-coupled turbojet engine. Control parameters for various conditions of engine operation are discussed. A hypothetical engine control is presented to illustrate the use of these parameters. An adjustable speed governor was found to offer a desirable method of over-all engine control. The selection of a minimum value of fuel flow was found to offer a means of preventing unstable burner operation during steady-state operation. Until satisfactory high-temperature-measuring devices are developed, air-fuel ratio is considered to be a satisfactory acceleration-control parameter for the attainment of the maximum acceleration rates consistent with safe turbine temperatures. No danger of unstable burner operation exists during acceleration if a temperature-limiting acceleration control is assumed to be effective. Deceleration was found to be accompanied by the possibility of burner blow-out even if a minimum fuel-flow control that prevents burner blow-out during steady-state operation is assumed to be effective. Burner blow-out during deceleration may be eliminated by varying the value of minimum fuel flow as a function of compressor-discharge pressure, but in no case should the fuel flow be allowed to fall below the value required for steady-state burner operation.

  12. Results of the exploratory drill hole Ue5n, Frenchman Flat, Nevada Test Site. [Geologic and geophysical parameters of selected locations with anomalous seismic signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramspott, L.D.; McArthur, R.D.

    1977-02-18

    Exploratory hole Ue5n was drilled to a depth of 514 m in central Frenchman Flat, Nevada Test Site, as part of a program sponsored by the Nuclear Monitoring Office (NMO) of the Advanced Research Projects Agency (ARPA) to determine the geologic and geophysical parameters of selected locations with anomalous seismic signals. The specific goal of drilling Ue5n was to provide the site characteristics for emplacement sites U5b and U5e. We present here data on samples, geophysical logs, lithology and stratigraphy, and depth to the water table. From an analysis of the measurements of the physical properties, a set of recommended values is given.

  13. Criteria for the use of regression analysis for remote sensing of sediment and pollutants

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.

    1982-01-01

    An examination of the limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criterion for selecting an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data-smoothing requirements needed to satisfy the criterion of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.

  14. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
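
    The "parameter identifiability" statistic lends itself to a short numerical sketch. Here the weighted sensitivity matrix and the solution-space dimension are invented for illustration; only the SVD-and-projection machinery follows the abstract:

```python
import numpy as np

# J is an assumed weighted sensitivity matrix (observations x parameters).
rng = np.random.default_rng(0)
J = rng.normal(size=(8, 5))

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = 3                           # assumed dimension of the calibration solution space
V_k = Vt[:k].T                  # columns span the solution space in parameter space

# Identifiability of parameter i = direction cosine between the i-th parameter
# axis and its projection onto the solution space, i.e. the length of the
# projection of a unit vector along that axis.
identifiability = np.sqrt((V_k ** 2).sum(axis=1))
print(identifiability)          # each value lies between 0 and 1
```

    A value near 1 means the calibration dataset constrains that parameter almost completely; near 0, the parameter lies almost entirely in the null space.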

  15. Sustainable breeding objectives and possible selection response: Finding the balance between economics and breeders' preferences.

    PubMed

    Fuerst-Waltl, Birgit; Fuerst, Christian; Obritzhauser, Walter; Egger-Danner, Christa

    2016-12-01

    To optimize breeding objectives of Fleckvieh and Brown Swiss cattle, economic values were re-estimated using updated prices, costs, and population parameters. Subsequently, the expected selection responses for the total merit index (TMI) were calculated using previous and newly derived economic values. The responses were compared for alternative scenarios that consider breeders' preferences. A dairy herd with milk production, bull fattening, and rearing of replacement stock was modeled. The economic value of a trait was derived by calculating the difference in herd profit before and after genetic improvement. Economic values for each trait were derived while keeping all other traits constant. The traits considered were dairy, beef, and fitness traits, the latter including direct health traits. The calculation of the TMI and the expected selection responses was done using selection index methodology with estimated breeding values instead of phenotypic deviations. For the scenario representing the situation up to 2016, all traits included in the TMI were considered with their respective economic values before the update. Selection response was also calculated for newly derived economic values and some alternative scenarios, including the new trait vitality index (subindex comprising stillbirth and rearing losses). For Fleckvieh, the relative economic values for the trait groups milk, beef, and fitness were 38, 16, and 46%, respectively, up to 2016, and 39, 13, and 48%, respectively, for the newly derived economic values. Approximately the same selection response may be expected for the milk trait group, whereas the new weightings resulted in a substantially decreased response in beef traits. Within the fitness block, all traits, with the exception of fertility, showed a positive selection response. For Brown Swiss, the relative economic values for the main trait groups milk, beef, and fitness were 48, 5, and 47% before 2016, respectively, whereas for the newly derived scenario they were 40, 14, and 39%. For both Brown Swiss and Fleckvieh, the fertility complex was expected to further deteriorate, whereas all other expected selection responses for fitness traits were positive. Several additional and alternative scenarios were calculated as a basis for discussion with breeders. A decision was made to implement TMI with relative economic values for milk, beef, and fitness of 38, 18, and 44% for Fleckvieh and 50, 5, and 45% for Brown Swiss, respectively. In both breeds, no positive expected selection response was predicted for fertility, although this trait complex received a markedly higher weight than that derived economically. An even higher weight for fertility could not be agreed on due to the effect on the selection response of other traits. Hence, breeders decided to direct more attention toward the preselection of bulls with regard to fertility. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Influence of laser power on the penetration depth and geometry of scanning tracks in selective laser melting

    NASA Astrophysics Data System (ADS)

    Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward

    2016-12-01

    SLM technology allows production of fully functional objects from metal and ceramic powders, with a true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during it, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variable, and their optimal set allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, exposure time, distance between lines, and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed; to select the best sets, the problem is narrowed to three parameters: laser power, exposure time, and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In the experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser with a wavelength of 1064 nm. For each track, the width, height, and penetration depth of the laser beam were measured.

  17. Reference Values for Cardiac and Aortic Magnetic Resonance Imaging in Healthy, Young Caucasian Adults.

    PubMed

    Eikendal, Anouk L M; Bots, Michiel L; Haaring, Cees; Saam, Tobias; van der Geest, Rob J; Westenberg, Jos J M; den Ruijter, Hester M; Hoefer, Imo E; Leiner, Tim

    2016-01-01

    Reference values for morphological and functional parameters of the cardiovascular system in early life are relevant since they may help to identify young adults who fall outside the physiological range of arterial and cardiac ageing. This study provides age- and sex-specific reference values for aortic wall characteristics, cardiac function parameters and aortic pulse wave velocity (PWV) in a population-based sample of healthy, young adults using magnetic resonance (MR) imaging. In 131 randomly selected healthy, young adults aged between 25 and 35 years (mean age 31.8 years, 63 men) of the general-population-based Atherosclerosis-Monitoring-and-Biomarker-measurements-In-The-YOuNg (AMBITYON) study, descending thoracic aortic dimensions and wall thickness, thoracic aortic PWV and cardiac function parameters were measured using a 3.0T MR-system. Age- and sex-specific reference values were generated using dedicated software. Differences in reference values between two age groups (25-30 and 30-35 years) and both sexes were tested. Aortic diameters and areas were higher in the older age group (all p<0.007). Moreover, aortic dimensions, left ventricular mass, left and right ventricular volumes and cardiac output were lower in women than in men (all p<0.001). For mean and maximum aortic wall thickness, left and right ejection fraction and aortic PWV we did not observe a significant age or sex effect. This study provides age- and sex-specific reference values for cardiovascular MR parameters in healthy, young Caucasian adults. These may aid in MR-guided pre-clinical identification of young adults who fall outside the physiological range of arterial and cardiac ageing.

  18. Designing occupancy studies when false-positive detections occur

    USGS Publications Warehouse

    Clement, Matthew

    2016-01-01

    1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false-positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.

  19. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte Carlo method (the method of statistical trials), as an application for processing meteor observations, was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of the meteor - which has the greatest influence on the precision with which the elements of the meteor's heliocentric orbit are calculated. In the classical approach the velocity vector was calculated in two stages: first, the vector direction was calculated as the vector product of the poles of the great circles of the meteor trajectory obtained from the two observational points; then the absolute value of the velocity was calculated independently from each observational point, one of which was selected, for whatever reason, as final. In the given method we propose instead to obtain the statistical distribution of the absolute value of the velocity as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect such an approach to substantially increase the precision of the meteor velocity calculation and to remove subjective inaccuracies.
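
    The proposed intersection of the two per-station velocity distributions can be illustrated numerically. The two Gaussian densities below are hypothetical stand-ins for the distributions the Monte Carlo trials would produce:

```python
import numpy as np

# Combine two per-station velocity distributions by intersection
# (pointwise product of densities), then renormalize.
v = np.linspace(50.0, 80.0, 6001)             # km/s grid
dv = v[1] - v[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p1 = gauss(v, 64.0, 1.5)                      # station 1 estimate (assumed)
p2 = gauss(v, 65.0, 1.0)                      # station 2 estimate (assumed)

p = p1 * p2                                   # intersection of the two distributions
p /= (p * dv).sum()                           # renormalize to a proper density
v_hat = (v * p * dv).sum()                    # combined velocity estimate
sigma_hat = np.sqrt(((v - v_hat) ** 2 * p * dv).sum())
```

    For Gaussian inputs this reduces to precision-weighted averaging, and the combined dispersion is smaller than either input dispersion, which is the gain the abstract anticipates.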

  20. Utility usage forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosking, Jonathan R. M.; Natarajan, Ramesh

    The computer creates a utility demand forecast model over weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and determined utility parameter values with the corresponding weather parameter values.
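
    A minimal sketch of this idea under invented data: demand observations exist for one temperature range, a hotter range is empty, and a model fitted to the available pairs supplies values for the gap. The linear form and all numbers are assumptions for illustration, not part of the patent:

```python
import numpy as np

rng = np.random.default_rng(2)
temp = rng.uniform(0.0, 25.0, 200)                       # weather parameter values
demand = 50.0 + 1.8 * temp + rng.normal(0.0, 2.0, 200)   # utility parameter values

slope, intercept = np.polyfit(temp, demand, 1)           # correlate demand with weather
gap = np.linspace(25.0, 35.0, 5)                         # range lacking observations
estimated = slope * gap + intercept                      # determined utility values
```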

  1. Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.

    1980-01-01

    The problem of selecting the optimal algorithm of filtration and the optimal composition of the measurements is examined assuming that the precise values of the mathematical expectancy and the matrix of covariation of errors are unknown. It is demonstrated that the optimal algorithm of filtration may be utilized for making some parameters more precise (for example, the parameters of the gravitational fields) after preliminary determination of the elements of the orbit by a simpler method of processing (for example, the method of least squares).

  2. Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm.

    PubMed

    Tan, Xia; Ji, Zhong; Zhang, Yadan

    2018-04-25

    Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
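
    The mean impact value (MIV) screening step can be sketched as follows; a simple linear function stands in for the trained BP network, and the features and weights are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))                 # 3 candidate pulse-wave features

def net(X):                                   # stand-in for a trained network
    return 3.0 * X[:, 0] + 0.2 * X[:, 1] + 0.0 * X[:, 2]

miv = np.empty(X.shape[1])
for j in range(X.shape[1]):
    up, down = X.copy(), X.copy()
    up[:, j] *= 1.1                           # perturb feature j by +10%
    down[:, j] *= 0.9                         # and by -10%
    miv[j] = np.mean(np.abs(net(up) - net(down)))

ranking = np.argsort(miv)[::-1]               # features by influence on the output
```

    Features with the largest MIV would be kept as inputs to the final model; the rest are dropped before the GA-optimized training stage.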

  3. Effects of complete water fasting and regeneration diet on kidney function, oxidative stress and antioxidants.

    PubMed

    Mojto, V; Gvozdjakova, A; Kucharska, J; Rausova, Z; Vancova, O; Valuch, J

    2018-01-01

    The aim of the study was to observe the influence of 11-day complete water fasting (WF) and a regeneration diet (RD) on renal function, body weight, blood pressure and oxidative stress. Therapeutic WF is considered a healing method. Ten volunteers drank only water for 11 days, followed by RD for the next 11 days. Data on body weight, blood pressure, kidney function, antioxidants, lipid peroxidation, cholesterols, triacylglycerols and selected biochemical parameters were obtained. WF increased uric acid and creatinine and decreased the glomerular filtration rate. After RD, the parameters were comparable to baseline values. Urea was not affected. Lipid peroxidation (TBARS) decreased and remained stable after RD. Fasting decreased α-tocopherol and increased γ-tocopherol; no significant changes were found after RD. Coenzyme Q10 decreased after RD. HDL-cholesterol decreased during WF. Total and LDL-cholesterol decreased after RD. Other biochemical parameters were within the range of reference values. The effect of complete fasting on kidney function was manifested by hyperuricemia. Renal function was slightly decreased but remained within the reference values. After RD, it returned to baseline values. The positive effect of complete water fasting was the reduction of oxidative stress, body weight and blood pressure (Tab. 3, Ref. 25).

  4. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  5. Semi-experimental equilibrium structure of pyrazinamide from gas-phase electron diffraction. How much experimental is it?

    NASA Astrophysics Data System (ADS)

    Tikhonov, Denis S.; Vishnevskiy, Yury V.; Rykov, Anatolii N.; Grikina, Olga E.; Khaikin, Leonid S.

    2017-03-01

    A semi-experimental equilibrium structure of free molecules of pyrazinamide has been determined for the first time using the gas electron diffraction method. The refinement was carried out using regularization of the geometry by calculated quantum chemical parameters. The extent to which the final structure is experimental is discussed, and a numerical approach for estimating the amount of experimental information in the refined parameters is suggested. The following values of selected internuclear distances were determined (values in Å, with 1σ in parentheses): re(Cpyrazine-Cpyrazine)av = 1.397(2), re(Npyrazine-Cpyrazine)av = 1.332(3), re(Cpyrazine-Camide) = 1.493(1), re(Namide-Camide) = 1.335(2), re(Oamide-Camide) = 1.219(1). The given standard deviations represent purely experimental uncertainties without the influence of regularization.

  6. Body growth and reproduction of individuals of the sciaenid fish Stellifer rastrifer in a shallow tropical bight: A cautionary tale for assumptions regarding population parameters

    NASA Astrophysics Data System (ADS)

    Pombo, Maíra; Denadai, Márcia Regina; Turra, Alexander

    2013-05-01

    Knowledge of population parameters and the ability to predict their responses to environmental changes are useful tools to aid in the appropriate management and conservation of natural resources. Samples of the sciaenid fish Stellifer rastrifer were taken from August 2003 through October 2004 in shallow areas of Caraguatatuba Bight, southeastern Brazil. The results showed a consistent presence of length-frequency classes throughout the year and low values of the gonadosomatic index of this species, indicating that the area is not used for spawning or residence of adults, but rather shelters individuals in late stages of development. The results may serve as a caveat for assessments of transitional areas such as the present one, the nursery function of which is neglected compared to estuaries and mangroves. The danger of mismanaging these areas by not considering their peculiarities is emphasized by using these data as a case study for the development of some broadly used population-parameter analyses. The individuals' body-growth parameters from the von Bertalanffy model were estimated based on the most common approaches, and the best values obtained from traditional quantification methods of selection were very prone to bias. The low gonadosomatic index (GSI) estimated during the period was an important factor motivating the selection of more reliable body-growth parameters (L∞ = 20.9, K = 0.37 and Z = 2.81), which were estimated by assuming the existence of spatial segregation by size. The data obtained suggest that the estimated mortality rate included a high rate of migration of older individuals to deeper areas, where we assume that they complete their development.
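
    The body-growth parameters the authors settled on (L∞ = 20.9, K = 0.37) define a von Bertalanffy length-at-age curve, which can be evaluated directly; t0 is assumed to be zero here, since the abstract does not quote it:

```python
import numpy as np

# Von Bertalanffy growth: L(t) = L_inf * (1 - exp(-K * (t - t0))).
L_inf, K, t0 = 20.9, 0.37, 0.0   # L_inf and K from the abstract; t0 assumed

def vb_length(t):
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

ages = np.arange(0, 11)          # years
lengths = vb_length(ages)        # length rises monotonically toward L_inf
```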

  7. Hydrologic Model Selection using Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Marshall, L.; Sharma, A.; Nott, D.

    2002-12-01

    Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm which has characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each such class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance, as well as the simplicity of the MCMC sampling framework that has the ability to bring such approaches within the reach of the applied hydrological community.
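
    The flavor of the MCMC sampling involved can be conveyed with a minimal random-walk Metropolis sampler. The one-dimensional Gaussian "posterior" below is a toy stand-in, not the AWBM posterior, and the study's own algorithm is adaptive rather than this fixed-scale walk:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):                          # hypothetical unnormalized log-posterior
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

theta, chain = 0.0, []
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.5)  # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal                      # accept with Metropolis probability
    chain.append(theta)

chain = np.array(chain[5000:])                # discard burn-in
```

    The retained samples summarize the posterior: their mean and spread estimate the parameter value and its uncertainty, which is the basis for the model comparisons the abstract describes.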

  8. X-Ray Morphological Analysis of the Planck ESZ Clusters

    NASA Astrophysics Data System (ADS)

    Lovisari, Lorenzo; Forman, William R.; Jones, Christine; Ettori, Stefano; Andrade-Santos, Felipe; Arnaud, Monique; Démoclès, Jessica; Pratt, Gabriel W.; Randall, Scott; Kraft, Ralph

    2017-09-01

    X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are useful for studying how clusters form and grow and for testing physical models, may potentially complicate cosmological studies because the cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev-Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev-Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best at distinguishing between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found no dependence of the cluster dynamical state on mass. By comparing our results with those obtained with REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.

  9. X-Ray Morphological Analysis of the Planck ESZ Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovisari, Lorenzo; Forman, William R.; Jones, Christine

    2017-09-01

    X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are well suited to studying how clusters form and grow and to testing physical models, may complicate cosmological studies because the cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev–Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev–Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best at distinguishing between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found no dependence of the cluster dynamical state on mass. By comparing our results with those obtained for the REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.

  10. A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.

    PubMed

    Savitha, R; Suresh, S; Sundararajan, N

    2012-08-01

    This paper presents a meta-cognitive learning algorithm for a single-hidden-layer complex-valued neural network called the "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN) with a fully complex-valued Gaussian-like activation function (sech) in the hidden layer and an exponential activation function in the output layer forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of the cognitive component are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicitly minimizing magnitude and phase errors in the logarithmic error function is converted to a system of linear equations, and the output parameters of FCRN are computed analytically. McFCRN starts with zero hidden neurons and builds up the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for FCRN to acquire knowledge from the training data and also adapts the learning strategies to emulate the best human learning components. Performance studies on function approximation and real-valued classification problems show that the proposed McFCRN performs better than existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Phenology of Scramble Polygyny in a Wild Population of Chrysomelid Beetles: The Opportunity for and the Strength of Sexual Selection

    PubMed Central

    Baena, Martha Lucía; Macías-Ordóñez, Rogelio

    2012-01-01

    Recent debate has highlighted the importance of estimating both the strength of sexual selection on phenotypic traits and the opportunity for sexual selection. We describe seasonal fluctuations in the mating dynamics of Leptinotarsa undecimlineata (Coleoptera: Chrysomelidae). We compared several estimates of the opportunity for, and the strength of, sexual selection and male precopulatory competition over the reproductive season. First, using a null model, we suggest that the ratio between observed values of the opportunity for sexual selection and their expected value under random mating results in unbiased estimates of the actual nonrandom mating behavior of the population. Second, we found that estimates for the whole reproductive season often misrepresent the actual value at any given time period. Third, mating differentials on male size and mobility, the frequency of male fighting and three estimates of the opportunity for sexual selection provide contrasting but complementary information. More intense sexual selection associated with male mobility, but not with male size, was observed in periods with high opportunity for sexual selection and high frequency of male fights. Fourth, based on parameters of spatial and temporal aggregation of female receptivity, we describe the mating system of L. undecimlineata as a scramble mating polygyny in which the opportunity for sexual selection varies widely throughout the season, but the strength of sexual selection on male size remains fairly weak, while male mobility inversely covaries with mating success. We suggest that different estimates of the opportunity for, and intensity of, sexual selection should be applied in order to discriminate how different behavioral and demographic factors shape the reproductive dynamics of populations. PMID:22761675

  12. 40 CFR 63.9634 - How do I demonstrate continuous compliance with the emission limitations that apply to me?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... selected for initial performance testing and defined within a group of similar emission units in accordance... similar air pollution control device applied to each similar emission unit within a defined group using... emission units within group “k”; Pi = Daily average parametric monitoring parameter value corresponding to...

  13. 40 CFR 63.9634 - How do I demonstrate continuous compliance with the emission limitations that apply to me?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... selected for initial performance testing and defined within a group of similar emission units in accordance... similar air pollution control device applied to each similar emission unit within a defined group using... emission units within group “k”; Pi = Daily average parametric monitoring parameter value corresponding to...

  14. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials which are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece for the experiment. Stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi’s L16 orthogonal array design approach. Three levels were selected for each process parameter to conduct the experiment. In all experiments, the clockwise cut direction was followed. The results obtained through measurement were further analyzed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA analysis reveals the effect of each input process parameter upon the linear dimension along the X axis. The work yields the optimal settings of process parameter values for the linear dimension along the X axis, and the results of the investigation clearly show that a specific range of input process parameters achieves improved machinability.
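The analysis-of-means step in a Taguchi study reduces to averaging the measured response over the runs at each level of a factor and picking the level with the best mean. A generic sketch (the factor levels and responses below are illustrative, not the study's data):

```python
from collections import defaultdict

def main_effects(levels, responses):
    """Analysis of means (ANOM): mean response at each level of one factor."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lv, y in zip(levels, responses):
        sums[lv] += y
        counts[lv] += 1
    return {lv: sums[lv] / counts[lv] for lv in sums}

# Hypothetical runs: a factor tested at levels 1 and 2, smaller response better.
effects = main_effects([1, 1, 2, 2], [10.0, 12.0, 20.0, 22.0])
best_level = min(effects, key=effects.get)
```

In an L16 array this would be repeated per factor over the appropriate column of the design matrix.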

  15. Apparatus Characterizes Transient Voltages in Real Time

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro

    2005-01-01

    The figure shows a prototype of a relatively inexpensive electronic monitoring apparatus that measures and records selected parameters of lightning-induced transient voltages on communication and power cables. The selected parameters, listed below, are those most relevant to the ability of lightning-induced transients to damage electronic equipment. This apparatus bridges a gap between traditional transient-voltage recorders that record complete waveforms and those that record only peak values: by recording the most relevant parameters, and only those parameters, this apparatus yields more useful information than does a traditional peak-value (only) recorder while imposing much smaller data-storage and data-transmission burdens than does a traditional complete-waveform recorder. Also, relative to a complete-waveform recorder, this apparatus is more reliable and can be built at lower cost because it contains fewer electronic components. The transients generated by sources other than lightning tend to have frequency components well below 1 MHz. Most commercial transient recorders can detect and record such transients, but cannot respond rapidly enough to record lightning-induced transient voltage peaks, which can rise from 10 to 90 percent of maximum amplitude in a fraction of a microsecond. Moreover, commercial transient recorders cannot rearm themselves rapidly enough to respond to the multiple transients that occur within milliseconds of each other in some lightning strikes. One transient recorder, designed for Kennedy Space Center earlier [Fast Transient-Voltage Recorder (KSC-11991), NASA Tech Briefs, Vol. 23, No. 10, page 6a (October 1999)], is capable of sampling transient voltages at peak values up to 50 V in four channels at a rate of 20 MHz. That recorder contains a trigger circuit that continuously compares the amplitudes of the signals on four channels to a preset triggering threshold.
When a trigger signal is received, a volatile memory is filled with data for a total time of 200 ms. After the data are transferred to nonvolatile memory, the recorder rearms itself within 400 ms to enable recording of subsequent transients. Unfortunately, the recorded data must be retrieved through a serial communication link. Depending on the amount of data recorded, the memory can be filled before retrieval is completed. Although large amounts of data are recorded and retrieved, only a small part of the information (the selected parameters) is usually required. The present transient-voltage recorder provides the required information, without incurring the overhead associated with the recording, storage, and retrieval of complete transient-waveform data. In operation, this apparatus processes transient voltage waveforms in real time to extract and record the selected parameters. An analog-to-digital converter that operates at a speed of as much as 100 mega-samples per second is used to sample a transient waveform. A real-time comparator and peak detector are implemented by use of fast field-programmable gate arrays.

  16. SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesteruk, M; Riesterer, O; Veit-Haibach, P

    2016-06-15

    Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) was used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for three perfusion maps: blood volume, blood flow and mean transit time. The variability of texture parameters was tested with respect to non-standardizable perfusion computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). The parameter with the highest ICC in the correlated group of parameters (inter-parameter Spearman correlations) was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer regarding potentially non-standardizable factors after the correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p=0.03) with concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p<0.1). The new model showed a higher concordance index compared to the tumor volume model (CI=0.68). Conclusion: The preselection of variables in the robustness study allowed building a predictive radiomics-based model of tumor control in HNC despite a small patient cohort.
This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich and by a research grant from Merck (Schweiz) AG.
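The robustness screening described above hinges on the intraclass correlation coefficient. A minimal one-way ICC(1,1) computation (a sketch; the study may well have used a different ICC form) looks like:

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1): consistency of k repeated
    measurements per subject. data: one row of k values per subject."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between-subject
    msw = sum((x - m) ** 2 for row, m in zip(data, row_means)
              for x in row) / (n * (k - 1))                         # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

# A feature re-measured under two perturbed computation settings: nearly
# identical values give an ICC near 1 (stable); scattered values, a low ICC.
stable = icc_oneway([[1.0, 1.01], [2.0, 2.02], [3.0, 2.98], [4.0, 4.01]])
noisy = icc_oneway([[1.0, 3.0], [2.0, 0.5], [3.0, 1.0], [4.0, 2.0]])
```

Features with a high ICC across the perturbations would survive the preselection.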

  17. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
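Non-dominated sorting, the core of NSGA-II, reduces to a pairwise dominance test. A minimal sketch of extracting the Pareto front from a set of objective vectors (minimisation):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Two objectives to minimise; (3, 3) and (4, 4) are dominated by (2, 2).
front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

The regionalization step described above would then rank the surviving (non-dominated) parameter sets by a weighted closeness measure to neighboring basins.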

  18. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Seismic hazard along a crude oil pipeline in the event of an 1811-1812 type New Madrid earthquake. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H.H.M.; Chen, C.H.S.

    1990-04-16

    An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is examined in the report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically-based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.

  20. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    NASA Astrophysics Data System (ADS)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, collections have grown large, and there is a need to create lists of music that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values, and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show that the Differential Evolution approach with the optimized parameter values produces better results.
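A compact DE/rand/1/bin implementation, run on a standard test function (the sphere), sketches the kind of parameter study described: the control parameters `F` (mutation factor) and `CR` (crossover rate) are exactly what such a study would sweep.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=1):
    """Minimal DE/rand/1/bin sketch for minimising f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Sphere function: global minimum 0 at the origin.
x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5.0, 5.0)] * 3)
```

For the playlist problem itself, `f` would be replaced by a playlist-quality objective over user constraints.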

  1. Comparative Study of Light Sources for Household

    NASA Astrophysics Data System (ADS)

    Pawlak, Andrzej; Zalesińska, Małgorzata

    2017-03-01

    The article describes test results used to determine and evaluate the basic photometric, colorimetric and electric parameters of selected, widely available light sources that are equivalent to a traditional 60 W incandescent light bulb. Overall, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. In general, it was concluded that in most cases (branded products, in particular) the measured and calculated parameters differ from the values declared by manufacturers only to a small degree. LED sources prove to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable with the price of compact fluorescent lamps or, in some instances, even lower.

  2. Reference values for biochemical parameters in blood serum of young and adult alpacas (Vicugna pacos).

    PubMed

    Husakova, T; Pavlata, L; Pechova, A; Hauptmanova, K; Pitropovska, E; Tichy, L

    2014-09-01

    The aim of this study was to establish reference intervals for biochemical parameters in the blood of alpacas on the basis of a large population of clinically healthy animals, and to determine the influence of sex, age and season on nitrogen and lipid metabolites, enzymes, electrolytes, vitamins and minerals in the blood of alpacas. Blood samples were collected from 311 alpacas (61 males and 201 females >6 months of age, and 49 crias (21 males and 28 females) ⩽6 months of age). The selected farms were located in Central Europe (Czech Republic and Germany). We determined 24 biochemical parameters from blood serum. We compared the results by sex of the animals and, for the older group, also with regard to season, i.e. the feeding period. We found no highly significant difference (P<0.01) between males and females with the exception of γ-glutamyl transferase (GGT), alkaline phosphatase (ALP) and cholesterol. We found 15 significantly different parameters between the group of crias ⩽6 months of age and the older alpacas. Based on our findings we suggest using different reference intervals for most parameters (especially ALP, cholesterol, total protein, globulin, non-esterified fatty acids (NEFA), GGT and phosphorus) for the two above-mentioned age groups. Another important finding is the differences in some parameters in the older group of alpacas between the summer and winter feeding periods. Animals in the summer feeding period have higher values of parameters related to fat mobilization (β-hydroxybutyrate, NEFA) and liver metabolism (bilirubin, alanine aminotransferase). The winter period, with increased feeding of supplements containing higher amounts of fat, vitamins and minerals, is characterized by increased values of cholesterol, triglycerides, vitamins A and E, and some minerals (K, Ca, Mg and Cl) in blood serum.
Clinical laboratory diagnosis of metabolic disturbances may be improved with use of age-based reference values and with consideration of seasonal differences.
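Parametric reference intervals of the kind reported here are commonly computed as mean ± 1.96 SD over the healthy sample. A simplified sketch (the study's exact statistical procedure is not stated in the abstract, and nonparametric percentile methods are often preferred in practice):

```python
import statistics

def reference_interval(values, z=1.96):
    """Parametric 95% reference interval: mean ± z·SD of a healthy sample.
    Assumes the analyte is approximately normally distributed."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return (m - z * s, m + z * s)

# Illustrative values only (not the alpaca data from the study).
low, high = reference_interval([10.0, 11.0, 12.0, 13.0, 14.0])
```

Age- or season-specific intervals, as suggested above, amount to running this per subgroup.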

  3. Free Dendritic Growth of Succinonitrile-Acetone Alloys with Thermosolutal Melt Convection

    NASA Technical Reports Server (NTRS)

    Beckermann, Christoph; Li, Ben Q.

    2003-01-01

    A stagnant film model of the effects of thermosolutal convection on free dendritic growth of alloys is developed, and its predictions are compared to available earth-based experimental data for succinonitrile-acetone alloys. It is found that the convection model gives excellent agreement with the measured dendrite tip velocities and radii for low solute concentrations. However, at higher solute concentrations the present predictions show some deviations from the measured data, and the measured (thermal) Peclet numbers tend to fall even below the predictions from diffusion theory. Furthermore, the measured selection parameter (sigma*) is significantly above the expected value of 0.02 and exhibits strong scatter. It is shown that convection is not responsible for these discrepancies. Some of the deviations between the predicted and measured data at higher supercoolings could be caused by measurement difficulties. The systematic disagreement in the selection parameter for higher solute concentrations and all supercoolings examined indicates that the theory for the selection of the dendrite tip operating state in alloys may need to be reexamined.

  4. A meta-analysis of inferior thyroid artery variations in different human ethnic groups and their clinical implications.

    PubMed

    Toni, Roberto; Casa, Claudia Della; Castorina, Sergio; Roti, Elio; Ceda, Gianpaolo; Valenti, Giorgio

    2005-09-01

    We have recently found ethnic differences in superior thyroid artery (STA) variational anatomy. Therefore, we now focus on the inferior thyroid artery (ITA). In particular, we analyze whether the presence, numerical variations and site of origin of the ITA are influenced by ethnic group and gender, whether and which neck side has the largest arterial caliber, whether differences occur between the presence of the ITA and the STA, to what extent a non-selective thyroid angiography is effective in visualizing the ITA, also in comparison to the STA, and what clinical value this information may have in selected pathologies of the thyroid, parathyroid and larynx. A meta-analysis has been performed, including 33 library- and Medline-selected publications on Caucasoids (European and non-European) and East Asians, and a set of original data on European Caucasoids. A total of 6285 Caucasoid and 847 East Asian items, comprising half bodies and arteries, were analyzed. After testing the homogeneity of the available data sources in relation to the anatomical variables under study, we calculated a cumulative value for each selected anatomical parameter and evaluated differences using non-parametric statistics. The effectiveness of non-selective thyroid angiography was determined using sensitivity, specificity, and positive and negative predictive values. The ITA was more frequently absent in East Asians than in Caucasoids, and respectively either more or less frequently arising from the thyrocervical and subclavian arteries in East Asians versus Caucasoids. In contrast, the ITA was less frequently present than the STA in both Caucasoids and East Asians. In addition, the ITA was more frequently present on the right than on the left side in both ethnic groups, but no neck side predominated in size of arterial caliber in European Caucasoids.
Finally, the ITA was more frequently present in East Asian males than females, and the effectiveness of a non-selective thyroid angiography showed higher numbers for ITA than STA in Caucasoids. Statistically significant variations occur in some ITA parameters between Caucasoids and East Asians, and in its presence with respect to STA, within each ethnic group. These differences, together with a sexual dimorphic presence of ITA in East Asians and high effectiveness of its visualization by non-selective angiography in European Caucasoids, may represent an evidence-based supply of anatomical information for analysis in selected pathologies of the thyroid, parathyroid and larynx.

  5. Analysis of Indoor Environment in Classroom Based on Hygienic Requirements

    NASA Astrophysics Data System (ADS)

    Javorček, Miroslav; Sternová, Zuzana

    2016-06-01

    The article contains the analysis of experimental ventilation measurements in selected classrooms of the Elementary School Štrba. A mathematical model of a selected classroom was prepared according to in-situ measurements, and the air exchange was calculated. Interior air temperature and quality influence the students' comfort. The evaluated data were compared to the requirements of the standard (STN EN 15251, 2008) applicable to classroom indoor environments during lectures, highlighting the difference between the required ambiance quality and the actually measured values. CO2 concentration is one of the parameters indicating indoor environment quality.

  6. Selection of optimum ionic liquid solvents for flavonoid and phenolic acids extraction

    NASA Astrophysics Data System (ADS)

    Rahman, N. R. A.; Yunus, N. A.; Mustaffa, A. A.

    2017-06-01

    Phytochemicals are important in improving human health through their functions as antioxidants, antimicrobials and anticancer agents. However, the quality of a phytochemical extract relies on the efficiency of the extraction process. Ionic liquids (ILs) have become a research phenomenon as extraction solvents due to their unique properties: a practically unlimited range of ILs, non-volatility, strong solvation power and tunable polarity. In phytochemical extraction, determining the solvent that can extract the highest yield of solute (phytochemical) is very important. Therefore, this study is conducted to determine the best IL solvent to extract flavonoids and phenolic acids through a property prediction modeling approach. ILs were selected from imidazolium-based cations with alkyl chains ranging from ethyl to octyl, paired with the anions Br, Cl, [PF6], [BF4], [H2PO4], [SO4], [CF3SO3], [TF2N] and [HSO4]. This work is divided into several stages. In Stage 1, a Microsoft Excel-based database was established containing available solubility parameter values of phytochemicals and ILs, including prediction models and their parameters. The database also includes available solubility data of phytochemicals in ILs, and activity coefficient models for solid-liquid phase equilibrium (SLE) calculations. In Stage 2, the solubility parameter values of the flavonoids (e.g. kaempferol, quercetin and myricetin) and phenolic acids (e.g. gallic acid and caffeic acid) are determined either directly from the database or predicted using the Stefanis and Marrero-Gani group contribution model for the phytochemicals; a cation-anion contribution model is used for the ILs. In Stage 3, the amount of phytochemicals extracted is determined using the SLE relationship involving the UNIFAC-IL model; missing UNIFAC-IL parameters are regressed using available solubility data. 
Finally, in Stage 4, the solvent candidates are ranked and five ILs ([OMIM][TF2N], [HeMIM][TF2N], [HMIM][TF2N], [HeMIM][CF3SO3] and [HMIM][CF3SO3]) were identified and selected.
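The SLE relationship in Stage 3 builds on the van't Hoff expression for ideal solid solubility, which an activity-coefficient model such as UNIFAC-IL then corrects. A sketch of the ideal (activity coefficient = 1) case, with made-up property values for a hypothetical solute:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def ideal_solubility(h_fus, t_m, t):
    """Ideal SLE mole-fraction solubility: ln x = -(ΔHfus/R)(1/T - 1/Tm).
    A real screening would further divide by the activity coefficient
    from a model such as UNIFAC-IL."""
    return math.exp(-(h_fus / R) * (1.0 / t - 1.0 / t_m))

# Hypothetical solute: ΔHfus = 20 kJ/mol, melting point 400 K, solvent at 300 K.
x = ideal_solubility(20000.0, 400.0, 300.0)
```

At the melting point the expression correctly returns a mole fraction of 1; below it, solubility drops, and solvent ranking comes from how the activity coefficient shifts this baseline.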

  7. Construction of optimal 3-node plate bending triangles by templates

    NASA Astrophysics Data System (ADS)

    Felippa, C. A.; Militello, C.

    A finite element template is a parametrized algebraic form that reduces to specific finite elements by setting numerical values of the free parameters. The present study concerns Kirchhoff Plate-Bending Triangles (KPT) with 3 nodes and 9 degrees of freedom. A 37-parameter template is constructed using the Assumed Natural Deviatoric Strain (ANDES) formulation. Specializations of this template include well-known elements such as DKT and HCT. The question addressed here is: can these parameters be selected to produce high-performance elements? The study is carried out by staged application of constraints on the free parameters. The first stage produces element families satisfying invariance and aspect-ratio insensitivity conditions. Application of energy balance constraints produces specific elements. The performance of such elements in benchmark tests is presently under study.

  8. Estimating skin blood saturation by selecting a subset of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus

    2015-03-01

    Skin blood haemoglobin saturation can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for saturation estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for the epidermal thickness and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected-intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. The skin model parameters, including absorbers, are: μ's(λ), epidermal thickness, haemoglobin saturation, tissue fraction blood and tissue fraction melanin. The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with an increasing number of free parameters are evaluated: A (saturation and blood fraction free), B (saturation, blood and melanin fractions free) and C (all parameters free). Fourteen wavelength candidates are selected by analysing the spectral sensitivity, maximizing it with respect to the haemoglobin saturation while minimizing it with respect to the blood fraction. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating the saturation for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimated the saturation accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths is increased, the model complexity needs to be increased to avoid poor estimates.

  9. Automatic range selector

    DOEpatents

    McNeilly, Clyde E.

    1977-01-04

    A device is provided for automatically selecting from a plurality of ranges of a scale of values to which a meter may be made responsive, that range which encompasses the value of an unknown parameter. A meter relay indicates whether the unknown is of greater or lesser value than the range to which the meter is then responsive. The rotatable part of a stepping relay is rotated in one direction or the other in response to the indication from the meter relay. Various positions of the rotatable part are associated with particular scales. Switching means are sensitive to the position of the rotatable part to couple the associated range to the meter.
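    The stepping-relay behaviour described in the patent can be sketched as a loop that walks up through candidate ranges until one encompasses the unknown value. The range limits below are illustrative, not taken from the patent.

```python
# Sketch of the auto-ranging logic: step through a fixed set of full-scale
# ranges until one encompasses the unknown value, emulating the stepping
# relay rotating one position at a time. Range values are illustrative.

RANGES = [1.0, 10.0, 100.0, 1000.0]  # full-scale values, ascending

def select_range(value, ranges=RANGES):
    """Return the smallest range whose full scale covers `value`."""
    pos = 0
    steps = 0
    while steps < len(ranges):
        if value <= ranges[pos]:             # meter relay: value within range
            return ranges[pos]
        pos = min(pos + 1, len(ranges) - 1)  # stepping relay rotates up
        steps += 1
    return ranges[-1]                        # clamp at the highest range

print(select_range(7.3))    # -> 10.0
print(select_range(420.0))  # -> 1000.0
```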

  10. Choose, rate or squeeze: Comparison of economic value functions elicited by different behavioral tasks

    PubMed Central

    Pessiglione, Mathias

    2017-01-01

    A standard view in neuroeconomics is that to make a choice, an agent first assigns subjective values to available options, and then compares them to select the best. In choice tasks, these cardinal values are typically inferred from the preference expressed by subjects between options presented in pairs. Alternatively, cardinal values can be directly elicited by asking subjects to place a cursor on an analog scale (rating task) or to exert a force on a power grip (effort task). These tasks can vary in many respects: they can notably be more or less costly and consequential. Here, we compared the value functions elicited by choice, rating and effort tasks on options composed of two monetary amounts: one for the subject (gain) and one for a charity (donation). Bayesian model selection showed that despite important differences between the three tasks, they all elicited the same value function, with similar weighting of gain and donation, but variable concavity. Moreover, value functions elicited by the different tasks could predict choices with equivalent accuracy. Our finding therefore suggests that comparable value functions can account for various motivated behaviors, beyond economic choice. Nevertheless, we report slight differences in the computational efficiency of parameter estimation that may guide the design of future studies. PMID:29161252

  11. Anticipatory Monitoring and Control of Complex Systems using a Fuzzy based Fusion of Support Vector Regressors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miltiadis Alamaniotis; Vivek Agarwal

    This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of predicting system-critical parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. In particular, a set of support vector regressors (SVRs) is trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then input to a fuzzy-logic-based module, where they are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The outcome of the approach presented in this paper highlights its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
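    The ensemble-plus-fusion idea can be sketched with simple stand-in predictors. The linear extrapolators and the triangular membership function below are illustrative placeholders for the paper's trained SVRs and its fuzzy sets; only the structure (several predictors, fuzzy-weighted fusion into one output) follows the abstract.

```python
# Sketch of the fusion idea: several regressors predict the next value of a
# degradation parameter, and a fuzzy module fuses them into one output.
# The regressors are simple linear extrapolators, stand-ins for trained SVRs.

def make_linear_extrapolator(window):
    """Predictor that extrapolates the mean slope of the last `window` points."""
    def predict(history):
        recent = history[-window:]
        slope = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
        return recent[-1] + slope
    return predict

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_fuse(predictions, lo, hi):
    """Weight each prediction by its membership in a 'plausible' fuzzy set
    centred on the middle of [lo, hi], then take the weighted average."""
    mid = 0.5 * (lo + hi)
    weights = [triangular(p, lo, mid, hi) or 1e-9 for p in predictions]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)

history = [1.0, 1.1, 1.25, 1.35, 1.5]     # observed parameter values
ensemble = [make_linear_extrapolator(w) for w in (2, 3, 5)]
preds = [f(history) for f in ensemble]
fused = fuzzy_fuse(preds, lo=1.0, hi=2.5)
print(preds, round(fused, 3))
```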

  12. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite its importance, an analytical approach for selecting the learning rate is currently lacking, and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that the learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
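    The trade-off the abstract formalises can be demonstrated numerically with a scalar adaptive filter. The plain LMS-style update below is a simplified stand-in for the paper's adaptive Bayesian filters: a larger learning rate shortens convergence time but inflates steady-state error, and vice versa.

```python
# Numerical sketch of the learning-rate trade-off: larger rate -> faster
# convergence but larger steady-state error; smaller rate -> the reverse.

import random

def run_filter(rate, true_value=2.0, noise=0.5, steps=4000, seed=0):
    rng = random.Random(seed)
    est, errors = 0.0, []
    for _ in range(steps):
        obs = true_value + rng.gauss(0.0, noise)  # noisy observation
        est += rate * (obs - est)                 # learning-rate update
        errors.append((est - true_value) ** 2)
    # convergence time: first step with error within 10% of the true value
    conv = next(i for i, e in enumerate(errors) if e < (0.1 * true_value) ** 2)
    steady = sum(errors[steps // 2:]) / (steps - steps // 2)
    return conv, steady

fast = run_filter(rate=0.5)    # converges in a few steps, noisier estimate
slow = run_filter(rate=0.01)   # converges slowly, much smaller steady error
print("fast:", fast, "slow:", slow)
```

    The calibration algorithm in the paper replaces this empirical sweep with explicit functions that predict both quantities from the learning rate.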

  13. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  14. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period.

    PubMed

    Carlyle, Harriet F; Tellam, John H; Parker, Karen E

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na(+), K(+), Ca(2+), and Mg(2+) were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in approximately 1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO(3) and pH values. However, by including partial CO(2) degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO(4), HCO(3), and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this has also been found to be the case in the few other published studies of regional ion exchanging flow.
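    The Gaines-Thomas convention the study relies on can be shown with a worked number. For the Na/Ca exchange half-reaction 2 NaX + Ca2+ <=> CaX2 + 2 Na+, the selectivity coefficient combines equivalent fractions on the exchanger with solution concentrations; the figures below are illustrative, not the paper's measured values, and activity corrections are omitted.

```python
# Worked sketch of a Gaines-Thomas selectivity coefficient for Na/Ca
# exchange, using equivalent fractions (E) on the exchanger and molar
# solution concentrations (activity corrections omitted for brevity).

def gaines_thomas_na_ca(E_ca, E_na, na_molar, ca_molar):
    """K_GT = (E_Ca * [Na+]^2) / (E_Na^2 * [Ca2+])."""
    return (E_ca * na_molar ** 2) / (E_na ** 2 * ca_molar)

# Illustrative numbers: exchanger 60% Ca, 40% Na by equivalents,
# bathed in 0.1 M Na+ and 0.005 M Ca2+.
K = gaines_thomas_na_ca(E_ca=0.6, E_na=0.4, na_molar=0.1, ca_molar=0.005)
print(round(K, 2))  # -> 7.5
```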

  15. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period

    NASA Astrophysics Data System (ADS)

    Carlyle, Harriet F.; Tellam, John H.; Parker, Karen E.

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na+, K+, Ca2+, and Mg2+ were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in ~1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO3 and pH values. However, by including partial CO2 degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO4, HCO3, and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this has also been found to be the case in the few other published studies of regional ion exchanging flow.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaczmarski, Krzysztof; Guiochon, Georges A

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
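    The three isotherm models compared in the study can be written as plain functions of the mobile-phase concentration C. The functional forms below follow the standard Langmuir, bi-Langmuir, and Moreau expressions; the parameter values are illustrative, not the "typical values" used in the study.

```python
# The three adsorption isotherm models compared in the study, as functions
# of mobile-phase concentration C. Parameter values are illustrative.

def langmuir(C, qs, b):
    """q = qs*b*C / (1 + b*C)"""
    return qs * b * C / (1.0 + b * C)

def bi_langmuir(C, qs1, b1, qs2, b2):
    """Sum of two independent Langmuir terms (two adsorption site types)."""
    return langmuir(C, qs1, b1) + langmuir(C, qs2, b2)

def moreau(C, qs, b, I):
    """Moreau model with adsorbate-adsorbate interaction parameter I:
    q = qs * (b*C + I*(b*C)^2) / (1 + 2*b*C + I*(b*C)^2)."""
    x = b * C
    return qs * (x + I * x * x) / (1.0 + 2.0 * x + I * x * x)

C = 0.5
print(round(langmuir(C, 10.0, 2.0), 3))              # -> 5.0
print(round(bi_langmuir(C, 8.0, 1.0, 2.0, 5.0), 3))  # -> 4.095
print(round(moreau(C, 10.0, 2.0, 1.0), 3))           # -> 5.0
```

    Fitting noisy data points to each of these forms and comparing the recovered parameters to the generating model is exactly the error-assessment loop the abstract describes.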

  17. Genetic parameter and breeding value estimation of donkeys' problem-focused coping styles.

    PubMed

    Navas González, Francisco Javier; Jordana Vidal, Jordi; León Jurado, José Manuel; Arando Arbulu, Ander; McLean, Amy Katherine; Delgado Bermejo, Juan Vicente

    2018-05-12

    Donkeys are recognized as therapy and leisure-riding animals. Anecdotal evidence has suggested that more reactive donkeys, or those more easily engaging flight mechanisms, tend to be easier to train than those displaying the natural donkey behaviour of fight. This context brings together the need to quantify such traits and to genetically select donkeys displaying a neutral reaction during training, because of its implications for handler/rider safety and trainability. We analysed the scores for coping style traits from 300 Andalusian donkeys from 2013 to 2015. Three scales were applied to describe donkeys' responses to 12 stimuli. Genetic parameters were estimated using multivariate models with year, sex, husbandry system and stimulus as fixed effects and age as a linear and quadratic covariable. Heritabilities were moderate, 0.18 ± 0.020 to 0.21 ± 0.021. Phenotypic correlations between intensity and mood/emotion or response type were negative and moderate (-0.21 and -0.25, respectively). Genetic correlations between the same variables were negative and moderately high (-0.46 and -0.53, respectively). Phenotypic and genetic correlations between mood/emotion and response type were positive and high (0.92 and 0.95, respectively). Breeding values enable selection methods that could support preservation of this endangered breed and the genetic selection of donkeys for the uses for which they are most suitable. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images.

    PubMed

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D DiFranco, Matthew; Opposits, Gabor; K Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in previous F18-FDG studies. The purpose of the paper is to define basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters to measure textural features. Our predefined requirements were: a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using the commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found as the most attractive ones to characterize the textural properties of metabolically active tumors in FDG PET images. These four parameters included Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of delineated tumor volume (larger than 25-30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians.
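    Two of the four retained measures can be sketched on a toy 2D "tumour" array: the Coefficient of Variation is voxel-based, while Contrast comes from a grey-level co-occurrence matrix (GLCM). The GLCM here uses a single horizontal-neighbour offset for brevity; clinical implementations aggregate several offsets and directions.

```python
# Sketch of two heterogeneity measures: Coefficient of Variation (voxel
# statistics) and GLCM Contrast (texture, one horizontal offset only).

def coeff_of_variation(values):
    """Standard deviation divided by the mean (population statistics)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return (var ** 0.5) / mean

def glcm_contrast(image):
    """Grey-level co-occurrence contrast over horizontal neighbour pairs."""
    pairs = [(row[j], row[j + 1])
             for row in image for j in range(len(row) - 1)]
    return sum((i - j) ** 2 for i, j in pairs) / len(pairs)

uniform = [[3, 3, 3], [3, 3, 3]]    # homogeneous "phantom"
textured = [[1, 5, 1], [5, 1, 5]]   # strongly heterogeneous uptake
print(glcm_contrast(uniform), glcm_contrast(textured))       # 0.0 vs 16.0
print(round(coeff_of_variation([1, 5, 1, 5, 1, 5]), 3))      # -> 0.667
```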

  19. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images

    PubMed Central

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D. DiFranco, Matthew; Opposits, Gabor; K. Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in previous F18-FDG studies. The purpose of the paper is to define basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters to measure textural features. Our predefined requirements were: a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using the commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found as the most attractive ones to characterize the textural properties of metabolically active tumors in FDG PET images. These four parameters included Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of delineated tumor volume (larger than 25–30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians. PMID:27736888

  20. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies lie within ranges similar to the kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks from M1 to M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
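    The source/site trade-off can be made concrete with a forward model: an observed amplitude spectrum is commonly modelled as a Brune omega-square source spectrum multiplied by an exp(-pi*kappa*f) site attenuation term, so corner frequency and kappa can partly mimic each other. Parameter values below are illustrative.

```python
# Forward model behind the trade-off: (Brune source) x (kappa attenuation).

import math

def observed_spectrum(f, omega0, fc, kappa):
    """Brune omega-square source times exp(-pi*kappa*f) site attenuation."""
    source = omega0 / (1.0 + (f / fc) ** 2)
    site = math.exp(-math.pi * kappa * f)
    return source * site

freqs = [1.0, 5.0, 10.0, 20.0, 40.0]
a = [observed_spectrum(f, omega0=1.0, fc=8.0, kappa=0.02) for f in freqs]
b = [observed_spectrum(f, omega0=1.0, fc=12.0, kappa=0.035) for f in freqs]
# Two different (fc, kappa) pairs produce comparably shaped high-frequency
# decay -- the ambiguity the stacking approach is designed to break.
for f, x, y in zip(freqs, a, b):
    print(f, round(x, 4), round(y, 4))
```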

  1. Stabilizing Selection, Purifying Selection, and Mutational Bias in Finite Populations

    PubMed Central

    Charlesworth, Brian

    2013-01-01

    Genomic traits such as codon usage and the lengths of noncoding sequences may be subject to stabilizing selection rather than purifying selection. Mutations affecting these traits are often biased in one direction. To investigate the potential role of stabilizing selection on genomic traits, the effects of mutational bias on the equilibrium value of a trait under stabilizing selection in a finite population were investigated, using two different mutational models. Numerical results were generated using a matrix method for calculating the probability distribution of variant frequencies at sites affecting the trait, as well as by Monte Carlo simulations. Analytical approximations were also derived, which provided useful insights into the numerical results. A novel conclusion is that the scaled intensity of selection acting on individual variants is nearly independent of the effective population size over a wide range of parameter space and is strongly determined by the logarithm of the mutational bias parameter. This is true even when there is a very small departure of the mean from the optimum, as is usually the case. This implies that studies of the frequency spectra of DNA sequence variants may be unable to distinguish between stabilizing and purifying selection. A similar investigation of purifying selection against deleterious mutations was also carried out. Contrary to previous suggestions, the scaled intensity of purifying selection with synergistic fitness effects is sensitive to population size, which is inconsistent with the general lack of sensitivity of codon usage to effective population size. PMID:23709636

  2. [Non-destructive detection research for hollow heart of potato based on semi-transmission hyperspectral imaging and SVM].

    PubMed

    Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong

    2015-01-01

    The quality of potato is directly related to its edible value and industrial value. Hollow heart of potato, a physiological disorder occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral images, a semi-transmission hyperspectral image is clearer and contains information on the internal quality of agricultural products. In this study, 224 potato samples (149 normal and 75 hollow) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1 040 nm) of the potato samples; the average spectra of regions of interest were then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition rate on the test set was only 87.5%. To simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projection algorithm (SPA) were used to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using these 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and grid search, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behaviour of fish swarms, was found to yield the optimal model parameters (c=10.6591, g=0.3497), and a recognition accuracy of 100% was obtained with the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
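    The hyperparameter search stage (penalty c, kernel width g) can be sketched generically. The classifier is abstracted behind a scoring function; `toy_accuracy` is a hypothetical smooth accuracy surface peaking near the magnitudes reported in the abstract, not the study's cross-validation result, and real use would call an SVM library instead.

```python
# Sketch of grid search over SVM hyperparameters (c, g), with the
# classifier abstracted behind an illustrative accuracy function.

import math

def grid_search(score, c_grid, g_grid):
    """Return (best_c, best_g, best_score) over the Cartesian grid."""
    best = (None, None, -math.inf)
    for c in c_grid:
        for g in g_grid:
            s = score(c, g)
            if s > best[2]:
                best = (c, g, s)
    return best

# Hypothetical accuracy surface peaking near c=10, g=0.35.
def toy_accuracy(c, g):
    return 1.0 - 0.01 * ((math.log10(c) - 1.0) ** 2 + (g - 0.35) ** 2)

c_grid = [0.1, 1.0, 10.0, 100.0]
g_grid = [0.05, 0.35, 0.65, 0.95]
print(grid_search(toy_accuracy, c_grid, g_grid))  # best near c=10, g=0.35
```

    AFSA and GA replace the exhaustive grid with population-based search of the same (c, g) surface, which is why they can find better optima at comparable cost.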

  3. Space Shuttle Main Engine performance analysis

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1993-01-01

    For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' 
By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
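    The reconciliation idea, adjusting measured values as little as possible while enforcing physical balances within uncertainty, can be sketched as a penalized least-squares problem. The single mass balance, plain gradient descent, and all numbers below are illustrative simplifications of the reconciler's mixed penalty method.

```python
# Sketch of data reconciliation: minimise the uncertainty-weighted departure
# from measurements plus a penalty on violating the balance m1 + m2 = m3.

def reconcile(meas, sigma, penalty=100.0, iters=2000, step=1e-3):
    """Minimise sum(((x-meas)/sigma)^2) + penalty*(x1+x2-x3)^2
    by plain gradient descent (a stand-in for the mixed penalty method)."""
    x = list(meas)
    for _ in range(iters):
        r = x[0] + x[1] - x[2]  # balance residual
        grad = [2.0 * (x[i] - meas[i]) / sigma[i] ** 2 for i in range(3)]
        grad[0] += 2.0 * penalty * r
        grad[1] += 2.0 * penalty * r
        grad[2] -= 2.0 * penalty * r
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

meas = [10.0, 5.2, 15.9]   # flows violating m1 + m2 = m3 by 0.7
sigma = [0.2, 0.2, 0.2]    # measurement uncertainties
x = reconcile(meas, sigma)
print([round(v, 3) for v in x], round(x[0] + x[1] - x[2], 4))
```

    A finite penalty enforces the balance only softly; the reconciler additionally constrains the adjusted values to stay within measurement uncertainty limits.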

  4. Advanced Electrocardiography Can Identify Occult Cardiomyopathy in Doberman Pinschers

    NASA Technical Reports Server (NTRS)

    Spiljak, M.; Petric, A. Domanjko; Wilberg, M.; Olsen, L. H.; Stepancic, A.; Schlegel, T. T.; Starc, V.

    2011-01-01

    Recently, multiple advanced resting electrocardiographic (A-ECG) techniques have improved the diagnostic value of short-duration ECG in detection of dilated cardiomyopathy (DCM) in humans. This study investigated whether 12-lead A-ECG recordings could accurately identify the occult phase of DCM in dogs. Short-duration (3-5 min) high-fidelity 12-lead ECG recordings were obtained from 31 privately-owned, clinically healthy Doberman Pinschers (5.4 +/- 1.7 years, 11/20 males/females). Dogs were divided into 2 groups: 1) 19 healthy dogs with normal echocardiographic M-mode measurements: left ventricular internal diameter in diastole (LVIDd . 47mm) and in systole (LVIDs . 38mm) and normal 24-hour ECG recordings (<50 ventricular premature complexes, VPCs); and 2) 12 dogs with occult DCM: 11/12 dogs had increased M-mode measurements (LVIDd . 49mm and/or LVIDs . 40mm) and 5/11 dogs had also >100 VPCs/24h; 1/12 dogs had only abnormal 24-hour ECG recordings (>100 VPCs/24h). ECG recordings were evaluated via custom software programs to calculate multiple parameters of high-frequency (HF) QRS ECG, heart rate variability, QT variability, waveform complexity and 3-D ECG. Student's t-tests determined 19 ECG parameters that were significantly different (P < 0.05) between groups. Principal component factor analysis identified a 5-factor model with 81.4% explained variance. QRS dipolar and non-dipolar voltages, Cornell voltage criteria and QRS waveform residuum were increased significantly (P < 0.05), whereas mean HF QRS amplitude was decreased significantly (P < 0.05) in dogs with occult DCM. For the 5 selected parameters the prediction of occult DCM was performed using a binary logistic regression model with Chi-square tested significance (P < 0.01). ROC analyses showed that the five selected ECG parameters could identify occult ECG with sensitivity 89% and specificity 83%. 
Results suggest that 12-lead A-ECG might improve the diagnostic value of short-duration ECG for earlier detection of canine DCM, as the five selected ECG parameters can identify occult DCM in Doberman Pinschers with reasonable accuracy. Future extensive clinical studies are needed to clarify whether 12-lead A-ECG could be useful as an additional screening test for canine DCM.

  5. Decision-making deficits in patients with chronic schizophrenia: Iowa Gambling Task and Prospect Valence Learning model.

    PubMed

    Kim, Myung-Sun; Kang, Bit-Na; Lim, Jae Young

    2016-01-01

    Decision-making is the process of forming preferences for possible options, selecting and executing actions, and evaluating the outcome. This study used the Iowa Gambling Task (IGT) and the Prospect Valence Learning (PVL) model to investigate deficits in risk-reward related decision-making in patients with chronic schizophrenia, and to identify decision-making processes that contribute to poor IGT performance in these patients. Thirty-nine patients with schizophrenia and 31 healthy controls participated. Decision-making was measured by total net score, block net scores, and the total number of cards selected from each deck of the IGT. PVL parameters were estimated with the Markov chain Monte Carlo sampling scheme in OpenBUGS and BRugs, its interface to R, and the estimated parameters were analyzed with the Mann-Whitney U-test. The schizophrenia group received significantly lower total net scores than the control group. In terms of block net scores, an interaction effect of group × block was observed. The block net scores of the schizophrenia group did not differ across the five blocks, whereas those of the control group increased as the blocks progressed. The schizophrenia group obtained significantly lower block net scores in the fourth and fifth blocks of the IGT and selected cards from deck D (advantageous) less frequently than the control group. Additionally, the schizophrenia group had significantly lower values on the utility-shape, loss-aversion, recency, and consistency parameters of the PVL model. These results indicate that patients with schizophrenia experience deficits in decision-making, possibly due to a failure to learn the expected value of each deck and to incorporate the outcomes of previous trials into expectancies about options in the present trial.
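The PVL machinery the abstract refers to can be sketched compactly. The following is a minimal illustration, not the authors' estimation code: it uses the common prospect-utility function (shape alpha, loss aversion lam), a decay-RL expectancy update (recency A), and a softmax choice rule whose sensitivity is derived from the consistency parameter c as theta = 3**c - 1. Fitting these parameters by MCMC, as the study did in OpenBUGS, is outside this sketch.

```python
import math

def utility(x, alpha, lam):
    """Prospect utility of a net payoff x: gains curved by the shape
    parameter alpha, losses additionally scaled by loss-aversion lam."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def decay_update(expectancies, chosen, payoff, alpha, lam, A):
    """Decay-RL update: every deck's expectancy decays by the recency
    parameter A, and the chosen deck additionally absorbs the utility
    of the payoff just received."""
    new = [A * e for e in expectancies]
    new[chosen] += utility(payoff, alpha, lam)
    return new

def choice_probabilities(expectancies, c):
    """Softmax rule; the consistency parameter c sets the
    sensitivity theta = 3**c - 1."""
    theta = 3.0 ** c - 1.0
    weights = [math.exp(theta * e) for e in expectancies]
    z = sum(weights)
    return [w / z for w in weights]
```

With four equal expectancies the softmax returns 0.25 per deck; a lower consistency c flattens the probabilities toward chance, which is one way the model expresses the patients' more erratic choices.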

  6. The prognostic value of functional and anatomical parameters for the selection of patients receiving yttrium-90 microspheres for the treatment of liver cancer

    NASA Astrophysics Data System (ADS)

    Mesoloras, Geraldine

    Yttrium-90 (90Y) microsphere therapy is being utilized as a treatment option for patients with primary and metastatic liver cancer due to its ability to target tumors within the liver. The success of this treatment is dependent on many factors, including the extent and type of disease and the nature of prior treatments received. Metabolic activity, as determined by PET imaging, may correlate with the number of viable cancer cells and reflect changes in viable cancer cell volume. However, contouring of PET images by hand is labor intensive and introduces an element of irreproducibility into the determination of functional target/tumor volume (FTV). A computer-assisted method to aid in the automatic contouring of FTV has the potential to substantially improve treatment individualization and outcome assessment. Commercial software to determine FTV in FDG-avid primary and metastatic liver tumors has been evaluated and optimized. Volumes determined using the automated technique were compared to those from manually drawn contours identified using the same cutoff in the standard uptake value (SUV). The reproducibility of FTV is improved through the introduction of an optimal threshold value determined from phantom experiments. Applying the optimal threshold value from the phantom experiments to patient scans produced volumes in good agreement with hand-drawn determinations of the FTV. It is concluded that computer-assisted contouring of the FTV for primary and metastatic liver tumors improves reproducibility and increases accuracy, especially when combined with the selection of an optimal SUV threshold determined from phantom experiments. A method to link the pre-treatment assessment of functional (PET based) and anatomical (CT based) parameters to post-treatment survival and time to progression was evaluated in 22 patients with colorectal cancer liver metastases treated using 90Y microspheres and chemotherapy. 
Cutoff values of the pre-treatment parameters that best predicted response were determined for FTV, anatomical tumor volume (ATV), total lesion glycolysis (TLG), and the tumor marker CEA. The best predictors of response were found to be pre-treatment FTV ≤153 cm3, ATV ≤163 cm3, TLG ≤144 g in the chemo-SIRT treated field, and CEA ≤11.6 ng/mL.
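The fixed-threshold contouring idea above can be illustrated in a few lines. This is a hypothetical sketch of SUV-threshold segmentation, not the commercial software evaluated in the study: voxels at or above a chosen fraction of the region's maximum SUV are kept, and their volumes are summed.

```python
def functional_tumor_volume(suv_values, voxel_volume_cm3, threshold_fraction):
    """Threshold-based FTV sketch: keep every voxel whose SUV is at or
    above threshold_fraction * max(SUV) and sum the kept volumes."""
    suv_max = max(suv_values)
    cutoff = threshold_fraction * suv_max
    n_kept = sum(1 for s in suv_values if s >= cutoff)
    return n_kept * voxel_volume_cm3
```

For example, with SUVs [1, 2, 5, 10, 9], 0.1 cm3 voxels and a 50% threshold, three voxels survive the cutoff of 5, giving an FTV of 0.3 cm3.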

  7. Comparison of soil moisture retrieval algorithms based on the synergy between SMAP and SMOS-IC

    NASA Astrophysics Data System (ADS)

    Ebrahimi-Khusfi, Mohsen; Alavipanah, Seyed Kazem; Hamzeh, Saeid; Amiraslani, Farshad; Neysani Samany, Najmeh; Wigneron, Jean-Pierre

    2018-05-01

    This study was carried out to evaluate possible improvements of the soil moisture (SM) retrievals from the SMAP observations, based on the synergy between SMAP and SMOS. We assessed the impacts of the vegetation and soil roughness parameters on SM retrievals from SMAP observations. To do so, the effects of three key input parameters, namely the vegetation optical depth (VOD), effective scattering albedo (ω) and soil roughness (HR), were assessed with the emphasis on the synergy with the VOD product derived from SMOS-IC, a new and simpler version of the SMOS algorithm, over two years of data (April 2015 to April 2017). First, a comprehensive comparison of seven SM retrieval algorithms was made to find the best one for SM retrievals from the SMAP observations. All results were evaluated against in situ measurements over 548 stations from the International Soil Moisture Network (ISMN) in terms of four statistical metrics: correlation coefficient (R), root mean square error (RMSE), bias and unbiased RMSE (UbRMSE). The comparison of seven SM retrieval algorithms showed that the dual channel algorithm based on the additional use of the SMOS-IC VOD product (the selected algorithm) led to the best SM retrievals over 378, 399, 330 and 271 stations (out of a total of 548 stations) in terms of R, RMSE, UbRMSE and both R & UbRMSE, respectively. Moreover, comparing the measured and retrieved SM values showed that this synergy approach led to an increase in median R value from 0.6 to 0.65 and a decrease in median UbRMSE from 0.09 m3/m3 to 0.06 m3/m3. Second, using the algorithm selected in the first step, the ω and HR parameters were calibrated over 218 relatively homogeneous ISMN stations. 72 combinations of values of ω and HR were used for the calibration over different land cover classes. In this calibration process, the optimal values of ω and HR were found for the different land cover classes. 
The obtained results indicated that the impact of the VOD parameter on SM retrievals is greater than the effects of HR and ω. Overall, including the VOD parameter in the SMAP SM retrieval algorithm proved worthwhile and showed the large potential benefit of the synergy between SMAP and SMOS.
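The four evaluation metrics used above are standard and easy to reproduce. A minimal sketch, assuming paired, gap-free retrieved and in situ series (the `max(..., 0.0)` guard only absorbs floating-point round-off in the ubRMSE identity):

```python
import math

def sm_metrics(retrieved, in_situ):
    """Correlation (R), RMSE, bias, and unbiased RMSE (ubRMSE),
    where ubRMSE**2 = RMSE**2 - bias**2."""
    n = len(retrieved)
    mr, mi = sum(retrieved) / n, sum(in_situ) / n
    cov = sum((r - mr) * (i - mi) for r, i in zip(retrieved, in_situ))
    var_r = sum((r - mr) ** 2 for r in retrieved)
    var_i = sum((i - mi) ** 2 for i in in_situ)
    R = cov / math.sqrt(var_r * var_i)
    bias = mr - mi
    rmse = math.sqrt(sum((r - i) ** 2 for r, i in zip(retrieved, in_situ)) / n)
    ubrmse = math.sqrt(max(rmse ** 2 - bias ** 2, 0.0))
    return R, rmse, bias, ubrmse
```

A retrieval that is a constant offset below the in situ series has perfect R, a nonzero bias equal to that offset, and an ubRMSE of zero, which is why the study reports R and ubRMSE together.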

  8. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the synthetic minority oversampling technique (SMOTE) effectively enhances the original SMOTE algorithm, compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (the hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests algorithms) can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the F-value, G-mean, AUC and OOB scores of the hybrid algorithms demonstrate that they surpass the performance of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
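For readers unfamiliar with SMOTE, the interpolation idea at its core is simple. The sketch below is plain SMOTE over tuples, not the paper's CURE-SMOTE (which first clusters the minority class with CURE to strip noise and outliers before interpolating):

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling sketch: each synthetic point is placed
    at a random position on the segment between a minority sample and
    one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Because every synthetic point is a convex combination of two minority samples, the oversampled set stays inside the minority class's convex hull, which is the sense in which SMOTE preserves the original data distribution.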

  9. Physico-Chemical and Bacterial Evaluation of Packaged Drinking Water Marketed in Delhi - Potential Public Health Implications

    PubMed Central

    Singla, Ashish; Kundu, Hansa; P., Basavaraj; Singh, Shilpi; Singh, Khushboo; Jain, Swati

    2014-01-01

    Introduction: Quality of drinking water is a powerful environmental determinant of health. The main objective of introducing bottled water in the society was its better safety, taste and convenience over tap water. The present study was conducted to assess the physicochemical and bacterial qualities of bottled water and sachet water available in various markets of Delhi. Materials and Methods: Sixteen water bottles and four water sachets were selected through stratified random sampling from various public places in Delhi and analyzed at the National Test House, Ghaziabad. Results were then compared with national (IS10500, IS14543) and international (WHO, FDA, USEPA) standards. Results: Bottled water showed better quality than sachet water. The mean value of copper (0.0746 mg/l) in bottles exceeded the standard values of IS10500 and IS14543 (0.05 mg/l), while the mean value of lead (0.008 mg/l) exceeded the FDA standard value (0.005 mg/l). When the results of sachets were compared with those standards, the mean values of selenium (0.1195 mg/l) and lead (0.862 mg/l) were found to exceed the values of both Indian and international standards. For the biological parameter, i.e., coliform count, the mean value for bottles was 0 (nil), whereas the mean value for sachets was 16.75, which showed the unhealthy nature of sachets. Conclusion: The parameters tested in the present study showed that various chemical and bacterial parameters in drinking water exceeded standard values, which could pose serious threats to consumers. Thus, these results suggest more stringent standardization of the bottled water market, with special attention to quality, identity and licensing by the concerned authorities, to safeguard the health of consumers. PMID:24783149

  10. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, of the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration. The runoff simulation results of the study area also proved that the sensitivity analysis was practicable for parameter adjustment, showed the adaptability of the model to hydrology simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
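The perturbation method itself reduces to a relative sensitivity index. A minimal sketch, where the model function and the ±10% perturbation size are placeholders rather than AnnAGNPS itself:

```python
def sensitivity_index(model, base_params, name, delta=0.1):
    """Perturbation-method sensitivity sketch: perturb one parameter by
    +/- delta (as a fraction of its base value), rerun the model, and
    return the relative sensitivity S = (dY/Y0) / (dX/X0)."""
    y0 = model(base_params)
    x0 = base_params[name]
    up = dict(base_params, **{name: x0 * (1 + delta)})
    dn = dict(base_params, **{name: x0 * (1 - delta)})
    dy = model(up) - model(dn)   # central difference in the output
    dx = 2 * delta * x0          # total perturbation in the input
    return (dy / y0) / (dx / x0)
```

For a toy model Y = CN**2 the index comes out at 2, matching the power-law exponent, which is the usual sanity check for a relative sensitivity measure.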

  11. Vapor hydrogen peroxide as alternative to dry heat microbial reduction

    NASA Astrophysics Data System (ADS)

    Chung, S.; Kern, R.; Koukol, R.; Barengoltz, J.; Cash, H.

    2008-09-01

    The Jet Propulsion Laboratory (JPL), in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal was to include this technique, with an appropriate specification, in NASA Procedural Requirements 8020.12 as a low-temperature complementary technique to the dry heat sterilization process. The VHP process is widely used by the medical industry to sterilize surgical instruments and biomedical devices, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal of this study was to determine the minimum VHP process conditions for planetary protection acceptable microbial reduction levels. Experiments were conducted by the STERIS Corporation, under contract to JPL, to evaluate the effectiveness of vapor hydrogen peroxide for the inactivation of the standard spore challenge, Geobacillus stearothermophilus. VHP process parameters were determined that provide significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters of interest (hydrogen peroxide concentration, number of injection cycles, and exposure duration), the investigation also considered the possible effect on lethality of environmental parameters: temperature, absolute humidity, and material substrate. This study delineated a range of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D-value may be imposed, a process humidity range for which the worst-case D-value may be imposed, and the dependence on selected spacecraft material substrates. The derivation of D-values from the lethality data permitted conservative planetary protection recommendations.
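The D-value mentioned above is the exposure time required for a one-log10 (90%) reduction in viable spores. A minimal sketch of deriving a D-value from a single initial/survivor count pair, assuming log-linear inactivation (the study's actual derivation pooled lethality data across process conditions):

```python
import math

def d_value(exposure_minutes, n_initial, n_survivors):
    """D-value sketch: exposure time divided by the number of log10
    reductions achieved, assuming log-linear spore inactivation."""
    logs_killed = math.log10(n_initial) - math.log10(n_survivors)
    return exposure_minutes / logs_killed
```

For instance, reducing a 10^6 spore challenge to 10^3 survivors in 30 minutes corresponds to a D-value of 10 minutes per log.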

  12. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters used in settlement calculations for fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As the result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are less satisfactory than those for the compression index.

  13. The structural, morphological and thermal properties of grafted pH-sensitive interpenetrating highly porous polymeric composites of sodium alginate/acrylic acid copolymers for controlled delivery of diclofenac potassium

    PubMed Central

    Jalil, Aamir; Khan, Samiullah; Naeem, Fahad; Haider, Malik Suleman; Sarwar, Shoaib; Riaz, Amna; Ranjha, Nazar Muhammad

    2017-01-01

    Abstract In the present investigation, new formulations of sodium alginate/acrylic acid hydrogels with a highly porous structure were synthesized by the free radical polymerization technique for the controlled delivery of an analgesic agent to the colon. Several structural parameters, such as the molecular weight between crosslinks (Mc), crosslink density (Mr), volume interaction parameter (v2,s), Flory-Huggins water interaction parameter and diffusion coefficient (Q), were calculated. Water uptake studies were conducted in different USP phosphate buffer solutions. All samples showed higher swelling ratios with increasing pH because of the ionization of carboxylic groups at higher pH values. The porosity and gel fraction of all the samples were calculated. Selected samples were loaded with the model drug (diclofenac potassium). The amounts of drug loaded and released were determined, and it was found that all the samples showed higher drug release at higher pH values. The release of diclofenac potassium was found to depend on the sodium alginate/acrylic acid ratio, EGDMA and the pH of the medium. Experimental data were fitted to various model equations and the corresponding parameters were calculated to study the release mechanism. The structural, morphological and thermal properties of the interpenetrating hydrogels were studied by FTIR, XRD, DSC, and SEM. PMID:29491802

  14. The effect of epoch length on time and frequency domain parameters of electromyographic and mechanomyographic signals.

    PubMed

    Keller, Joshua L; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Smith, Doug B; Smith, Cory M; Hill, Ethan C; Schmidt, Richard J; Johnson, Glen O; Zuniga, Jorge M

    2018-06-01

    The selection of epoch length affects the time and frequency resolution of electromyographic (EMG) and mechanomyographic (MMG) signals, as well as decisions regarding the signal processing techniques to use for determining the power density spectrum. No previous studies, however, have examined the effects of epoch length on parameters of the MMG signal. The purpose of this study was to examine the differences between epoch lengths for EMG amplitude, EMG mean power frequency (MPF), MMG amplitude, and MMG MPF from the vastus lateralis (VL) and vastus medialis (VM) muscles during maximal voluntary isometric contraction (MVIC) muscle actions, as well as at each 10% of the time to exhaustion (TTE) during a continuous isometric muscle action of the leg extensors at 50% of MVIC. During the MVIC trial, there were no significant (p > 0.05) differences between epoch lengths (0.25, 0.50, 1.00, and 2.00 s) for the mean absolute values of any of the EMG or MMG parameters. During the submaximal, sustained muscle action, however, absolute MMG amplitude and MMG MPF were affected by the epoch length. All epoch-related differences were eliminated by normalizing the absolute values to MVIC. These findings supported normalizing EMG and MMG parameter values to MVIC and utilizing epoch lengths ranging from 0.25 to 2.00 s.
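The epoching and the two signal parameters involved (amplitude as RMS, and mean power frequency) can be sketched directly. This illustration uses a naive O(n²) DFT for clarity; it is not the study's actual processing chain:

```python
import math

def epoch_rms(signal, fs, epoch_s):
    """Split a signal into non-overlapping epochs of epoch_s seconds
    and return the RMS amplitude of each epoch."""
    n = int(fs * epoch_s)
    return [math.sqrt(sum(x * x for x in signal[i:i + n]) / n)
            for i in range(0, len(signal) - n + 1, n)]

def mean_power_frequency(epoch, fs):
    """Mean power frequency of one epoch via a naive DFT:
    MPF = sum(f * P(f)) / sum(P(f)) over positive frequencies."""
    n = len(epoch)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(epoch))
        im = sum(-x * math.sin(2 * math.pi * k * t / n)
                 for t, x in enumerate(epoch))
        power = re * re + im * im
        num += (k * fs / n) * power
        den += power
    return num / den
```

The trade-off the abstract describes is visible here: the frequency resolution of the spectrum is fs/n, so shorter epochs give coarser frequency bins while longer epochs average over more of the signal in time.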

  15. Relationship between genetic parameters in maize (Zea mays) with seedling growth parameters under 40-100% soil moisture conditions.

    PubMed

    Muhammad, R W; Qayyum, A

    2013-10-18

    We estimated the association of genetic parameters with production characters in 64 maize (Zea mays) genotypes in a greenhouse in soil with 40-100% moisture levels (percent of soil moisture capacity). To identify the major parameters that account for variation among the genotypes, we used single linkage cluster analysis and principal component analysis. Ten plant characters were measured. The first two, four, three, and three components, respectively, with eigenvalues > 1, contributed 75.05, 80.11, 68.67, and 75.87% of the variability among the genotypes under the different moisture levels, i.e., 40, 60, 80, and 100%. The other principal components (3-10, 5-10, and 4-10) had eigenvalues less than 1. The highest estimates of heritability were found for root fresh weight, root volume (0.99), and shoot fresh weight (0.995) at 40% soil moisture. Values of genetic advance ranged from 23.4024 for SR at 40% soil moisture to 0.2538 for shoot dry weight at 60% soil moisture. The high magnitude of broad-sense heritability provides evidence that these plant characters are under the control of additive genetic effects. This indicates that selection should lead to fast genetic improvement of the material. The superior agronomic types that we identified may be exploited for their genetic potential to improve the yield potential of the maize crop.
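The two genetic parameters reported (broad-sense heritability and genetic advance) follow textbook formulas. A minimal sketch, assuming a 5% selection intensity (k = 2.06), which may differ from the value used in the study:

```python
import math

def broad_sense_heritability(genotypic_var, phenotypic_var):
    """Broad-sense heritability H2 = Vg / Vp."""
    return genotypic_var / phenotypic_var

def genetic_advance(genotypic_var, phenotypic_var, k=2.06):
    """Expected genetic advance under selection, GA = k * H2 * sigma_p,
    with k = 2.06 corresponding to 5% selection intensity."""
    h2 = broad_sense_heritability(genotypic_var, phenotypic_var)
    return k * h2 * math.sqrt(phenotypic_var)
```

High heritability boosts genetic advance directly, which is why the characters with H2 near 0.99 above are the ones expected to respond fastest to selection.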

  16. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing-data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve good Hs reconstruction in the two locations studied (Caribbean Sea and West Atlantic).

  17. Biochemical parameters as monitoring markers of the inflammatory reaction by patients with chronic obstructive pulmonary disease (COPD)

    PubMed

    Lenártová, Petra; Kopčeková, Jana; Gažarová, Martina; Mrázová, Jana; Wyka, Joanna

    Chronic obstructive pulmonary disease (COPD) is an airway inflammatory disease caused by inhalation of toxic particles, mainly from cigarette smoking, and is now accepted as a disease with systemic characteristics. The aim of this work was to investigate and compare selected biochemical parameters in patients with and without COPD. The observation group consisted of clinically stable patients with COPD (n = 60). The control group consisted of healthy persons from the general population, without COPD, who were divided into two subgroups: smokers (n = 30) and non-smokers (n = 30). Laboratory parameters were measured with an automated clinical chemistry analyzer (LISA 200). Albumin showed an average value of 39.55 g.l-1 in the patient group, 38.89 g.l-1 in smokers and 44.65 g.l-1 in non-smokers. The average value of prealbumin was 0.28 ± 0.28 g.l-1 in the patient group and 0.30 ± 0.04 g.l-1 in the smokers group. The average value of orosomucoid in patients was 1.11 ± 0.90 mg.ml-1; in the group of smokers, the mean value of orosomucoid was 0.60 ± 0.13 mg.ml-1. The level of C-reactive protein (CRP) in the patient group reached an average value of 15.31 ± 22.04 mg.l-1; in the group of smokers it was 5.18 ± 4.58 mg.l-1. The prognostic inflammatory and nutritional index (PINI) showed a mean value of 4.65 ± 10.77 in the patient group and 0.026 ± 0.025 in smokers. The results of this work show that the values of the PINI index in COPD patients are significantly higher than in smokers (P < 0.001). This, together with the other monitored parameters, indicates inflammation as well as a catabolic process occurring in patients with COPD.
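The PINI combines the two inflammatory proteins (CRP, orosomucoid) and the two nutritional proteins (albumin, prealbumin) reported above. The sketch below uses the standard Ingenbleek definition; the unit conventions shown are the usual ones and are an assumption here, since the abstract does not state how its PINI values were computed:

```python
def pini(crp_mg_l, orosomucoid_mg_l, albumin_g_l, prealbumin_mg_l):
    """Prognostic inflammatory and nutritional index (Ingenbleek form):
    PINI = (CRP * orosomucoid) / (albumin * prealbumin).
    Unit conventions here (mg/l for CRP, orosomucoid and prealbumin;
    g/l for albumin) are assumed, not taken from the study."""
    return (crp_mg_l * orosomucoid_mg_l) / (albumin_g_l * prealbumin_mg_l)
```

The index rises when the inflammatory numerator grows or the nutritional denominator shrinks, which matches the direction of the patient-versus-smoker difference reported above.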

  18. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  19. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. 
Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
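The core loop of a genetic-algorithm sensor selection can be sketched generically. This toy version is not Glenn's implementation: candidate suites are bitmasks, the supplied merit function stands in for the fault-detection merit algorithm, and the hypothetical toy_merit simply rewards covering three target sensors while penalizing suite size.

```python
import random

def ga_select(n_sensors, merit, pop_size=20, generations=40, seed=1):
    """Toy GA sketch: iteratively improve a population of sensor-suite
    bitmasks under an externally supplied merit function."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_sensors)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=merit, reverse=True)
        parents = scored[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_sensors)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_sensors)        # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=merit)

def toy_merit(suite):
    """Hypothetical merit: reward covering sensors 0, 3 and 5,
    penalize the total suite size."""
    coverage = suite[0] + suite[3] + suite[5]
    return 10 * coverage - sum(suite)
```

In the real strategy, the merit algorithm would run the inverse engine model against simulated fault scenarios instead of this toy coverage score; the GA scaffolding around it is the same shape.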

  20. Application of a statistical emulator to fire emission modeling

    Treesearch

    Marwan Katurji; Jovanka Nikolic; Shiyuan Zhong; Scott Pratt; Lejiang Yu; Warren E. Heilman

    2015-01-01

    We have demonstrated the use of an advanced Gaussian-Process (GP) emulator to estimate wildland fire emissions over a wide range of fuel and atmospheric conditions. The Fire Emission Production Simulator, or FEPS, is used to produce an initial set of emissions data that correspond to some selected values in the domain of the input fuel and atmospheric parameters for...

  1. User's Manual for Aerofcn: a FORTRAN Program to Compute Aerodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Conley, Joseph L.

    1992-01-01

    The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
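The standard-day atmosphere calculations underlying several of these parameters are straightforward below the tropopause. A sketch, not AeroFcn's code: the constants are the usual ISA values, which the program allows the user to change.

```python
import math

# ISA constants (assumed defaults; AeroFcn lets the user modify these)
T0 = 288.15     # sea-level temperature, K
P0 = 101325.0   # sea-level pressure, Pa
L = 0.0065      # tropospheric temperature lapse rate, K/m
G = 9.80665     # gravitational acceleration, m/s^2
R = 287.053     # specific gas constant for air, J/(kg K)
GAMMA = 1.4     # ratio of specific heats for air

def isa_troposphere(h_m):
    """Static temperature, static pressure and speed of sound at a
    geopotential altitude h_m (valid below the 11 km tropopause)."""
    t = T0 - L * h_m
    p = P0 * (t / T0) ** (G / (L * R))
    a = math.sqrt(GAMMA * R * t)
    return t, p, a
```

Given any two parameters that fix a flight condition (say, geopotential altitude and Mach number), relations like these let the remaining quantities, e.g. true velocity as Mach times the local speed of sound, be filled in, which is the workflow AeroFcn automates.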

  2. Assessment of color parameters of composite resin shade guides using digital imaging versus colorimeter.

    PubMed

    Yamanel, Kivanc; Caglar, Alper; Özcan, Mutlu; Gulsah, Kamran; Bagis, Bora

    2010-12-01

    This study evaluated the color parameters of resin composite shade guides determined using a colorimeter and a digital imaging method. Four composite shade guides were evaluated, namely two nanohybrid (Grandio [Voco GmbH, Cuxhaven, Germany]; Premise [KerrHawe SA, Bioggio, Switzerland]) and two hybrid (Charisma [Heraeus Kulzer, GmbH & Co. KG, Hanau, Germany]; Filtek Z250 [3M ESPE, Seefeld, Germany]). Ten shade tabs (A1, A2, A3, A3.5, A4, B1, B2, B3, C2, C3) were selected from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc., Kyoto, Japan). The data were analyzed using two-way analysis of variance and the Bonferroni post hoc test. Overall, the mean ΔE values from different composite pairs demonstrated statistically significant differences when evaluated with the colorimeter (p < 0.001), but there was no significant difference with the digital imaging method (p = 0.099). Across both measurement methods, 80% of the shade guide pairs from different composites (97/120) showed color differences greater than 3.7 (moderately perceptible mismatch), and 49% (59/120) had an obvious mismatch (ΔE > 6.8). For all shade pairs evaluated, the most significant shade mismatches in mean ΔE values were obtained between Grandio-Filtek Z250 (p = 0.021) and Filtek Z250-Premise (p = 0.01), whereas the best shade match was between Grandio-Charisma (p = 0.255), regardless of the measurement method. The best color match (mean ΔE values) was recorded for the A1, A2, and A3 shade pairs in both methods. When a proper object-camera distance, digital camera settings, and suitable illumination conditions are provided, the digital imaging method can be used in the assessment of color parameters. Interchanging shade guides from different composite systems should be avoided during color selection.
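The ΔE values and the two thresholds used above come from the CIE76 color-difference formula over CIE Lab coordinates. A minimal sketch (the Lab triplets in the test are made up, not measured shade-tab values):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference between two CIE Lab triplets:
    Euclidean distance in (L*, a*, b*) space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def match_category(de):
    """Thresholds used in the study: 3.7 marks a moderately
    perceptible mismatch and 6.8 an obvious mismatch."""
    if de > 6.8:
        return "obvious mismatch"
    if de > 3.7:
        return "perceptible mismatch"
    return "acceptable match"
```

Two tabs differing by 3 L* units and 4 a* units give ΔE = 5, i.e. a perceptible but not obvious mismatch under the study's criteria.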

  3. Fluorescence in situ detection of human cutaneous melanoma: study of diagnostic parameters of the method.

    PubMed

    Chwirot, B W; Chwirot, S; Sypniewska, N; Michniewicz, Z; Redzinski, J; Kurzawski, G; Ruka, W

    2001-12-01

    A multicenter study of the diagnostic parameters was conducted by three groups in Poland to determine whether in situ fluorescence detection of human cutaneous melanoma, based on digital imaging of spectrally resolved autofluorescence, can be used as a tool for preliminary selection of patients at increased risk of the disease. Fluorescence examinations were performed for 7228 pigmented lesions in 4079 subjects. Histopathologic examinations showed 56 cases of melanoma. The sensitivity of fluorescence detection of melanoma was 82.7%, in agreement with the 82.5% found in earlier work. Using as a reference only the results of histopathologic examinations obtained for 568 cases, we found a specificity of 59.9% and a positive predictive value of 17.5% (melanomas versus all pigmented lesions) or 24% (melanomas versus common and dysplastic naevi). The specificity and positive predictive value found in this work are significantly lower than reported earlier but still comparable with those reported for typical screening programs. In conclusion, the fluorescence method of in situ detection of melanoma can be used in screening large populations to select patients who should be examined by specialists.
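
    The diagnostic parameters reported here (sensitivity, specificity, positive predictive value) follow directly from confusion-matrix counts. A minimal sketch; the counts in the example are illustrative, not the study's raw data:

```python
def diagnostic_parameters(tp, fp, tn, fn):
    """Standard screening metrics from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of melanomas detected
        "specificity": tn / (tn + fp),   # fraction of benign lesions cleared
        "ppv": tp / (tp + fp),           # fraction of positives that are melanoma
    }

# Illustrative counts only.
m = diagnostic_parameters(tp=45, fp=210, tn=300, fn=13)
print({k: round(v, 3) for k, v in m.items()})
```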

  4. Genetic parameters for test day somatic cell score in Brazilian Holstein cattle.

    PubMed

    Costa, C N; Santos, G G; Cobuci, J A; Thompson, G; Carvalheira, J G V

    2015-12-29

    Selection for lower somatic cell count has been included in the breeding objectives of several countries in order to increase resistance to mastitis. Genetic parameters of somatic cell scores (SCS) were estimated from first-lactation test day (TD) records of Brazilian Holstein cows using random-regression models with Legendre polynomials (LP) of order 3 to 5. Data consisted of 87,711 TD records produced by 10,084 cows, sired by 619 bulls, with calvings from 1993 to 2007. Heritability estimates varied from 0.06 to 0.14; they decreased from the beginning of lactation up to 60 days in milk (DIM) and increased thereafter to the end of lactation. Genetic correlations between adjacent DIM were very high (>0.83) but decreased to negative values, obtained with LP of order four, between DIM at the extremes of lactation. Despite the favorable trend, genetic changes in SCS were not significant and did not differ among LP. There was little benefit in fitting an LP of order >3 to model animal genetic and permanent environment effects for SCS. The estimates of variance components found in this study may be used for breeding value estimation for SCS and selection for mastitis resistance in Holstein cattle in Brazil.
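
    Random-regression test-day models of this kind evaluate Legendre polynomials at days in milk (DIM) rescaled to the polynomials' [-1, 1] support. A minimal sketch; the 5-305 DIM window is an assumed typical test-day range, not taken from the study:

```python
def legendre_basis(t, order):
    """First `order` Legendre polynomials evaluated at t in [-1, 1]."""
    polys = [1.0, t]
    for n in range(1, order - 1):
        # Bonnet's recursion: (n+1) P_{n+1} = (2n+1) t P_n - n P_{n-1}
        polys.append(((2 * n + 1) * t * polys[n] - n * polys[n - 1]) / (n + 1))
    return polys[:order]

def scale_dim(dim, dim_min=5, dim_max=305):
    """Map days in milk onto [-1, 1] (assumed 5-305 d test-day window)."""
    return -1 + 2 * (dim - dim_min) / (dim_max - dim_min)

# Covariables for a test day at mid-lactation, order-3 model:
print(legendre_basis(scale_dim(155), 3))  # [1.0, 0.0, -0.5]
```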

  5. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field because of tungsten's intrinsic properties. The key factors in SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and then experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding opens new potential applications of refractory metals in additive manufacturing. PMID:29707073

  6. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field because of tungsten's intrinsic properties. The key factors in SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and then experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding opens new potential applications of refractory metals in additive manufacturing.

  7. A variant of sparse partial least squares for variable selection and data exploration.

    PubMed

    Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina

    2014-01-01

    When data are sparse and/or predictors multicollinear, the current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits an SPLS model for all tuning parameter values across a set grid. The percentage of times a given predictor is chosen is recorded, along with its average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies of the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of times a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight into the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful for data exploration and hypothesis generation with a large number of potential predictors.
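
    The bookkeeping behind "all-possible" SPLS, tallying how often each predictor is selected across a tuning-parameter grid and averaging its non-zero estimates, can be sketched as follows. This is a toy stand-in: it soft-thresholds mean cross-products instead of fitting an actual SPLS model, purely to show the selection-percentage mechanics:

```python
def soft_threshold(v, lam):
    """Stand-in for SPLS sparsity: shrink an association toward zero."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def all_possible_selection(x_cols, y, grid):
    """For each sparsity value in `grid`, note which predictors survive;
    return per-predictor selection percentage and average non-zero estimate."""
    n = len(y)
    chosen = [0] * len(x_cols)
    sums = [0.0] * len(x_cols)
    for lam in grid:
        for j, col in enumerate(x_cols):
            r = sum(xi * yi for xi, yi in zip(col, y)) / n  # crude association
            est = soft_threshold(r, lam)
            if est != 0.0:
                chosen[j] += 1
                sums[j] += est
    pct = [c / len(grid) for c in chosen]
    avg = [s / c if c else 0.0 for s, c in zip(sums, chosen)]
    return pct, avg

x1 = [1.0, -1.0, 1.0, -1.0]   # strongly associated with y below
x2 = [1.0, 1.0, -1.0, -1.0]   # unassociated with y below
pct, avg = all_possible_selection([x1, x2], x1, [0.2, 0.5, 0.9])
print(pct)  # [1.0, 0.0]: the associated predictor is always chosen
```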

  8. Atmospheric environment for Space Shuttle (STS-3) launch

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.; Brown, S. C.; Batts, G. W.

    1982-01-01

    Selected atmospheric conditions observed near the Space Shuttle STS-3 launch time on March 22, 1982, at Kennedy Space Center, Florida are summarized. Values of ambient pressure, temperature, moisture, ground winds, visual (cloud) observations, and winds aloft are included. The sequence of prelaunch Jimsphere-measured vertical wind profiles and the wind and thermodynamic parameters measured at the surface and aloft in the SRB descent/impact ocean area are presented. Final meteorological tapes, which consist of wind and thermodynamic parameters versus altitude, were constructed for STS-3 vehicle ascent and SRB descent.

  9. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  10. Assessment of predictive models for chlorophyll-a concentration of a tropical lake

    PubMed Central

    2011-01-01

    Background This study assesses four predictive ecological models, Fuzzy Logic (FL), Recurrent Artificial Neural Network (RANN), Hybrid Evolutionary Algorithm (HEA), and multiple linear regression (MLR), to forecast chlorophyll-a concentration using limnological data from 2001 through 2004 for the unstratified, shallow, oligotrophic to mesotrophic tropical Putrajaya Lake (Malaysia). Performance of the models is assessed using the Root Mean Square Error (RMSE), the correlation coefficient (r), and the Area under the Receiver Operating Characteristic (ROC) curve (AUC). Chlorophyll-a has been used to estimate algal biomass in aquatic ecosystems as it is common to most algae, and algal biomass indicates the trophic status of a water body. Chlorophyll-a, therefore, is an effective indicator for monitoring eutrophication, a common problem of lakes and reservoirs all over the world. Assessment of these predictive models is a step toward developing a reliable algorithm to estimate chlorophyll-a concentration for eutrophication management of tropical lakes. Results The same data set was used for model development; it was divided into training and testing sets to avoid bias in the results. The FL and RANN models were developed using parameters selected through sensitivity analysis: water temperature, pH, dissolved oxygen, ammonia nitrogen, nitrate nitrogen and Secchi depth. Dissolved oxygen, selected through a stepwise procedure, was used to develop the MLR model. The HEA model used parameters selected by a genetic algorithm (GA): pH, Secchi depth, dissolved oxygen and nitrate nitrogen. RMSE, r, and AUC values were (4.60, 0.5, 0.76) for the MLR model, (4.49, 0.6, 0.84) for FL, (4.28, 0.7, 0.79) for RANN, and (4.27, 0.7, 0.82) for HEA. Performance inconsistencies between the four models across these criteria result from the way performance is measured: RMSE is based on the level of prediction error, whereas AUC is based on a binary classification task. Conclusions Overall, HEA produced the best performance in terms of RMSE, r, and AUC values, followed by FL, RANN, and MLR. PMID:22372859
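
    The three performance criteria used in this comparison are simple to compute from scratch; a minimal sketch (AUC via the Mann-Whitney statistic):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```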

  11. Evidence for a role of 5-HT2C receptors in the motor aspects of performance, but not the efficacy of food reinforcers, in a progressive ratio schedule.

    PubMed

    Bezzina, G; Body, S; Cheung, T H C; Hampson, C L; Bradshaw, C M; Glennon, J C; Szabadi, E

    2015-02-01

    5-Hydroxytryptamine2C (5-HT2C) receptor agonists reduce the breakpoint in progressive ratio schedules of reinforcement, an effect that has been attributed to a decrease in the efficacy of positive reinforcers. However, a reduction of the breakpoint may also reflect motor impairment. Mathematical models can help to differentiate between these processes. The effects of the 5-HT2C receptor agonist Ro-600175 ((αS)-6-chloro-5-fluoro-α-methyl-1H-indole-1-ethanamine) and the non-selective 5-HT receptor agonist 1-(m-chlorophenyl)piperazine (mCPP) on rats' performance on a progressive ratio schedule maintained by food pellet reinforcers were assessed using a model derived from Killeen's general theory of schedule-controlled behaviour, 'mathematical principles of reinforcement' (Behav Brain Sci 17:105-172, 1994). Rats were trained under the progressive ratio schedule, and running and overall response rates in successive ratios were analysed using the model. The effects of the agonists on estimates of the model's parameters, and the sensitivity of these effects to selective antagonists, were examined. Ro-600175 and mCPP reduced the breakpoint. Neither agonist significantly affected a (the parameter expressing incentive value), but both agonists increased δ (the parameter expressing minimum response time). The effects of both agonists could be attenuated by the selective 5-HT2C receptor antagonist SB-242084 (6-chloro-5-methyl-N-{6-[(2-methylpyridin-3-yl)oxy]pyridin-3-yl}indoline-1-carboxamide). The effect of mCPP was not altered by isamoltane, a selective 5-HT1B receptor antagonist, or MDL-100907 ((±)2,3-dimethoxyphenyl-1-(2-(4-piperidine)methanol)), a selective 5-HT2A receptor antagonist. The results are consistent with the hypothesis that the effect of the 5-HT2C receptor agonists on progressive ratio schedule performance is mediated by an impairment of motor capacity rather than by a reduction of the incentive value of the food reinforcer.

  12. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, because it is difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected using (1) a simple inverse modelling technique, comparing simulated and observed LAI, and (2) a more complex method consisting of integrating observed LAI into ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The MaxAWC estimates from both methods are evaluated using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations (p value < 0.01) between Bag and GY are found for up to 36% and 53% of the administrative units for the inverse modelling and LDAS tuning methods, respectively. The LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
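
    The inverse-modelling option (1) is essentially a sweep over candidate MaxAWC values, keeping the one that minimizes the misfit between simulated and observed LAI. A toy sketch with a hypothetical, much-simplified LAI response in place of the ISBA simulation:

```python
import math

def toy_lai_model(max_awc, years):
    """Hypothetical stand-in for ISBA: annual maximum LAI rises with
    available water and saturates (not the real model physics)."""
    return [3.0 * (1 - math.exp(-max_awc / 100.0)) for _ in years]

def calibrate_max_awc(observed_lai, years, candidates):
    """Pick the candidate MaxAWC whose simulated LAI best matches observations."""
    def misfit(awc):
        sim = toy_lai_model(awc, years)
        return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed_lai))
                         / len(years))
    return min(candidates, key=misfit)

years = range(1999, 2014)                      # the study's 15-year window
obs = toy_lai_model(150.0, years)              # pretend the truth is 150 mm
print(calibrate_max_awc(obs, years, [50, 100, 150, 200]))  # 150
```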

  13. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.

  14. Bayesian model for fate and transport of polychlorinated biphenyl in upper Hudson River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, L.J.; Reckhow, K.H.; Wolpert, R.L.

    1996-05-01

    Modelers of contaminant fate and transport in surface waters typically rely on literature values when selecting parameter values for mechanistic models. While the expert judgment with which these selections are made is valuable, the information contained in contaminant concentration measurements should not be ignored. In this full-scale Bayesian analysis of polychlorinated biphenyl (PCB) contamination in the upper Hudson River, these two sources of information are combined using Bayes' theorem. A simulation model for the fate and transport of the PCBs in the upper Hudson River forms the basis of the likelihood function, while the prior density is developed from literature values. The method provides estimates for the anaerobic biodegradation half-life, aerobic biodegradation plus volatilization half-life, contaminated sediment depth, and resuspension velocity of 4,400 d, 3.2 d, 0.32 m, and 0.02 m/yr, respectively. These differ significantly from values obtained with more traditional methods, and are shown to produce better predictions than those methods in a cross-validation study.
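
    The core of the approach, multiplying a literature-based prior by a measurement likelihood, can be sketched on a parameter grid. The Gaussian forms and all numbers below are illustrative only; the study evaluated its likelihood through the fate-and-transport simulation model:

```python
import math

def gauss(x, mu, sigma):
    """Normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def grid_posterior(grid, prior_mu, prior_sd, data, meas_sd, predict):
    """posterior(theta) proportional to prior(theta) * prod_i N(d_i | predict(theta), meas_sd)."""
    weights = []
    for theta in grid:
        w = gauss(theta, prior_mu, prior_sd)        # prior from the literature
        for d in data:
            w *= gauss(d, predict(theta), meas_sd)  # likelihood of each measurement
        weights.append(w)
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical example: a resuspension velocity (m/yr) with two noisy readings.
grid = [0.01, 0.02, 0.03, 0.04]
post = grid_posterior(grid, prior_mu=0.03, prior_sd=0.02,
                      data=[0.019, 0.021], meas_sd=0.005,
                      predict=lambda theta: theta)
print(grid[max(range(len(grid)), key=lambda i: post[i])])  # 0.02, the posterior mode
```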

  15. Trends in hydrological extremes in the Senegal and the Niger Rivers

    NASA Astrophysics Data System (ADS)

    Wilcox, C.; Bodian, A.; Vischel, T.; Panthou, G.; Quantin, G.

    2017-12-01

    In recent years, West Africa has witnessed several floods of unprecedented magnitude. Although the evolution of hydrological extremes has been evaluated in the region to some extent, results lack regional coverage, significance levels, uncertainty estimates, model selection criteria, or a combination of the above. In this study, Generalized Extreme Value (GEV) distributions with and without various non-stationary temporal covariates are applied to annual maxima of daily discharge (AMAX) data sets in the Sudano-Guinean part of the Senegal River basin and in the Sahelian part of the Niger River basin. The data range from the 1950s to the 2010s. The two best-fit models most often selected (at the alpha = 0.05 significance level) were 1) a double-linear model for the central tendency parameter (μ) with stationary dispersion (σ) and 2) a double-linear model for both parameters. Change points are relatively consistent for the Senegal basin, with stations switching from a decreasing to an increasing streamflow trend in the early 1980s. In the Niger basin the trend in μ was generally positive, with an increase in slope after the change point, but the change point location was less consistent. The study clearly demonstrates significant trends in extreme discharge values in West Africa over the past six decades. Moreover, it proposes a clear methodology for comparing GEV models and selecting the best one for use. The return levels generated from the chosen models can be applied to river basin management and hydraulic works sizing. The results provide a first evaluation of non-stationarity in extreme hydrological values in West Africa that is accompanied by significance levels, uncertainties, and non-stationary return level estimates.
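
    Selecting between a stationary model and one with a linear trend in the location parameter typically comes down to a penalized-likelihood score such as BIC = k ln n - 2 ln L. A sketch of that comparison using a Gaussian likelihood as a stand-in for the GEV (the selection mechanics are the same; the data series is invented):

```python
import math

def gauss_loglik(resid, sigma):
    """Gaussian log-likelihood of residuals with standard deviation sigma."""
    n = len(resid)
    return -0.5 * n * math.log(2 * math.pi * sigma ** 2) \
        - sum(r * r for r in resid) / (2 * sigma ** 2)

def fit_and_bic(t, y, trend):
    """Fit mu constant, or mu = a + b*t, by least squares and return BIC."""
    n = len(y)
    if trend:
        tbar, ybar = sum(t) / n, sum(y) / n
        b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
        a = ybar - b * tbar
        resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
        k = 3  # a, b, sigma
    else:
        mu = sum(y) / n
        resid = [yi - mu for yi in y]
        k = 2  # mu, sigma
    sigma = math.sqrt(sum(r * r for r in resid) / n) or 1e-9
    return k * math.log(n) - 2 * gauss_loglik(resid, sigma)

t = list(range(6))
y = [1.0, 2.1, 2.9, 4.2, 5.0, 5.8]             # invented AMAX-like series
print(fit_and_bic(t, y, True) < fit_and_bic(t, y, False))  # True: trend model wins
```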

  16. A new method of building footprints detection using airborne laser scanning data and multispectral image

    NASA Astrophysics Data System (ADS)

    Luo, Yiping; Jiang, Ting; Gao, Shengli; Wang, Xin

    2010-10-01

    This paper presents a new approach for detecting building footprints from a combination of a registered aerial image with multispectral bands and airborne laser scanning data, obtained synchronously by a Leica Geosystems ALS40 and an Applanix DACS-301 on the same platform. A two-step method for building detection is presented, consisting of selecting 'building' candidate points and then classifying the candidate points. A digital surface model (DSM) derived from last-pulse laser scanning data was first filtered, and the laser points were classified into the classes 'ground' and 'building or tree' based on a mathematical morphological filter. Then, the 'ground' points were resampled into a digital elevation model (DEM), and a normalized DSM (nDSM) was generated from the DEM and DSM. The candidate points were selected from the 'building or tree' points by height value and area threshold in the nDSM. The candidate points were further classified into building points and tree points using the support vector machine (SVM) classification method. Two classification tests were carried out, using features only from laser scanning data and associated features from the two input data sources. The features included height, height finite difference, RGB band values, and so on. The RGB value of points was acquired by matching the laser scanning data and the image using the collinearity equations. The features of training points were presented as input data for the SVM classification method, and cross validation was used to select the best classification parameters. The determinant function could be constructed from the classification parameters, and the class of each candidate point was determined by the determinant function. The results showed that the associated features from the two input data sources were superior to features only from laser scanning data. An accuracy of more than 90% was achieved for buildings with the first kind of features.
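
    The candidate-selection step (nDSM = DSM - DEM, then a height threshold) is compact to sketch. The grid values and the 2.5 m threshold below are illustrative, not taken from the paper:

```python
def ndsm_candidates(dsm, dem, height_min=2.5):
    """Return (row, col, height) for cells whose normalized height
    DSM - DEM exceeds height_min ('building or tree' candidates)."""
    candidates = []
    for i, (dsm_row, dem_row) in enumerate(zip(dsm, dem)):
        for j, (s, e) in enumerate(zip(dsm_row, dem_row)):
            if s - e > height_min:
                candidates.append((i, j, s - e))
    return candidates

dsm = [[101.0, 108.5], [100.2, 112.0]]   # surface heights (m)
dem = [[100.0, 100.1], [100.0, 100.3]]   # bare-earth heights (m)
print([(i, j) for i, j, _ in ndsm_candidates(dsm, dem)])  # [(0, 1), (1, 1)]
```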

  17. Automated selection of computed tomography display parameters using neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Neu, Scott; Valentino, Daniel J.

    2001-07-01

    A collection of artificial neural networks (ANN's) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point by using the image pixels located on the horizontal and vertical lines that ran through the point. The neural networks were integrated into a computer software tool whose function is to select an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the software tool automatically adjusts the image display to optimally view the structure.

  18. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low positron production probability and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms and with NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust at any count level without requiring parameter tuning.
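
    The key idea, constraining the projections Ax rather than the voxels x, can be illustrated with a toy ADMM for least squares subject to Ax >= 0, splitting z = Ax and projecting z each iteration. This sketches only the splitting; the paper's algorithm works with the Poisson log-likelihood, and the 2-unknown solver here is hard-coded for brevity:

```python
def solve2(M, v):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def admm_nonneg_projection(A, b, rho=1.0, iters=500):
    """Minimize 0.5*||Ax - b||^2 subject to Ax >= 0 via ADMM (z = Ax split)."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[i][j] * A[i][k] for i in range(m)) for k in range(n)]
           for j in range(n)]
    x, z, u = [0.0] * n, [0.0] * m, [0.0] * m
    for _ in range(iters):
        # x-update: (1 + rho) A^T A x = A^T (b + rho*(z - u))
        w = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(m)]
        x = solve2(AtA, [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)])
        Ax = matvec(A, x)
        z = [max(v + ui, 0.0) for v, ui in zip(Ax, u)]    # project onto z >= 0
        u = [ui + v - zi for ui, v, zi in zip(u, Ax, z)]  # dual update
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [2.0, -1.0, 0.5]   # a "projection" that would go negative unconstrained
x = admm_nonneg_projection(A, b)
print([round(v, 3) for v in x])  # close to the constrained optimum [1.25, 0.0]
```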

  19. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools: it allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model on the known values of a subset of parameters, which makes it possible to predict unknown parameters from a model conditioned on the known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
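
    Conditioning a Gaussian mixture on known parameter values reduces, within each component, to the standard conditional-Gaussian formulas (XDGMM also reweights the components; that step is omitted here). A single-component, 2-D sketch with hypothetical host/supernova numbers:

```python
def condition_gaussian(mu, cov, j, value):
    """Condition a 2-D Gaussian on coordinate j taking `value`;
    return the conditional mean and variance of the other coordinate."""
    i = 1 - j
    cond_mu = mu[i] + cov[i][j] / cov[j][j] * (value - mu[j])
    cond_var = cov[i][i] - cov[i][j] ** 2 / cov[j][j]
    return cond_mu, cond_var

# Hypothetical: coordinate 0 is a host property, coordinate 1 an SN parameter.
mu = [0.0, 10.0]
cov = [[1.0, 0.8], [0.8, 4.0]]
cm, cv = condition_gaussian(mu, cov, 0, 2.0)
print(cm)  # 11.6: predicted SN parameter given the observed host value
```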

  20. The influence of pH adjustment on kinetics parameters in tapioca wastewater treatment using aerobic sequencing batch reactor system

    NASA Astrophysics Data System (ADS)

    Mulyani, Happy; Budianto, Gregorius Prima Indra; Margono, Kaavessina, Mujtahid

    2018-02-01

    The present investigation deals with an aerobic sequencing batch reactor system for tapioca wastewater treatment under varying influent pH conditions. The project was carried out to evaluate the effect of pH on the kinetics parameters of the system. This was done by operating the aerobic sequencing batch reactor system for 8 hours under several tapioca wastewater conditions (pH 4.91, pH 7, pH 8). The Chemical Oxygen Demand (COD) and Mixed Liquor Volatile Suspended Solids (MLVSS) of the system effluent at steady state were determined at two-hour intervals to generate data for substrate inhibition kinetics parameters. Values of the kinetics constants were determined using the Monod and Andrews models. No inhibition constant (Ki) was detected in any process variation of the aerobic sequencing batch reactor system for tapioca wastewater treatment in this study. Furthermore, pH 8 was selected as the preferred operating condition within the pH range investigated, owing to its kinetics parameter values of µmax = 0.010457/hour and Ks = 255.0664 mg/L COD.
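
    The Monod and Andrews forms used to extract these constants are one-liners; the defaults below are the pH 8 estimates quoted above (µmax in 1/h, Ks in mg/L COD), and the Andrews inhibition constant Ki is the term that was not detected in this study:

```python
def monod_rate(s, mu_max=0.010457, ks=255.0664):
    """Monod specific growth rate at substrate concentration s (mg/L COD)."""
    return mu_max * s / (ks + s)

def andrews_rate(s, mu_max, ks, ki):
    """Andrews model: Monod plus substrate inhibition through Ki."""
    return mu_max * s / (ks + s + s * s / ki)

# Defining property of Ks: at S = Ks the rate is half of mu_max.
print(monod_rate(255.0664))  # 0.0052285
```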

  1. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools: it allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  2. Variability induced by the MR imager in dynamic contrast-enhanced imaging of the prostate.

    PubMed

    Brunelle, S; Zemmour, C; Bratan, F; Mège-Lechevallier, F; Ruffion, A; Colombel, M; Crouzet, S; Sarran, A; Rouvière, O

    2018-04-01

    To evaluate the variability induced by the imager in discriminating high-grade (Gleason ≥ 7) prostate cancers (HGC) using dynamic contrast-enhanced MRI, we retrospectively selected 3T MRIs with temporal resolution < 10 seconds and comprising T1 mapping from a prospective radiologic-pathologic database of patients treated by prostatectomy. Ktrans, Kep, Ve and Vp were calculated for each lesion seen on MRI using the Weinmann arterial input function (AIF) and three patient-specific AIFs measured in the right and left iliac arteries, in pixels in the center of the lumen (psAIF-ST) or manually selected by two independent readers (psAIF-R1 and psAIF-R2). A total of 43 patients (mean age, 63.6±4.9 [SD]; range: 48-72 years) with 100 lesions on MRI (55 HGC) were selected. MRIs were performed on imager A (22 patients, 49 lesions) or B (21 patients, 51 lesions), from two different manufacturers. Using the Weinmann AIF, Kep (P=0.005), Ve (P=0.04) and Vp (P=0.01) significantly discriminated HGC. After adjusting on tissue classes, the imager significantly influenced the values of Kep (P=0.049) and Ve (P=0.007). Using patient-specific AIFs, Vp with psAIF-ST (P=0.008) and psAIF-R2 (P=0.04), and Kep with psAIF-R1 (P=0.03), significantly discriminated HGC. After adjusting on tissue classes, type of patient-specific AIF and side of measurement, the imager significantly influenced the values of Ktrans (P=0.0002), Ve (P=0.0072) and Vp (P=0.0003). For all AIFs, the diagnostic value of the pharmacokinetic parameters remained unchanged after adjustment on the imager, with stable odds ratios. The imager induced variability in the absolute values of the pharmacokinetic parameters but did not change their diagnostic performance. Copyright © 2018 Société française de radiologie. Published by Elsevier Masson SAS. All rights reserved.

  3. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    PubMed

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models showed no significant differences in predictive performance on either the training or test data sets (P > .05) and identified similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
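    The bootstrap-VIF-BIC loop described above can be sketched as follows. This is a minimal stand-in, not the authors' implementation: it substitutes ordinary least squares for ordinal logistic regression and an exhaustive subset search for the genetic algorithm, and the data are synthetic.

```python
import itertools
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)

def vif_filter(X, names, max_vif=5.0):
    """Iteratively drop the predictor with the highest variance inflation
    factor (VIF) until all VIFs fall below max_vif."""
    names = list(names)
    while X.shape[1] > 1:
        vifs = []
        for j in range(X.shape[1]):
            y = X[:, j]
            Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            vifs.append(1.0 / max(1.0 - r2, 1e-12))
        worst = int(np.argmax(vifs))
        if vifs[worst] < max_vif:
            break
        X = np.delete(X, worst, axis=1)
        names.pop(worst)
    return X, names

def bic(X, y):
    """BIC of a least-squares fit (a dependency-free stand-in for the
    ordinal logistic regression used in the paper)."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = ((y - X1 @ beta) ** 2).sum()
    return n * np.log(rss / n) + X1.shape[1] * np.log(n)

# Toy cohort: 200 patients, 5 candidate predictors, outcome driven by two
X = rng.normal(size=(200, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)      # near-collinear copy
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
names = [f"x{i}" for i in range(5)]

models = []
for _ in range(100):                                  # bootstrap iterations
    idx = rng.integers(0, len(y), len(y))
    Xb, nb = vif_filter(X[idx], names)
    # Exhaustive subset search (the paper uses a genetic algorithm here)
    subsets = (s for r in range(1, len(nb) + 1)
               for s in itertools.combinations(range(len(nb)), r))
    best = min(subsets, key=lambda s: bic(Xb[:, list(s)], y[idx]))
    models.append(frozenset(nb[i] for i in best))

agm = Counter(models).most_common(1)[0][0]            # most frequent model
```

    On this toy data, the VIF step removes one of the two collinear columns and the most frequent BIC-optimal model recovers the informative predictors.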

  4. The 57Fe Mössbauer parameters of pyrite and marcasite with different provenances

    USGS Publications Warehouse

    Evans, B.J.; Johnson, R.G.; Senftle, F.E.; Cecil, C.B.; Dulong, F.

    1982-01-01

    The Mössbauer parameters of pyrite and marcasite exhibit appreciable variations, which bear no simple relationship to the geological environment in which they occur but appear to be selectively influenced by impurities, especially arsenic, in the pyrite lattice. Quantitative and qualitative determinations of pyrite/marcasite mechanical mixtures are straightforward at 298 K and 77 K but require least-squares computer fitting and are limited to accuracies ranging from ±5 to ±15 per cent by uncertainties in the parameter values of the pure phases. The methodology and results of this investigation are directly applicable to coals, for which the presence and relative amounts of pyrite and marcasite could be of considerable genetic significance.

  5. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values.

    PubMed

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-10-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. © 2014 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc.
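    The sensitivity measure described above — the range of the back-calculated soil concentration when each parameter is varied over its plausible interval — can be sketched with a one-at-a-time sweep. The exposure equation and all parameter values below are a simplified, hypothetical stand-in for the USEPA Eco-SSL wildlife model, not the actual guidance values.

```python
def eco_ssl(trv, bw, fir, sir, baf, ab):
    """Soil concentration at which dietary dose equals the TRV, from a
    simplified wildlife exposure model (hypothetical parameterization):
        dose = (FIR * BAF * AB * C_soil + SIR * C_soil) / BW
    solved for C_soil at dose == TRV."""
    return trv * bw / (fir * baf * ab + sir)

# Nominal values and plausible ranges (illustrative, not USEPA's)
nominal = dict(trv=5.0, bw=0.02, fir=0.005, sir=0.0005, baf=0.3, ab=1.0)
ranges = dict(trv=(1.0, 20.0), bw=(0.015, 0.03), fir=(0.003, 0.008),
              sir=(0.0001, 0.002), baf=(0.1, 1.0), ab=(0.5, 1.0))

influence = {}
for p, (lo, hi) in ranges.items():
    vals = []
    for v in (lo, hi):
        args = dict(nominal)
        args[p] = v
        vals.append(eco_ssl(**args))
    influence[p] = abs(vals[1] - vals[0])   # range of the screening value

ranked = sorted(influence, key=influence.get, reverse=True)
```

    With the ranges chosen here, the TRV dominates and food bioavailability (ab) matters least, qualitatively echoing the abstract's finding; with different ranges the ordering would of course change.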

  6. Impact of Multileaf Collimator Configuration Parameters on the Dosimetric Accuracy of 6-MV Intensity-Modulated Radiation Therapy Treatment Plans.

    PubMed

    Petersen, Nick; Perrin, David; Newhauser, Wayne; Zhang, Rui

    2017-01-01

    The purpose of this study was to evaluate the impact of selected configuration parameters that govern multileaf collimator (MLC) transmission and rounded leaf offset in a commercial treatment planning system (TPS; Pinnacle³, Philips Medical Systems, Andover, MA, USA) on the accuracy of intensity-modulated radiation therapy (IMRT) dose calculation. The MLC leaf transmission factor was modified based on measurements made with ionization chambers. The table of parameters containing rounded-leaf-end offset values was modified by measuring the radiation field edge as a function of leaf bank position with an ionization chamber in a scanning water-tank dosimetry system and comparing the locations to those predicted by the TPS. The modified parameter values were validated by performing IMRT quality assurance (QA) measurements on 19 gantry-static IMRT plans. Planar dose measurements were performed with radiographic film and a diode array (MapCHECK2) and compared to TPS calculated dose distributions using default and modified configuration parameters. Based on measurements, the leaf transmission factor was changed from a default value of 0.001 to 0.005. Surprisingly, this modification resulted in a small but statistically significant worsening of IMRT QA gamma-index passing rate, which revealed that the overall dosimetric accuracy of the TPS depends on multiple configuration parameters in a manner that is coupled and not intuitive because of the commissioning protocol used in our clinic. The rounded leaf offset table had little room for improvement, with the average difference between the default and modified offset values being -0.2 ± 0.7 mm. While our results depend on the current clinical protocols, treatment unit and TPS used, the methodology used in this study is generally applicable. Different clinics could potentially obtain different results and improve their dosimetric accuracy using our approach.

  7. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values

    PubMed Central

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-01-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. Environ Toxicol Chem 2014;33:2386-2398. PMID:24944000

  8. The predictive value of haemodynamic parameters for outcome of deep venous reconstructions in patients with chronic deep vein obstruction - A systematic review.

    PubMed

    Kurstjens, Rlm; de Wolf, Maf; Kleijnen, J; de Graaf, R; Wittens, Cha

    2017-09-01

    Objective The aim of this study was to investigate the predictive value of haemodynamic parameters for the success of stenting or bypass surgery in patients with non-thrombotic or post-thrombotic deep venous obstruction. Methods EMBASE, MEDLINE and trial registries were searched up to 5 February 2016. Studies needed to investigate stenting or bypass surgery in patients with post-thrombotic obstruction or stenting for non-thrombotic iliac vein compression. Haemodynamic data needed to be available, with prognostic analysis for success of treatment. Two authors independently selected studies and extracted data, with risk of bias assessed using the Quality in Prognosis Studies tool. Results Two studies using stenting and two using bypass surgery were included. Three investigated plethysmography, though results varied and confounding was not properly taken into account. Dorsal foot vein pressure and venous refill times appeared to be of influence in one study, though confounding by deep vein incompetence was likely. Another study investigated femoral-central pressure gradients without finding statistical significance, though the sample size was small and no details on statistical methodology were given. Reduced femoral inflow was found to be a predictor of stent stenosis or occlusion in one study, though patients also received additional surgery to improve stent inflow. Data on the predictive value of haemodynamic parameters for stenting of non-thrombotic iliac vein compression were not available. Conclusions Data on the predictive value of haemodynamic parameters for success of treatment in deep venous obstructive disease are scant and of poor quality. Plethysmography does not seem to be of value in predicting the outcome of stenting or bypass surgery in post-thrombotic disease. The relevance of pressure-related parameters is unclear. Reduced flow into the common femoral vein seems to be predictive of in-stent stenosis or occlusion. Further research into the predictive effect of haemodynamic parameters is warranted, and the possibility of developing new techniques that evaluate various haemodynamic aspects should be explored.

  9. Development of a parameter optimization technique for the design of automatic control systems

    NASA Technical Reports Server (NTRS)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
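    The workflow described in this record — specify design parameters, then let a digital routine pick the values that minimize a model performance index — can be sketched as below. The second-order plant, the integral-squared-error index, and the grid search are illustrative assumptions; the report's actual performance index and minimization routine differ.

```python
import numpy as np

def step_response(zeta, wn, t):
    """Unit-step response of a second-order system
    G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2), for 0 < zeta < 1."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t)
                                           + zeta / np.sqrt(1.0 - zeta ** 2) * np.sin(wd * t))

t = np.linspace(0.0, 10.0, 1001)
model = step_response(0.7, 2.0, t)          # desired (model) step response

def performance_index(zeta, wn):
    """Integral-squared error between system and model responses,
    a simple stand-in for the report's model performance index."""
    err = step_response(zeta, wn, t) - model
    return float((err ** 2).sum() * (t[1] - t[0]))

# Coarse grid search over the design parameters (the report uses a
# digital minimization routine; a grid suffices for the sketch)
zetas = np.linspace(0.3, 0.95, 14)
wns = np.linspace(0.5, 4.0, 36)
best = min(((z, w) for z in zetas for w in wns),
           key=lambda p: performance_index(*p))
```

    Starting from the simplest configuration and adding complexity only when the minimized index still misses the specification mirrors the design philosophy the abstract describes.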

  10. Research on degradation of omethoate with Y2O3:Er3+ and TiO2

    NASA Astrophysics Data System (ADS)

    Liu, Zhiping; Mai, Yanling; Yan, Aiguo; Fan, Hailu; Yuan, Taidou

    2018-06-01

    Photocatalytic reagents that degrade pesticide residues under visible-light excitation are suitable not only for farmers but also for daily use by city residents. The up-conversion material Y2O3:Er3+ was prepared by a sol-gel method and then mixed with an anatase TiO2 sol solution to study the degradation of omethoate under visible light. To achieve higher degradability, the technological parameters must be studied. Four parameters were selected: the omethoate concentration on the vegetable surface, the photocatalytic degradation reagent dosage, the pH value, and the degradation time. All parameters were optimized using an orthogonal experimental design program. The results showed that the degradation rate was influenced most strongly by the omethoate concentration on the vegetable surface, followed by the degradation time.

  11. Realization of station for testing asynchronous three-phase motors

    NASA Astrophysics Data System (ADS)

    Wróbel, A.; Surma, W.

    2016-08-01

    Nowadays the construction and operation of machines cannot be imagined without electric motors [13-15]. The proposed station is designed for testing asynchronous three-phase motors. It consists of a tested motor and a second motor acting as a load, coupled by a mechanical clutch [2]. The load value is recorded by a measuring shaft fitted with a strain-gauge bridge. This design allows the basic parameters of the motors to be studied and the parameters of both vector- and scalar-controlled motors to be visualized under a varying drive-system load. In addition, it will be possible to record the varying physical parameters of a working electric motor, whether controlled by a frequency converter or by a contactor. The station is intended for both teaching and research in characterizing motors; it will also support the selection of inverter parameters.

  12. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG8020.12C as a low-temperature complement to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study was an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.
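    The D value mentioned above is the decimal reduction time: the exposure time needed for a tenfold drop in viable spores. A minimal computation (the numbers are illustrative, not from the study):

```python
import math

def d_value(t_minutes, n0, n):
    """Decimal reduction time D = t / log10(N0 / N): exposure time per
    1-log10 reduction, assuming first-order (log-linear) inactivation."""
    return t_minutes / math.log10(n0 / n)

# A 3-log reduction (1e6 -> 1e3 spores) over 30 minutes gives D = 10 min
d = d_value(30.0, 1e6, 1e3)
```

    Enumerating survivors rather than sterilizing completely, as the experiments were designed to do, is what makes such D-value estimates statistically meaningful.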

  13. Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcini, Alessandro; Barland, Stephane

    2006-12-15

    The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic values of the parameters, are investigated below the solitary laser threshold, where low-frequency fluctuations (LFF's) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered as realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor α, reproduces the LFF's only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSEL's can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high α values.

  14. Kinetic approach to the study of froth flotation applied to a lepidolite ore

    NASA Astrophysics Data System (ADS)

    Vieceli, Nathália; Durão, Fernando O.; Guimarães, Carlos; Nogueira, Carlos A.; Pereira, Manuel F. C.; Margarido, Fernanda

    2016-07-01

    The number of published studies related to the optimization of lithium extraction from low-grade ores has increased as the demand for lithium has grown. However, no study related to the kinetics of the concentration stage of lithium-containing minerals by froth flotation has yet been reported. To establish a factorial design of batch flotation experiments, we conducted a set of kinetic tests to determine the most selective alternative collector, define a range of pulp pH values, and estimate a near-optimum flotation time. Both collectors (Aeromine 3000C and Armeen 12D) provided the required flotation selectivity, although this selectivity was lost in the case of pulp pH values outside the range between 2 and 4. Cumulative mineral recovery curves were used to adjust a classical kinetic model that was modified with a non-negative parameter representing a delay time. The computation of the near-optimum flotation time as the maximizer of a separation efficiency (SE) function must be performed with caution. We instead propose to define the near-optimum flotation time as the time interval required to achieve 95%-99% of the maximum value of the SE function.
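    The delay-modified kinetic model and the 95%-99% rule for the near-optimum flotation time can be sketched as follows. The rate constants, recoveries, and delay time are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def recovery(t, r_inf, k, tau):
    """Classical first-order flotation kinetics with a delay time tau:
    R(t) = R_inf * (1 - exp(-k * (t - tau))) for t >= tau, else 0."""
    s = np.maximum(t - tau, 0.0)
    return np.where(t >= tau, r_inf * (1.0 - np.exp(-k * s)), 0.0)

t = np.linspace(0.0, 20.0, 2001)                     # flotation time, minutes
lep = recovery(t, r_inf=0.90, k=0.60, tau=0.5)       # valuable mineral (illustrative)
gan = recovery(t, r_inf=0.40, k=0.15, tau=0.5)       # gangue (illustrative)

se = lep - gan                                       # separation efficiency
t_max = t[np.argmax(se)]                             # maximizer of SE
t_95 = t[np.argmax(se >= 0.95 * se.max())]           # first time at 95% of max SE
```

    Because the SE curve is flat near its maximum, the 95% criterion gives a substantially shorter flotation time than the maximizer at little cost in selectivity, which is the caution the abstract raises.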

  15. Photovoltaic efficiency of intermediate band solar cells based on CdTe/CdMnTe coupled quantum dots

    NASA Astrophysics Data System (ADS)

    Prado, Silvio J.; Marques, Gilmar E.; Alcalde, Augusto M.

    2017-11-01

    In this work we show the calculation of optimized efficiencies of intermediate band solar cells (IBSCs) based on Mn-doped II-VI CdTe/CdMnTe coupled quantum dot (QD) structures. We focus our attention on the combined effects of geometrical and Mn-doping parameters on optical properties and solar cell efficiency. In the framework of k·p theory, we accomplish detailed calculations of electronic structure, transition energies, optical selection rules and their corresponding intra- and interband oscillator strengths. With these results and by following the intermediate band model, we have developed a strategy which allows us to find optimal photovoltaic efficiency values. We also show that the effects of band admixture which can lead to degradation of optical transitions and reduction of efficiency can be partly minimized by a careful selection of the structural parameters and Mn-concentration. Thus, the improvement of band engineering is mandatory for any practical implementation of QD systems as IBSC hardware. Finally, our calculations show that it is possible to reach significant efficiency, up to ∼26%, by selecting a restricted space of parameters such as quantum dot size and shape and Mn-concentration effects, to improve the modulation of optical absorption in the structures.

  16. Photovoltaic efficiency of intermediate band solar cells based on CdTe/CdMnTe coupled quantum dots.

    PubMed

    Prado, Silvio J; Marques, Gilmar E; Alcalde, Augusto M

    2017-11-08

    In this work we show the calculation of optimized efficiencies of intermediate band solar cells (IBSCs) based on Mn-doped II-VI CdTe/CdMnTe coupled quantum dot (QD) structures. We focus our attention on the combined effects of geometrical and Mn-doping parameters on optical properties and solar cell efficiency. In the framework of k·p theory, we accomplish detailed calculations of electronic structure, transition energies, optical selection rules and their corresponding intra- and interband oscillator strengths. With these results and by following the intermediate band model, we have developed a strategy which allows us to find optimal photovoltaic efficiency values. We also show that the effects of band admixture which can lead to degradation of optical transitions and reduction of efficiency can be partly minimized by a careful selection of the structural parameters and Mn-concentration. Thus, the improvement of band engineering is mandatory for any practical implementation of QD systems as IBSC hardware. Finally, our calculations show that it is possible to reach significant efficiency, up to ∼26%, by selecting a restricted space of parameters such as quantum dot size and shape and Mn-concentration effects, to improve the modulation of optical absorption in the structures.

  17. Assessing the impact of a targeted plyometric training on changes in selected kinematic parameters of the swimming start.

    PubMed

    Rejman, Marek; Bilewski, Marek; Szczepan, Stefan; Klarowicz, Andrzej; Rudnik, Daria; Maćkała, Krzysztof

    2017-01-01

    The aim of this study was to analyse changes in selected kinematic parameters of the swimming start after a six-week plyometric training programme, on the assumption that take-off power training improves start effectiveness. The experiment included nine male swimmers. In the pre-test the swimmers performed three starts, focusing on the best performance. Next, a plyometric training programme, adapted from sprint running, was introduced in order to increase the power of the lower extremities. The programme entailed 75-minute sessions conducted twice a week. Afterwards, a post-test was performed, analogous to the pre-test. Spatio-temporal data on the swimming start were gathered from video recordings of the swimmer above and under water. The stimuli provided by the plyometric training contributed to a shorter start time (the main measure of start effectiveness) and glide time, as well as to higher average take-off, flight and glide velocities, including instantaneous take-off, entry and glide velocities. The glide angle decreased. The changes in selected parameters of the swimming start, together with their confirmed diagnostic value, identified the areas susceptible to plyometric training and suggested that a plyometric training programme aimed at increasing take-off power enhances the effectiveness of the swimming start.

  18. Effects of reaction-kinetic parameters on modeling reaction pathways in GaN MOVPE growth

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Zuo, Ran; Zhang, Guoyi

    2017-11-01

    In the modeling of the reaction-transport process in GaN MOVPE growth, the selection of kinetic parameters (activation energy Ea and pre-exponential factor A) for gas reactions is quite uncertain, which causes uncertainties in both the gas reaction path and the growth rate. In this study, numerical modeling of the reaction-transport process for GaN MOVPE growth in a vertical rotating disk reactor is conducted with varying kinetic parameters for the main reaction paths. By comparing the molar concentrations of the major Ga-containing species and the growth rates, the effects of kinetic parameters on gas reaction paths are determined. The results show that, depending on the values of the kinetic parameters, the gas reaction path may be dominated either by the adduct/amide formation path, or by the TMG pyrolysis path, or by both. Although the reaction path varies with different kinetic parameters, the predicted growth rates change only slightly, because the total transport rate of Ga-containing species to the substrate changes slightly with reaction paths. This explains why previous authors using different chemical models predicted growth rates close to the experimental values. By varying the pre-exponential factor for the amide trimerization, it is found that the more trimers are formed, the further the predicted growth rates fall below the experimental value, which indicates that trimers are poor growth precursors because of the thermal diffusion effect caused by the high temperature gradient. The effective order of the contribution of major species to the growth rate is: pyrolysis species > amides > trimers. The study also shows that radical reactions have little effect on the gas reaction path because of the generation and depletion of H radicals in the chain reactions when NH2 is considered the end species.
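    The sensitivity of path dominance to Ea and A follows directly from the Arrhenius form of the rate constants. The parameter sets below for the two competing channels are hypothetical, chosen only to illustrate how a modest shift in activation energy can flip which path dominates; they are not literature values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in K."""
    return A * np.exp(-Ea / (R * T))

T = 1300.0  # K, a typical MOVPE growth-zone temperature (illustrative)

# Hypothetical parameter sets for the two competing channels:
k_adduct = arrhenius(A=1e13, Ea=80e3, T=T)       # adduct/amide formation path
k_pyrolysis = arrhenius(A=1e15, Ea=160e3, T=T)   # TMG pyrolysis path
ratio = k_adduct / k_pyrolysis

# Lowering the pyrolysis Ea by 40 kJ/mol flips the dominant path
k_pyrolysis_low = arrhenius(A=1e15, Ea=120e3, T=T)
```

    At 1300 K a 40 kJ/mol change in Ea rescales the rate constant by exp(40e3/(R*T)) ≈ 40, comfortably within the spread of published kinetic parameters, which is why different chemistry sets can predict different dominant paths yet similar growth rates.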

  19. Programmable Gain Amplifiers with DC Suppression and Low Output Offset for Bioelectric Sensors

    PubMed Central

    Carrera, Albano; de la Rosa, Ramón; Alonso, Alonso

    2013-01-01

    DC-offset and DC-suppression are key parameters in bioelectric amplifiers. However, specific DC analyses are not often explained. Several factors influence the DC budget: the programmable gain, the programmable cut-off frequencies for high-pass filtering, the low cut-off values, and the capacitor-blocking issues involved. A new intermediate stage is proposed to address the DC problem entirely. Two implementations were tested. The stage is composed of a programmable gain amplifier (PGA) with DC rejection and low output offset. Cut-off frequencies are selectable, and values from 0.016 to 31.83 Hz were tested; the capacitor deblocking is embedded in the design. Hence, this PGA delivers most of the required gain with constant low output offset, notwithstanding the gain or cut-off frequency selected. PMID:24084109

  20. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
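    The core idea — fit an importance density to posterior samples, then integrate prior-times-likelihood against it — can be demonstrated on a conjugate toy problem where the evidence is known in closed form. This sketch simplifies GMIS in two ways it should not be mistaken for: it uses a single Gaussian rather than a mixture, plain importance sampling rather than bridge sampling, and exact posterior draws rather than DREAM output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model with a known answer: y_i ~ N(theta, 1), prior theta ~ N(0, 1)
y = rng.normal(0.5, 1.0, size=20)
n = len(y)

def log_lik(theta):
    theta = np.atleast_1d(theta)
    return (-0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
            - 0.5 * n * np.log(2 * np.pi))

def log_prior(theta):
    theta = np.atleast_1d(theta)
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

# Stand-in for DREAM: the exact posterior is Gaussian for this model
post = rng.normal(y.sum() / (n + 1), np.sqrt(1.0 / (n + 1)), size=5000)

# Importance density fitted to the posterior draws, tails slightly inflated
mu, sd = post.mean(), 1.2 * post.std()
prop = rng.normal(mu, sd, size=50_000)
log_q = -0.5 * ((prop - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
log_w = log_lik(prop) + log_prior(prop) - log_q
log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Analytic log-evidence for this conjugate model, for comparison
log_Z_true = -0.5 * (n * np.log(2 * np.pi) + np.log(n + 1.0)
                     + (y ** 2).sum() - y.sum() ** 2 / (n + 1))
```

    Because the importance density closely matches the posterior, the weights are nearly constant and the estimator has very low variance; the mixture and bridge-sampling machinery of GMIS is what makes the same idea work for multimodal, high-dimensional posteriors.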

  1. Spirotetramat Resistance Selected in the Phenacoccus solenopsis (Homoptera: Pseudococcidae): Cross-Resistance Patterns, Stability, and Fitness Costs Analysis.

    PubMed

    Ejaz, Masood; Ali Shad, Sarfraz

    2017-06-01

    Phenacoccus solenopsis Tinsley (Homoptera: Pseudococcidae) is a major agricultural and horticultural pest of crops throughout the world. To develop a better resistance management strategy for P. solenopsis, we conducted a study on life history parameters of different populations of this pest: one selected with spirotetramat (Spiro-SEL), an unselected (UNSEL) population, and their reciprocal crosses. We also studied the cross-resistance and the stability of spirotetramat resistance. The Spiro-SEL population of P. solenopsis exhibited 328.69-fold resistance compared to the susceptible population (Lab-PK). The Spiro-SEL population also displayed a moderate level of cross-resistance to profenofos and bifenthrin and a high level of cross-resistance to abamectin. Resistance to spirotetramat in Spiro-SEL was unstable in the absence of selection. The study of life history parameters showed a significant reduction in fitness parameters of the Spiro-SEL population, with a relative fitness value of 0.14. There was a significant decrease in survival rate, pupal weight, fecundity, egg hatching percentage, male and female generation time, intrinsic rate of population increase of males and females, biotic potential, and mean relative growth rate. It is concluded that selection with spirotetramat had a marked effect on resistance development in P. solenopsis, and upon removal of selection pressure spirotetramat resistance declined significantly, indicating unstable resistance. Development of resistance led to high fitness costs for the spirotetramat-selected population. Our study may provide basic information on spirotetramat resistance and its mechanism to help develop resistance management strategies. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Volume and mass distribution in selected families of asteroids

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, I.; Leliwa-Kopystynski, J.

    2014-07-01

    Members of five asteroid families (Vesta, Eos, Eunomia, Koronis, and Themis) were identified using the Hierarchical Clustering Method (HCM) for a data set containing 292,003 numbered asteroids. The influence of the choice of the best value of the parameter v_{cut}, which controls the distances between asteroids in the proper-elements space a, e, i, was investigated with a step as small as 1 m/s. Results are given in a set of figures showing the families on the planes (a, e), (a, i), and (e, i). Results are also presented in relation to the secular resonances in the asteroids' motion with the giant planets, mostly with Saturn. Relations among asteroid radius, albedo, and absolute magnitude allow us to calculate the volumes of individual members of an asteroid family; after summation, the volumes of the parent bodies of the families were found. This paper presents the possibility and the first results of using a combined method for asteroid family identification based on the following items: (i) the parameter v_{cut} is established with precision as high as 1 m/s; (ii) the albedo (if available) of potential members is considered for approving or rejecting family membership; (iii) a color classification is used for the same purpose as well. The search for the most reliable parameter values for the family populations was performed by means of consecutive applications of the HCM with increasing parameter v_{cut}. The results are illustrated in the figure. Increasing v_{cut} in steps as small as 1 m/s allowed us to observe the computational strength of the HCM: the critical value of the parameter v_{cut} (see the breaking points of the plots in the figure) separates the assemblage of potential family members from 'an ocean' of background asteroids that are not related to the family. The critical values of v_{cut} vary from 57 m/s for the Vesta family to 92 m/s for the Eos family. If the parameter v_{cut} surpasses its critical value, the number of HCM-discovered family members increases enormously and without any physical justification.
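
    The breaking-point criterion described above lends itself to a simple numerical check. Below is a minimal, hypothetical sketch (the function name and toy sweep are invented, not from the paper) of locating the critical v_cut from a 1 m/s HCM sweep: the critical value is the last step before the member count jumps toward the background population.

```python
# Hypothetical sketch: locate the critical v_cut from an HCM sweep.
# member_counts maps v_cut (m/s) to the number of family members found;
# the critical value is the last v_cut before the count first jumps by
# more than a chosen factor between consecutive 1 m/s steps.

def critical_vcut(member_counts, jump_factor=2.0):
    """member_counts: dict {v_cut: n_members}, swept in 1 m/s steps."""
    cuts = sorted(member_counts)
    for lo, hi in zip(cuts, cuts[1:]):
        if member_counts[hi] > jump_factor * member_counts[lo]:
            return lo  # last v_cut before the family merges with the background
    return cuts[-1]

# Toy sweep: counts grow slowly, then explode past the critical value.
sweep = {55: 4100, 56: 4200, 57: 4300, 58: 15000, 59: 60000}
print(critical_vcut(sweep))  # -> 57
```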

  3. Method And Apparatus For Two Dimensional Surface Property Analysis Based On Boundary Measurement

    DOEpatents

    Richardson, John G.

    2005-11-15

    An apparatus and method for determining properties of a conductive film is disclosed. A plurality of probe locations selected around a periphery of the conductive film define a plurality of measurement lines between each probe location and all other probe locations. Electrical resistance may be measured along each of the measurement lines. A lumped parameter model may be developed based on the measured values of electrical resistance. The lumped parameter model may be used to estimate resistivity at one or more selected locations encompassed by the plurality of probe locations. The resistivity may be extrapolated to other physical properties if the conductive film includes a correlation between resistivity and the other physical properties. A profile of the conductive film may be developed by determining resistivity at a plurality of locations. The conductive film may be applied to a structure such that resistivity may be estimated and profiled for the structure's surface.

  4. On fitting the Pareto-Levy distribution to stock market index data: Selecting a suitable cutoff value

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.

    2005-08-01

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
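
    A minimal sketch of such a discrepancy-based cutoff selection follows (not the authors' exact statistic: the maximum-likelihood exponent estimate and the Kolmogorov-Smirnov distance used here are common stand-ins for "a statistical measure of discrepancy"):

```python
import math

def fit_tail(data, xmin):
    """Fit a power-law tail above cutoff xmin and return the estimated
    exponent together with the KS discrepancy between the empirical
    tail CDF and the fitted Pareto CDF."""
    tail = sorted(x for x in data if x >= xmin)
    n = len(tail)
    # Maximum-likelihood exponent for a continuous power law
    alpha = 1.0 + n / sum(math.log(x / xmin) for x in tail)
    # Kolmogorov-Smirnov distance: max gap between empirical and model CDF
    ks = max(abs(i / n - (1.0 - (xmin / x) ** (alpha - 1.0)))
             for i, x in enumerate(tail, start=1))
    return alpha, ks

def select_cutoff(data, candidates):
    """Objective cutoff choice: minimize the empirical/model discrepancy."""
    return min(candidates, key=lambda xmin: fit_tail(data, xmin)[1])
```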

  5. Why anthropic reasoning cannot predict Lambda.

    PubMed

    Starkman, Glenn D; Trotta, Roberto

    2006-11-17

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.

  6. Forming of film surface of very viscous liquid flowing with gas in pipes

    NASA Astrophysics Data System (ADS)

    Czernek, Krystian; Witczak, Stanisław

    2017-10-01

    The study presents the possible use of an optoelectronic system for the measurement of quantities specific to the hydrodynamics of two-phase gas-liquid flow in vertical pipes, in which a very-high-viscosity liquid forms a falling film. The experimental method is described, and the findings are presented and analysed for selected quantities that characterize the two-phase flow. An attempt was also made to evaluate the effects of flow parameters and liquid properties on the gas-liquid interfacial area, which is decisive for the conditions of heat exchange and mass transfer in falling-film equipment. The nature and form of the waves created at various velocities are also described.

  7. Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra

    NASA Astrophysics Data System (ADS)

    Khamatnurova, Marina; Gribanov, Konstantin

    2016-04-01

    The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. Retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. The modified software package FIRE-ARMS [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS Data // JQSRT. 2001. V. 68, No. 4. P. 435-451. 4. http://www.ecmwf.int
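
    The iteratively adjusted damping parameter at the heart of Levenberg-Marquardt can be illustrated with a toy curve fit (a sketch under invented data, not the authors' retrieval code): the damping term lam is halved after a step that reduces the cost and doubled after a rejected step.

```python
import math

def lm_fit(ts, ys, a, b, lam=1e-3, iters=100):
    """Fit y = a*exp(b*t) by Levenberg-Marquardt with an iteratively
    adjusted damping parameter lam (toy two-parameter sketch)."""
    def residuals(a, b):
        return [y - a * math.exp(b * t) for t, y in zip(ts, ys)]
    def cost(a, b):
        return sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian of the model with respect to (a, b)
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        # Damped normal equations: (J^T J + lam*I) delta = J^T r
        g11 = sum(j[0] * j[0] for j in J) + lam
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J) + lam
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * b1 - g12 * b2) / det
        db = (g11 * b2 - g12 * b1) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam * 0.5   # accept step, trust more
        else:
            lam *= 2.0                               # reject step, damp harder
    return a, b
```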

  8. On Nb Silicide Based Alloys: Alloy Design and Selection.

    PubMed

    Tsakiropoulos, Panos

    2018-05-18

    The development of Nb-silicide based alloys is frustrated by the lack of composition-process-microstructure-property data for the new alloys, and by the shortage of and/or disagreement between thermodynamic data for key binary and ternary systems that are essential for designing (selecting) alloys to meet property goals. Recent publications have discussed the importance of the parameters δ (related to atomic size), Δχ (related to electronegativity) and valence electron concentration (VEC) (number of valence electrons per atom filled into the valence band) for the alloying behavior of Nb-silicide based alloys (J Alloys Compd 748 (2018) 569), their solid solutions (J Alloys Compd 708 (2017) 961), the tetragonal Nb₅Si₃ (Materials 11 (2018) 69), and hexagonal C14-NbCr₂ and cubic A15-Nb₃X phases (Materials 11 (2018) 395) and eutectics with Nb ss and Nb₅Si₃ (Materials 11 (2018) 592). The parameter values were calculated using actual compositions for alloys, their phases and eutectics. This paper is about the relationships that exist between the alloy parameters δ, Δχ and VEC, and creep rate and isothermal oxidation (weight gain) and the concentrations of solute elements in the alloys. Different approaches to alloy design (selection) that use property goals and these relationships for Nb-silicide based alloys are discussed and examples of selected alloy compositions and their predicted properties are given. The alloy design methodology, which has been called NICE (Niobium Intermetallic Composite Elaboration), enables one to design (select) new alloys and to predict their creep and oxidation properties and the macrosegregation of Si in cast alloys.

  10. Effects of the sitting position on the body posture of children aged 11 to 13 years.

    PubMed

    Drzał-Grabiec, Justyna; Snela, Sławomir; Rykała, Justyna; Podgórska, Justyna; Rachwał, Maciej

    2015-01-01

    Nowadays, children spend increasingly more time in a seated position, both at school during class and at home in front of a computer or television. The aim of this study was to compare selected parameters describing body posture and scoliosis among children in sitting and standing positions. It was an observational, cross-sectional study involving 91 primary school children aged 11-13 years. The children's backs were photographed in standing and sitting positions. The values of selected parameters were calculated using photogrammetric examination based on the Moire projection phenomenon. The results show significant statistical differences for the parameters defining the anteroposterior curves of the spine. The sitting position resulted in a decreased angle of inclination of the thoracolumbar spine, reduced depths of thoracic kyphosis and lumbar lordosis, and pelvic asymmetry. Maintaining a sitting position for a long time results in advanced asymmetries of the trunk and scoliosis, and causes a decrease in lumbar lordosis and kyphosis of a child's entire spine. Therefore, we advocate the introduction of posture education programs for schoolchildren.

  11. Semi-automated segmentation of neuroblastoma nuclei using the gradient energy tensor: a user driven approach

    NASA Astrophysics Data System (ADS)

    Kromp, Florian; Taschner-Mandl, Sabine; Schwarz, Magdalena; Blaha, Johanna; Weiss, Tamara; Ambros, Peter F.; Reiter, Michael

    2015-02-01

    We propose a user-driven method for the segmentation of neuroblastoma nuclei in microscopic fluorescence images involving the gradient energy tensor. Multispectral fluorescence images contain intensity and spatial information about antigen expression, fluorescence in situ hybridization (FISH) signals and nucleus morphology. The latter serves as the basis for the detection of single cells and the calculation of shape features, which are used to validate the segmentation and to reject false detections. Accurate segmentation is difficult due to varying staining intensities and aggregated cells. It requires several (meta-)parameters, which have a strong influence on the segmentation results and have to be selected carefully for each sample (or group of similar samples) by user interaction. Because our method is designed for clinicians and biologists, who may have only a limited image processing background, an interactive parameter selection step allows the implicit tuning of parameter values. With this simple but intuitive method, segmentation results with high precision for a large number of cells can be achieved with minimal user interaction. The strategy was validated on hand-segmented datasets of three neuroblastoma cell lines.

  12. Evolutionary algorithm for vehicle driving cycle generation.

    PubMed

    Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott

    2011-09-01

    Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
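
    The microtrip-concatenation idea can be sketched as a toy evolutionary algorithm (the data, gene length, and single-target fitness below are invented for illustration; the actual utility matches several cycle parameters at once, not just average speed):

```python
import random

def build_cycle(microtrips, target_speed, pop=30, gens=40, seed=1):
    """Toy evolutionary sketch: choose a fixed-length sequence of microtrips
    whose overall average speed matches a target. Each microtrip is a
    (distance_miles, duration_hours) pair; genes are microtrip indices."""
    rng = random.Random(seed)
    n_genes = 6

    def avg_speed(chrom):
        d = sum(microtrips[i][0] for i in chrom)
        t = sum(microtrips[i][1] for i in chrom)
        return d / t

    def error(chrom):
        return abs(avg_speed(chrom) - target_speed)

    popn = [[rng.randrange(len(microtrips)) for _ in range(n_genes)]
            for _ in range(pop)]
    best = min(popn, key=error)
    for _ in range(gens):
        # Roulette-wheel selection on inverse error, with elitist strategy
        weights = [1.0 / (1e-9 + error(c)) for c in popn]
        new = [best[:]]  # elitism: the best individual survives unchanged
        while len(new) < pop:
            p1, p2 = rng.choices(popn, weights=weights, k=2)
            cut = rng.randrange(1, n_genes)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                    # mutation
                child[rng.randrange(n_genes)] = rng.randrange(len(microtrips))
            new.append(child)
        popn = new
        best = min(popn, key=error)
    return best, error(best)
```

    With elitism, the best error is non-increasing across generations, which is why the strategy reliably approaches the prescribed cycle parameters.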

  13. Study of TLIPSS formation on different metals and alloys and their selective etching

    NASA Astrophysics Data System (ADS)

    Dostovalov, Alexandr V.; Korolkov, Victor P.; Terentiev, Vadim S.; Okotrub, Konstantin A.; Dultsev, Fedor N.; Nemykin, Anton; Babin, Sergey A.

    2017-02-01

    An experimental investigation of the formation of thermochemical laser-induced periodic surface structures (TLIPSS) on metal films (Ti, Cr, Ni, NiCr) under different processing conditions is presented. The hypothesis that TLIPSS formation depends significantly on the parabolic rate constant for oxide thin-film growth is discussed. Evidently, the low value of this parameter for Ni is the reason for the absence of TLIPSS on Ni and on NiCr films with low Cr content. The effect of simultaneous ablative (with period ≈λ) and thermochemical (with period ≈λ) LIPSS formation was observed. The formation of structures after selective etching of TLIPSS was demonstrated.

  14. Escherichia coli promoter sequences predict in vitro RNA polymerase selectivity.

    PubMed

    Mulligan, M E; Hawley, D K; Entriken, R; McClure, W R

    1984-01-11

    We describe a simple algorithm for computing a homology score for Escherichia coli promoters based on DNA sequence alone. The homology score was related to 31 values, measured in vitro, of RNA polymerase selectivity, which we define as the product KBk2, the apparent second-order rate constant for open complex formation. We found that promoter strength could be predicted to within a factor of ±4.1 in KBk2 over a range of 10^4 in the same parameter. The quantitative evaluation was linked to an automated (Apple II) procedure for searching and evaluating possible promoters in DNA sequence files.
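
    The flavor of a sequence-only homology score can be sketched as below (purely illustrative: the published algorithm's weighting of the -35 and -10 hexamers and of their spacing differs from this naive match count):

```python
# Illustrative consensus-matching score in the spirit of a promoter
# homology score. The E. coli sigma-70 consensus hexamers are well known;
# the scoring scheme itself is a simplification, not the paper's method.
MINUS35, MINUS10 = "TTGACA", "TATAAT"

def homology_score(seq35, seq10):
    """Fraction of positions in two candidate hexamers that match the
    -35 and -10 consensus sequences (1.0 = perfect consensus promoter)."""
    m = sum(a == b for a, b in zip(seq35, MINUS35))
    m += sum(a == b for a, b in zip(seq10, MINUS10))
    return m / 12.0
```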

  15. A Constraint-Based Planner for Data Production

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Golden, Keith

    2005-01-01

    This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. The algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and the actions to achieve them, and the inner backtracking searches inside the subproblem associated with a selected action to find action parameter values. We show that this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
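
    The two nested backtracking levels can be illustrated with a toy sketch (the domain, data structures, and names here are invented, not the planner's actual API): failure to bind parameters at the inner level backtracks to try another action at the outer level.

```python
def plan(subgoals, actions, params, valid):
    """Two-level backtracking sketch for a hypothetical toy domain:
    the outer level picks an action per subgoal; the inner level searches
    parameter values for that action; inner failure backtracks outward."""
    def bind(action, value_domains, chosen):
        if not value_domains:                # all parameters bound
            return chosen
        for v in value_domains[0]:
            if valid(action, chosen + [v]):  # inner consistency check
                done = bind(action, value_domains[1:], chosen + [v])
                if done is not None:
                    return done
        return None                          # inner backtrack

    def solve(i, steps):
        if i == len(subgoals):
            return steps
        for a in actions[subgoals[i]]:       # outer choice of action
            binding = bind(a, params[a], [])
            if binding is not None:
                done = solve(i + 1, steps + [(a, binding)])
                if done is not None:
                    return done
        return None                          # outer backtrack

    return solve(0, [])
```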

  16. Health-related quality-of-life parameters as independent prognostic factors in advanced or metastatic bladder cancer.

    PubMed

    Roychowdhury, D F; Hayden, A; Liepa, A M

    2003-02-15

    This retrospective analysis examined prognostic significance of health-related quality-of-life (HRQoL) parameters combined with baseline clinical factors on outcomes (overall survival, time to progressive disease, and time to treatment failure) in bladder cancer. Outcome and HRQoL (European Organization for Research and Treatment of Cancer Quality of Life Questionnaire C30) data were collected prospectively in a phase III study assessing gemcitabine and cisplatin versus methotrexate, vinblastine, doxorubicin, and cisplatin in locally advanced or metastatic bladder cancer. Prespecified baseline clinical factors (performance status, tumor-node-metastasis staging, visceral metastases [VM], alkaline phosphatase [AP] level, number of metastatic sites, prior radiotherapy, disease measurability, sex, time from diagnosis, and sites of disease) and selected HRQoL parameters (global QoL; all functional scales; symptoms: pain, fatigue, insomnia, dyspnea, anorexia) were evaluated using Cox's proportional hazards model. Factors with individual prognostic value (P <.05) on outcomes in univariate models were assessed for joint prognostic value in a multivariate model. A final model was developed using a backward selection strategy. Patients with baseline HRQoL were included (364 of 405, 90%). The final model predicted longer survival with low/normal AP levels, no VM, high physical functioning, low role functioning, and no anorexia. Positive prognostic factors for time to progressive disease were good performance status, low/normal AP levels, no VM, and minimal fatigue; for time to treatment failure, they were low/normal AP levels, minimal fatigue, and no anorexia. Global QoL was a significant predictor of outcome in univariate analyses but was not retained in the multivariate model. HRQoL parameters are independent prognostic factors for outcome in advanced bladder cancer; their prognostic importance needs further evaluation.

  17. Quantifying Effects of Pharmacological Blockers of Cardiac Autonomous Control Using Variability Parameters.

    PubMed

    Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A

    2017-01-01

    Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through the paired Wilcoxon test for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN), and the root mean square of successive differences (RMSSD) from the time domain analysis. In the frequency domain analysis the very-low-frequency (VLF) component was selected. From symbolic dynamics, the Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the non-linear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
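
    Fisher's method, used above to combine per-drug test results, reduces to a chi-square tail probability with 2k degrees of freedom, which has a closed form for even degrees of freedom (a generic sketch, not the authors' code):

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: X = -2*sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the null. For even
    dof the survival function has a closed form, so no statistics
    library is needed."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # X / 2
    term, total = 1.0, 1.0
    for i in range(1, k):                       # sum_{i=0}^{k-1} half^i / i!
        term *= half / i
        total += term
    return math.exp(-half) * total
```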

  18. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. 
Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
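
    The weighted least-squares objective and the Gauss-Newton update that the Parameter-Estimation Process minimizes can be sketched for a single parameter (a schematic illustration of the method, not MODFLOW code; names are invented):

```python
def weighted_ssq(obs, sim, weights):
    """Weighted least-squares objective: sum of squared weighted residuals."""
    return sum(w * (o - s) ** 2 for o, s, w in zip(obs, sim, weights))

def gauss_newton_1p(p, simulate, sensitivity, obs, weights, iters=20):
    """One-parameter Gauss-Newton iteration: p += (J^T W r) / (J^T W J),
    where J is the vector of observation sensitivities d(sim_i)/dp."""
    for _ in range(iters):
        r = [o - s for o, s in zip(obs, simulate(p))]
        J = sensitivity(p)
        num = sum(w * j * ri for w, j, ri in zip(weights, J, r))
        den = sum(w * j * j for w, j in zip(weights, J))
        p += num / den
    return p
```

    For a model that is linear in the parameter, the update reaches the weighted least-squares minimum in a single iteration; nonlinear models require the iterative procedure (and, in MODFLOW-2000, the modifications to plain Gauss-Newton).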

  19. The evolution of phenotypes and genetic parameters under preferential mating

    PubMed Central

    Roff, Derek A; Fairbairn, Daphne J

    2014-01-01

    This article extends and adds more realism to Lande's analytical model for evolution under mate choice by using individual-based simulations in which females sample a finite number of males and the genetic architecture of the preference and preferred trait evolves. The simulations show that the equilibrium heritabilities of the preference and preferred trait and the genetic correlation between them (rG), depend critically on aspects of the mating system (the preference function, mode of mate choice, choosiness, and number of potential mates sampled), the presence or absence of natural selection on the preferred trait, and the initial genetic parameters. Under some parameter combinations, preferential mating increased the heritability of the preferred trait, providing a possible resolution for the lek paradox. The Kirkpatrick–Barton approximation for rG proved to be biased downward, but the realized genetic correlations were also low, generally <0.2. Such low values of rG indicate that coevolution of the preference and preferred trait is likely to be very slow and subject to significant stochastic variation. Lande's model accurately predicted the incidence of runaway selection in the simulations, except where preferences were relative and the preferred trait was subject to natural selection. In these cases, runaways were over- or underestimated, depending on the number of males sampled. We conclude that rapid coevolution of preferences and preferred traits is unlikely in natural populations, but that the parameter combinations most conducive to it are most likely to occur in lekking species. PMID:25077025

  20. Observational constraints on Hubble parameter in viscous generalized Chaplygin gas

    NASA Astrophysics Data System (ADS)

    Thakur, P.

    2018-04-01

    A cosmological model with viscous generalized Chaplygin gas (VGCG) is considered here to determine observational constraints on its equation-of-state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the Baryonic Acoustic Oscillations peak parameter, the CMB shift parameter, and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and the Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
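
    The two model-selection criteria mentioned can be computed directly from the best-fit chi-square (a generic sketch; k is the number of free parameters and n the number of data points, and lower values are preferred):

```python
import math

def aic(chi2_min, k):
    """Akaike information criterion: chi2_min + 2k."""
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    """Bayesian information criterion: chi2_min + k*ln(n)."""
    return chi2_min + k * math.log(n)
```

    Since ln(n) > 2 for n >= 8, BIC penalizes extra parameters more heavily than AIC for any realistically sized data set.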

  1. Bandpass of microwave signals in a system of orthogonal magnetostatic-wave antennas

    NASA Astrophysics Data System (ADS)

    Zavisliak, I. V.; Zagorodnii, V. V.

    1990-12-01

    Experimental results are presented on a system consisting of MSW receive and transmit antennas integrated with a 36.7-micron-thick epitaxial YIG film. The amplitude-frequency response of this system was investigated for different values of magnetization parameters, and it was shown that the system has the property of selective transmittivity only in a narrow band of angles, phi = 48-53 deg.

  2. Material Selection Guide Derived from Material - Chemical Compatibility Database: Feasibility Based on Database and Predictive Model Evaluation

    DTIC Science & Technology

    1992-09-01

    [Extraction fragment of a chemical-compatibility table: polymers such as polyamideimide (PAI), polyamide 6:6 (PA 6:6), and perfluoroalkoxyethylene (PFA) rated 'S' (satisfactory) against the listed chemicals.] The report also covers procedures for the measurement of vapor sorption followed by desorption, and comparisons with polymer cohesion parameter and polymer coil expansion values.

  3. Test Operations Procedure (TOP) 1-2-612 Nuclear Environment Survivability

    DTIC Science & Technology

    2008-10-24

    [Extraction fragment of an instrumentation/dosimetry table: for HEMP/SREMP testing, the area of gamma-dose-sensitive electronics is mapped using calcium fluoride manganese, CaF2(Mn), thermoluminescent dosimeters (TLDs); gamma dose and current are measured with CaF2(Mn) TLDs and Compton diodes, respectively.]

  4. Stability analysis of multipoint tool equipped with metal cutting ceramics

    NASA Astrophysics Data System (ADS)

    Maksarov, V. V.; Khalimonenko, A. D.; Matrenichev, K. G.

    2017-10-01

    The article highlights the issues of determining the stability of the cutting process for a multipoint cutting tool equipped with cutting ceramics. Based on the conducted research, recommendations are offered on the choice of parameters of replaceable ceramic cutting plates for milling. It is proposed that ceramic plates for milling be selected on the basis of the value of their electrical volume resistivity.

  5. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
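    The null-space compensation described above can be illustrated with a small linear-algebra sketch (illustrative only; the random Jacobian and variable names are invented for this example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Jacobian: 3 observations, 5 adjustable parameters
J = rng.standard_normal((3, 5))

# The SVD splits parameter space into a solution space (first r right
# singular vectors) and a null space (the remaining vectors).
U, s, Vt = np.linalg.svd(J)
r = int(np.sum(s > 1e-10))   # effective rank
V2 = Vt[r:].T                # null-space basis (5 x 2 here)

# A parameter change lying in the null space leaves the observations
# unchanged, so history matching cannot inform (or correct) it.
dp_null = V2 @ rng.standard_normal(V2.shape[1])
print(np.allclose(J @ dp_null, 0.0))   # True: no effect on the observations
```

    Any bias forced into these null-space components during history matching therefore propagates directly into predictions that are sensitive to them.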

  6. Examination of Parameters Affecting the House Prices by Multiple Regression Analysis and its Contributions to Earthquake-Based Urban Transformation

    NASA Astrophysics Data System (ADS)

    Denli, H. H.; Durmus, B.

    2016-12-01

    The purpose of this study is to examine the factors which may affect apartment prices with multiple linear regression analysis models and to visualize the results with value maps. The study focuses on the county of Umraniye in Istanbul, Turkey. In total, 390 apartments around the county are evaluated according to their physical and locational conditions. Identifying the factors affecting apartment prices in the county, which has a population of approximately 600,000, is expected to provide a significant contribution to the apartment market. The physical factors selected are the age, number of rooms, size, number of floors in the building, and the floor on which the apartment is located. The locational factors selected are the distances to the nearest hospital, school, park, and police station. In total, ten physical and locational parameters are examined by regression analysis. After the regression analysis was performed, value maps were composed from the parameters age, price, and price per square meter. The most significant of the composed maps is the price-per-square-meter map. Results show that the location of the apartment has the most influence on its price per square meter. A further application was developed from the composed maps by exploring the use of the price-per-square-meter map in urban transformation practices. The county is very close to the North Anatolian Fault zone and is under the threat of earthquakes. By marking buildings older than 15 years on the price-per-square-meter map, a list of apartments that are both older and have expensive square-meter prices can be gathered, and a new interpretation can be made to determine the buildings to which priority should be given during an urban transformation in the county. With the help of this list, priority could be given to the selected higher-valued old apartments to support the economy of the country in the event of earthquake losses. We may call this urban transformation earthquake-based urban transformation.

  7. Genetic Parameters and the Impact of Off-Types for Theobroma cacao L. in a Breeding Program in Brazil

    PubMed Central

    DuVal, Ashley; Gezan, Salvador A.; Mustiga, Guiliana; Stack, Conrad; Marelli, Jean-Philippe; Chaparro, José; Livingstone, Donald; Royaert, Stefan; Motamayor, Juan C.

    2017-01-01

    Breeding programs of cacao (Theobroma cacao L.) trees share the many challenges of breeding long-living perennial crops, and genetic progress is further constrained by both the limited understanding of the inheritance of complex traits and the prevalence of technical issues, such as mislabeled individuals (off-types). To better understand the genetic architecture of cacao, in this study, 13 years of phenotypic data collected from four progeny trials in Bahia, Brazil were analyzed jointly in a multisite analysis. Three separate analyses (multisite, single site with and without off-types) were performed to estimate genetic parameters from statistical models fitted on nine important agronomic traits (yield, seed index, pod index, % healthy pods, % pods infected with witches' broom, % of pods with other losses, vegetative brooms, diameter, and tree height). Genetic parameters were estimated along with variance components and heritabilities from the multisite analysis, and one trial was fingerprinted with low-density SNP markers to determine the impact of off-types on the estimates. Heritabilities ranged from 0.37 to 0.64 for yield and its components and from 0.03 to 0.16 for disease resistance traits. A weighted index was used to make selections for clonal evaluation, and breeding values were estimated for parental selection and estimation of genetic gain. The impact of off-types on breeding progress in cacao was assessed for the first time. Even when present at <5% of the total population, off-types altered selections by 48% and impacted heritability estimates for all nine of the traits analyzed, including a 41% difference in estimated heritability for yield. These results show that in a mixed model analysis, even a low level of pedigree error can significantly alter estimates of genetic parameters and selections in a breeding program. PMID:29250097

  8. Wind energy potential assessment to estimate performance of selected wind turbine in northern coastal region of Semarang-Indonesia

    NASA Astrophysics Data System (ADS)

    Premono, B. S.; Tjahjana, D. D. D. P.; Hadi, S.

    2017-01-01

    The aims of this paper are to investigate the characteristics of the wind speed and wind energy potential in the northern coastal region of Semarang, Central Java, Indonesia. The wind data were obtained from the Meteorological Station of Semarang as a ten-minute-average time series covering a one-year period at a height of 10 m. A Weibull distribution was used to determine the wind power density and wind energy density of the site. The values of the two parameters, shape parameter k and scale parameter c, were 3.37 and 5.61 m/s, respectively. The annual mean wind speed and the wind speed carrying the maximum energy were 5.32 m/s and 6.45 m/s, respectively. Further, the annual mean power density at the site was found to be 103.87 W/m2; based on the Pacific Northwest Laboratory (PNL) wind power classification, at the height of 10 m this value falls into class 2. A commercial wind turbine was chosen to simulate the wind energy potential of the site. The POLARIS P25-100 was found to be the most suitable for the site, with a capacity factor of 29.79% and an annual energy production of 261 MWh.
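    The reported figures can be roughly checked with the standard Weibull formulas for the speed carrying maximum energy and the power density (a sketch assuming an air density of 1.225 kg/m3; the paper's exact computation may differ):

```python
import math

# Weibull parameters reported for the Semarang site
k, c = 3.37, 5.61          # shape (-) and scale (m/s)
rho = 1.225                # assumed air density, kg/m^3

# Textbook Weibull-derived quantities (not the paper's code)
v_mean = c * math.gamma(1 + 1/k)                  # mean speed implied by the fit
v_maxE = c * (1 + 2/k) ** (1/k)                   # speed carrying maximum energy
p_dens = 0.5 * rho * c**3 * math.gamma(1 + 3/k)   # mean power density, W/m^2

# v_maxE and p_dens come out close to the reported 6.45 m/s and 103.87 W/m^2;
# the data-derived annual mean speed (5.32 m/s) may differ slightly from v_mean.
print(round(v_maxE, 2), round(p_dens, 1))
```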

  9. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data mechanism model assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
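    The non-parametric comparison step can be sketched as follows: simulate data under a candidate missingness model and score its discrepancy from the observed sample with nearest-neighbour distances (a minimal one-dimensional illustration with invented distributions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def knn_score(a, b, k=1):
    """Mean distance from each point of `a` to its k-th nearest
    neighbour in `b` -- a simple discrepancy score between samples."""
    d = np.abs(a[:, None] - b[None, :])
    return np.sort(d, axis=1)[:, k - 1].mean()

observed = rng.normal(0.0, 1.0, 200)
sim_good = rng.normal(0.0, 1.0, 200)   # a plausible sensitivity-parameter value
sim_bad  = rng.normal(2.0, 1.0, 200)   # an implausible one

# The plausible model's simulated data sit closer to the observed data.
print(knn_score(observed, sim_good) < knn_score(observed, sim_bad))   # True
```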

  10. Estimation of αL, velocity, Kd and confidence limits from tracer injection test data

    USGS Publications Warehouse

    Broermann, James; Bassett, R.L.; Weeks, Edwin P.; Borgstrom, Mark

    1997-01-01

    Bromide and boron were used as tracers during an injection experiment conducted at an artificial recharge facility near Stanton, Texas. The Ogallala aquifer at the Stanton site represents a heterogeneous alluvial environment and provides the opportunity to report scale dependent dispersivities at observation distances of 2 to 15 m in this setting. Values of longitudinal dispersivities are compared with other published values. Water samples were collected at selected depths both from piezometers and from fully screened observation wells at radii of 2, 5, 10 and 15 m. An exact analytical solution is used to simulate the concentration breakthrough curves and estimate longitudinal dispersivities and velocity parameters. Greater confidence can be placed on these data because the estimated parameters are error bounded using the bootstrap method. The non-conservative behavior of boron transport in clay rich sections of the aquifer were quantified with distribution coefficients by using bromide as a conservative reference tracer.
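    The bootstrap error-bounding idea can be sketched generically (synthetic data and a simple mean estimator stand in for the fitted transport parameters; nothing here reproduces the actual tracer analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameter estimates, e.g. velocities from repeated fits (m/d)
data = rng.normal(loc=1.2, scale=0.3, size=50)

# Resample with replacement, re-estimate, and take percentile bounds.
boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval

print(lo < data.mean() < hi)   # True: the point estimate lies inside the bounds
```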

  12. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
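    The core loop, evaluating only the combinations a surrogate model predicts to be promising, can be sketched as follows (a toy objective and a 1-nearest-neighbour surrogate stand in for the UWBCS and the neural network; all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta):
    # Stand-in for one expensive simulation run; the "score" is the
    # distance of model output from the calibration targets (invented).
    return float(np.sum((theta - 0.7) ** 2))

pool = rng.random((500, 3))              # all candidate parameter combinations
unseen = np.ones(len(pool), dtype=bool)  # not yet simulated
scores = {}

# Seed with 10 random evaluations.
for i in rng.choice(len(pool), 10, replace=False):
    scores[int(i)] = simulate(pool[i])
    unseen[i] = False

# Active-learning loop: a 1-nearest-neighbour surrogate predicts each
# unseen combination's score; the most promising one is simulated next.
for _ in range(60):
    done = np.array(sorted(scores))
    cand = np.flatnonzero(unseen)
    d = np.linalg.norm(pool[cand][:, None, :] - pool[done][None, :, :], axis=2)
    pred = np.array([scores[j] for j in done])[d.argmin(axis=1)]
    best = int(cand[pred.argmin()])
    scores[best] = simulate(pool[best])
    unseen[best] = False

print(len(scores))   # 70 simulations instead of 500
```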

  13. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  14. Experiment K305: Quantitative analysis of selected bone parameters. Supplement 3A: Trabecular spacing and orientation in the long bones

    NASA Technical Reports Server (NTRS)

    Judy, M. M.

    1981-01-01

    Values of mean trabecular spacing computed from optical diffraction patterns of 1:1 X-ray micrographs of the tibial metaphysis and those obtained by standard image digitization techniques show excellent agreement. Upper limits on values of mean trabecular orientation deduced from the diffraction patterns and the images are also in excellent agreement. Values of the ratio of mean trabecular spatial density in a region 300 micrometers distal to the downwardly directed convexity in the cartilage growth plate to the value adjacent to the plate, determined for flight animals sacrificed at recovery, were significantly smaller than the values for vivarium control animals. No significant differences were found in proximal regions, and no significant differences in mean trabecular orientation were detected. Decreased values of trabecular spatial density and of both osteoblastic activity and trabecular cross-sectional area noted in related studies suggest decreased modeling activity under weightlessness.

  15. A chain-retrieval model for voluntary task switching.

    PubMed

    Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick

    2012-09-01

    To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor among the other objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters for different types of images are calculated automatically. The basic thresholding value is updated and adjusted automatically for different MR images and is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of the method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  17. Optimization of Acid Black 172 decolorization by electrocoagulation using response surface methodology

    PubMed Central

    2012-01-01

    This paper utilizes a statistical approach, the response surface optimization methodology, to determine the optimum conditions for Acid Black 172 dye removal efficiency from aqueous solution by electrocoagulation. The experimental parameters investigated were initial pH: 4–10; initial dye concentration: 0–600 mg/L; applied current: 0.5–3.5 A; and reaction time: 3–15 min. These parameters were varied at five levels according to the central composite design to evaluate their effects on decolorization through analysis of variance. The high R2 value of 94.48% shows a high correlation between the experimental and predicted values and indicates that the second-order regression model is acceptable for Acid Black 172 dye removal efficiency. It was also found that some interaction and square terms influenced the electrocoagulation performance, in addition to the selected parameters. An optimum dye removal efficiency of 90.4% was observed experimentally at an initial pH of 7, initial dye concentration of 300 mg/L, applied current of 2 A, and reaction time of 9.16 min, which is close to the model-predicted result (90%). PMID:23369574

  18. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.

  19. Geostatistical Characteristic of Space -Time Variation in Underground Water Selected Quality Parameters in Klodzko Water Intake Area (SW Part of Poland)

    NASA Astrophysics Data System (ADS)

    Namysłowska-Wilczyńska, Barbara

    2016-04-01

    This paper presents selected results of research connected with the development of a 3D geostatistical hydrogeochemical model of the Klodzko Drainage Basin, dedicated to the spatial and temporal variation in selected quality parameters of underground water in the Klodzko water intake area (SW part of Poland). The research covers the period 2011÷2012. Spatial analyses of the variation in various quality parameters, i.e., contents of ammonium ion [gNH4+/m3], nitrate ion NO3- [gNO3-/m3], phosphate ion PO4-3 [gPO4-3/m3], and total organic carbon (TOC) [gC/m3], as well as pH, redox potential, and temperature [°C], were carried out on the basis of chemical determinations of the quality parameters of underground water samples taken from wells in the water intake area. Spatial and temporal variation in the quality parameters was analyzed on the basis of archival data (period 1977÷1999) for 22 (pump and siphon) wells with depths ranging from 9.5 to 38.0 m b.g.l., and of later data (November 2011) obtained from tests of water taken from 14 existing wells. The wells were built in the years 1954÷1998. The water abstraction depth (the difference between the terrain elevation and the dynamic water table level) ranges from 276 to 286 m a.s.l., with an average of 282.05 m a.s.l. The dynamic water table level lies between 6.22 m and 16.44 m b.g.l., with a mean value of 9.64 m b.g.l. The latest data (January 2012) were acquired from 3 new piezometers, 9÷10 m deep, installed at other locations in the relevant area. Thematic databases were created containing the original data on coordinates X, Y (latitude, longitude) and Z (terrain elevation and time in years) and on the regionalized variables, i.e., the underground water quality parameters in the Klodzko water intake area, determined for different analytical configurations (22 wells; 14 wells; 14 wells + 3 piezometers). Both the archival data (acquired in the years 1977÷1999) and the latest data (collected in 2011÷2012) were analyzed. These data were subjected to spatial analyses using statistical and geostatistical methods. The basic statistics of the investigated quality parameters, including histograms of their distributions, scatter diagrams between these parameters, and correlation coefficients r, are presented in this article. The directional semivariogram function and the ordinary (block) kriging procedure were used to build the 3D geostatistical model. The geostatistical parameters of the theoretical models of directional semivariograms of the studied water quality parameters, calculated along the time interval and along the well depth (taking into account the terrain elevation), were used in the ordinary (block) kriging estimation. The obtained estimation results, i.e., block diagrams, allowed the levels of increased values Z* of the studied underground water quality parameters to be determined. The analysis of the variability in the selected quality parameters of underground water in the Klodzko water intake area was enriched by referring to the results of geostatistical studies carried out for the quality parameters of underground water and of treated water in the Klodzko water supply system (iron Fe, manganese Mn, and ammonium ion NH4+ contents), discussed in earlier works. Spatial and temporal variation in the latter parameters was analysed on the basis of data from 2007÷2011 and 2008÷2011. Generally, the behaviour of the underground water quality parameters has been found to vary in space and time. Thanks to the spatial analyses of the variation in the quality parameters in the Klodzko underground water intake area, some regularities (trends) in the variation in water quality have been identified.

  20. Depth-Duration Frequency of Precipitation for Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.; Rea, Alan; Asquith, William H.

    1999-01-01

    A regional frequency analysis was conducted to estimate the depth-duration frequency of precipitation for 12 durations in Oklahoma (15, 30, and 60 minutes; 1, 2, 3, 6, 12, and 24 hours; and 1, 3, and 7 days). Seven selected frequencies, expressed as recurrence intervals, were investigated (2, 5, 10, 25, 50, 100, and 500 years). L-moment statistics were used to summarize depth-duration data and to determine the appropriate statistical distributions. Three different rain-gage networks provided the data (15-minute, 1-hour, and 1-day). The 60-minute and 1-hour durations, and the 24-hour and 1-day durations, were analyzed separately. Data were used from rain-gage stations with at least 10 years of record located within Oklahoma or up to about 50 kilometers into bordering states. Precipitation annual maxima (depths) were determined from the data for 110 15-minute, 141 hourly, and 413 daily stations. The L-moment statistics of the depths for all durations were calculated for each station using unbiased L-moment estimators for the mean, L-scale, L-coefficient of variation, L-skew, and L-kurtosis. The relation between L-skew and L-kurtosis (the L-moment ratio diagram) and goodness-of-fit measures were used to select the frequency distributions. The three-parameter generalized logistic distribution was selected to model the frequencies of the 15-, 30-, and 60-minute annual maxima, and the three-parameter generalized extreme-value distribution was selected to model the frequencies of the 1-hour to 7-day annual maxima. The mean for each station and duration was corrected for the bias associated with fixed-interval recording of precipitation amounts. The L-scale and spatially averaged L-skew statistics were used to compute the location, scale, and shape parameters of the selected distribution for each station and duration. The three parameters were used to calculate the depth-duration-frequency relations for each station. The precipitation depths for selected frequencies were contoured from weighted depth surfaces to produce maps from which the precipitation depth-duration-frequency curve for selected storm durations can be determined for any site in Oklahoma.
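    The unbiased sample L-moment estimators mentioned above can be sketched in their standard probability-weighted-moment form (a generic implementation, not the study's code):

```python
import numpy as np

def sample_lmoments(x):
    """Unbiased sample L-moments via probability-weighted moments:
    returns (l1, l2, L-skew, L-kurtosis). Standard estimators; needs
    at least 4 data points."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(n)  # 0 .. n-1
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                                # mean
    l2 = 2 * b1 - b0                       # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2        # ratios: L-skew, L-kurtosis

l1, l2, t3, t4 = sample_lmoments([1, 2, 3, 4])
print(l1, round(l2, 4), round(t3, 4))   # 2.5 0.8333 0.0
```

    For a symmetric, evenly spaced sample such as this, the L-skew is zero; the L-coefficient of variation used in the report is simply l2/l1.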

  1. A Study on Possibility of Clinical Application for Color Measurements of Shade Guides Using an Intraoral Digital Scanner.

    PubMed

    Yoon, Hyung-In; Bae, Ji-Won; Park, Ji-Man; Chun, Youn-Sic; Kim, Mi-Ae; Kim, Minji

    2016-11-07

    To assess whether color measurements made with an intraoral scanner correlate with those of a digital colorimeter, and to evaluate the possibility of applying a digital scanner for shade selection. The L*a*b* values of five shade tabs (A1, A2, A3, A3.5, and A4) were obtained with an intraoral scanner (TRIOS Pod) and a colorimeter (ShadeEye). Both devices were calibrated according to the manufacturer's instructions before measurements. Color measurement values were compared with a paired t-test, and a Pearson's correlation analysis was performed to evaluate the relationship between the two methods. The L*a*b* values of the colorimeter were significantly different from those of the digital scanner (p < 0.001). The L* and b* values of the two methods were strongly correlated with each other (both p < 0.05). The device repeatability of both methods was excellent (p < 0.05). Within the limitations of this study, color measurements with the digital intraoral scanner and computer-assisted image analysis were in accordance with those of the colorimeter with respect to L* and b* values; however, all the coordinates of the shade tabs were significantly different between the two methods. The digital intraoral scanner may not be suitable as the primary method of color selection in clinical practice, considering the significant differences in color parameters relative to the colorimeter. The scanner's capability in shade selection should be further evaluated. © 2016 by the American College of Prosthodontists.
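    Device agreement in L*a*b* space is commonly summarized by the CIE76 colour difference, the Euclidean distance between two readings; a minimal sketch with invented values (the study's actual measurements are not reproduced here):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two L*a*b* readings."""
    return math.dist(lab1, lab2)

# Hypothetical readings of one shade tab from the two devices
scanner = (78.2, 1.5, 18.9)
colorimeter = (76.0, 2.1, 21.3)
print(round(delta_e_ab(scanner, colorimeter), 2))   # 3.31
```

    Differences of this magnitude are often considered near the limit of clinical acceptability in shade matching, which is why coordinate-level disagreement between devices matters.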

  2. The electrical conductivity of in vivo human uterine fibroids.

    PubMed

    DeLonzor, Russ; Spero, Richard K; Williams, Joseph J

    2011-01-01

    The purpose of this study was to determine a value of electrical conductivity that can be used for numerical modelling of in vivo radiofrequency ablation (RFA) treatments of human uterine fibroids. No experimental electrical conductivity data have previously been reported for human uterine fibroids. In this study, electrical data (voltage) from selected in vivo clinical procedures on human uterine fibroids were used to numerically model the treatments. Measured and calculated power dissipation profiles were compared to determine the uterine fibroid electrical conductivity. Numerical simulations were conducted utilising a wide range of values for tissue thermal conductivity, heat capacity, and blood perfusion coefficient. The simulations demonstrated that power dissipation was insensitive to the exact values of these parameters for the simulated geometry, treatment duration, and power level. Consequently, it was possible to determine the tissue electrical conductivity without precise knowledge of the values of these parameters. The results of this study showed that an electrical conductivity for uterine fibroids of 0.305 S/m at 37°C, with a temperature coefficient of 0.2%/°C, can be used for modelling radiofrequency ablation of human uterine fibroids at a frequency of 460 kHz for temperatures from 37°C to 100°C.
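    Assuming the usual linear temperature dependence, the reported baseline conductivity and temperature coefficient combine as follows (a sketch; the function name is invented, and the linear form is an assumption about how the coefficient is applied):

```python
def fibroid_conductivity(temp_c):
    """Electrical conductivity (S/m) of uterine fibroid tissue at 460 kHz,
    from the reported 37 degC baseline and linear temperature coefficient."""
    sigma_37 = 0.305   # S/m at 37 degC (reported value)
    alpha = 0.002      # 0.2% per degC (reported temperature coefficient)
    return sigma_37 * (1 + alpha * (temp_c - 37.0))

print(fibroid_conductivity(37.0))            # 0.305
print(round(fibroid_conductivity(100.0), 4)) # 0.3434
```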

  3. Determination of pKa values of some antipsychotic drugs by HPLC--correlations with the Kamlet and Taft solvatochromic parameters and HPLC analysis in dosage forms.

    PubMed

    Sanli, Senem; Akmese, Bediha; Altun, Yuksel

    2013-01-01

    In this study, ionization constant (pKa) values were determined by using the dependence of the retention factor on the pH of the mobile phase for four ionizable drugs, namely, risperidone (RI), clozapine (CL), olanzapine (OL), and sertindole (SE). The effect of the mobile phase composition on the pKa was studied by measuring the pKa in different acetonitrile-water mixtures in an HPLC-UV method. To explain the variation of the pKa values obtained over the whole composition range studied, the quasi-lattice quasi-chemical theory of preferential solvation was applied. The pKa values of the drugs were correlated with the Kamlet and Taft solvatochromic parameters. Kamlet and Taft's general equation was reduced to two terms by using combined factor analysis and target factor analysis in these mixtures: the independent term and the hydrogen-bond donating ability α. The HPLC-UV method was successfully applied for the determination of RI, OL, and SE in pharmaceutical dosage forms. CL was chosen as an internal standard. Additionally, the repeatability, reproducibility, selectivity, precision, and accuracy of the method in all media were investigated and calculated.

  4. Optimized Short-Term Load Forecasting of Anomalous Load Based on Feed-Forward Backpropagation

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Abdullah, A. G.; Rohmah, K. A.

    2017-03-01

    This paper presents Short-Term Load Forecasting (STLF) using an artificial neural network, specifically a feed-forward backpropagation algorithm that is optimized to reduce the resulting error value. The forecasting target is the electrical load on holidays, whose pattern is not identical to the weekday pattern; in other words, the holiday load pattern is anomalous. Under these conditions the forecasting accuracy decreases, so a method capable of reducing the error in anomalous load forecasting is needed. The learning process of the algorithm is supervised, so several parameters are set before the computation is performed. The momentum constant is set at 0.8, which serves as a reference value because it shows the greatest tendency to converge. The learning rate is selected to two decimal digits. In addition, several variations of the number of hidden layers and input components are tested. The test results lead to the conclusion that the number of hidden layers affects the forecasting accuracy, and that the test duration is determined by the number of iterations needed for the input data to reach the maximum of a parameter value.
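The momentum update underlying backpropagation with a momentum constant of 0.8 can be illustrated on a toy one-parameter objective; the learning rate, objective, and step count below are illustrative, not the paper's settings:

```python
# Gradient descent with momentum: the update rule that backpropagation applies
# to each network weight, here with the momentum constant 0.8 used in the paper.
def train_with_momentum(grad, w0, lr=0.01, momentum=0.8, steps=200):
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v
    return w

# Toy objective f(w) = (w - 3)^2 with gradient 2*(w - 3); minimum at w = 3.
w_star = train_with_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(round(w_star, 3))  # → 3.0
```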

  5. Canola Cake as a Potential Substrate for Proteolytic Enzymes Production by a Selected Strain of Aspergillus oryzae: Selection of Process Conditions and Product Characterization

    PubMed Central

    Freitas, Adriana C.; Castro, Ruann J. S.; Fontenele, Maria A.; Egito, Antonio S.; Farinas, Cristiane S.; Pinto, Gustavo A. S.

    2013-01-01

    Oil cakes have excellent nutritional value and offer considerable potential for use in biotechnological processes that employ solid-state fermentation (SSF) for the production of high value products. This work evaluates the feasibility of using canola cake as a substrate for protease production by a selected strain of Aspergillus oryzae cultivated under SSF. The influences of the following process parameters were considered: initial substrate moisture content, incubation temperature, inoculum size, and pH of the buffer used for protease extraction and activity analysis. Maximum protease activity was obtained after cultivating Aspergillus oryzae CCBP 001 at 20°C, using an inoculum size of 10⁷ spores/g in canola cake medium moistened with 40 mL of water to 100 g of cake. Cultivation and extraction under selected conditions increased protease activity 5.8-fold, compared to the initial conditions. Zymogram analysis of the enzymatic extract showed that the protease molecular weights varied between 31 and 200 kDa. The concentrated protease extract induced clotting of casein in 5 min. The results demonstrate the potential application of canola cake for protease production under SSF and contribute to the technological advances needed to increase the efficiency of processes designed to add value to agroindustrial wastes. PMID:24455400

  6. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of the measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least squares regression estimates against a reference.

  7. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
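The multiple-random-starts strategy evaluated above can be sketched with a minimal EM implementation for a two-component univariate Gaussian mixture, keeping the run with the highest log-likelihood. The data set, number of starts, and iteration budget below are illustrative, not those of the study:

```python
import math, random

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(data, init_means, iters=100):
    """One EM run for a two-component 1-D Gaussian mixture, started from the
    given component means (weights and variances start at neutral values)."""
    w, mu, var = [0.5, 0.5], list(init_means), [1.0, 1.0]
    for _ in range(iters):
        resp = []
        for x in data:                      # E-step: responsibilities
            p = [w[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[k] / s for k in range(2)] if s > 0 else [0.5, 0.5])
        for k in range(2):                  # M-step: update weight, mean, variance
            nk = max(sum(r[k] for r in resp), 1e-12)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-3)
    ll = sum(math.log(max(sum(w[k] * gauss_pdf(x, mu[k], var[k])
                              for k in range(2)), 1e-300)) for x in data)
    return ll, sorted(mu)

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(5, 1) for _ in range(200)])
# Several random starting values; keep the solution with the best likelihood,
# since each EM run may settle in a different locally optimal solution.
starts = [(random.uniform(-2, 7), random.uniform(-2, 7)) for _ in range(5)]
best_ll, best_mu = max(em_gmm_1d(data, s) for s in starts)
print([round(m, 1) for m in best_mu])  # component means near 0 and 5
```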

  8. Dynamic fractal signature dissimilarity analysis for therapeutic response assessment using dynamic contrast-enhanced MRI

    PubMed Central

    Wang, Chunhao; Subashi, Ergys; Yin, Fang-Fang; Chang, Zheng

    2016-01-01

    Purpose: To develop a dynamic fractal signature dissimilarity (FSD) method as a novel image texture analysis technique for the quantification of tumor heterogeneity information for better therapeutic response assessment with dynamic contrast-enhanced (DCE)-MRI. Methods: A small animal antiangiogenesis drug treatment experiment was used to demonstrate the proposed method. Sixteen LS-174T implanted mice were randomly assigned into treatment and control groups (n = 8/group). All mice received bevacizumab (treatment) or saline (control) three times in two weeks, and one pretreatment and two post-treatment DCE-MRI scans were performed. In the proposed dynamic FSD method, a dynamic FSD curve was generated to characterize the heterogeneity evolution during the contrast agent uptake, and the area under the FSD curve (AUC_FSD) and the maximum enhancement (ME_FSD) were selected as representative parameters. For comparison, the pharmacokinetic parameter Ktrans map and the area under the MR intensity enhancement curve (AUC_MR) map were calculated. Besides the tumor’s mean value and coefficient of variation, the kurtosis, skewness, and classic Rényi dimensions d1 and d2 of the Ktrans and AUC_MR maps were evaluated for heterogeneity assessment. For post-treatment scans, the Mann–Whitney U-test was used to assess the differences of the investigated parameters between treatment/control groups. A support vector machine (SVM) was applied to classify treatment/control groups using the investigated parameters at each post-treatment scan day. Results: The tumor mean Ktrans and its heterogeneity measurements d1 and d2 showed significant differences between treatment/control groups in the second post-treatment scan. In contrast, the relative values (in reference to the pretreatment value) of AUC_FSD and ME_FSD in both post-treatment scans showed significant differences between treatment/control groups. When using AUC_FSD and ME_FSD as SVM input for treatment/control classification, the achieved accuracies were 93.8% and 93.8% at the first and second post-treatment scan days, respectively. In comparison, the classification accuracies using d1 and d2 of the Ktrans map were 87.5% and 100% at the first and second post-treatment scan days, respectively. Conclusions: As quantitative metrics of tumor contrast agent uptake heterogeneity, the selected parameters from the dynamic FSD method accurately captured the therapeutic response in the experiment. The potential application of the proposed method is promising, and its addition to the existing DCE-MRI techniques could improve DCE-MRI performance in early assessment of treatment response. PMID:26936718

  9. Voluntary strategy suppresses the positive impact of preferential selection in prisoner’s dilemma

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Lin, Pei-jie; Chen, Ya-shan

    2014-11-01

    The impact of aspiration is ubiquitous in social and biological disciplines. In this work, we explore the impact of such a trait on the voluntary prisoner’s dilemma game via a selection parameter w. w=0 recovers the traditional version with random selection; for positive w, the opponent with high payoff is selected, while negative w means that the partner with low payoff is chosen. We find that for positive w, cooperation is greatly promoted for small b, whereas cooperation is inhibited for large b. For negative w, cooperation is fully restrained, irrespective of the value of b. It is found that the positive impact of preferential selection is suppressed by the voluntary strategy in the prisoner’s dilemma. These observations are supported by the spatial patterns. Our work may shed light on the emergence and persistence of cooperation with voluntary participation in social dilemmas.
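One common way to realize payoff-biased preferential partner selection controlled by a parameter w is exponential payoff weighting; the exact functional form used in the paper may differ, so the following is only an illustrative sketch:

```python
import math, random

def select_partner(payoffs, w, rng=random):
    """Payoff-biased partner selection. w > 0 favors high-payoff neighbours,
    w < 0 favors low-payoff ones, and w = 0 reduces to uniform random choice.
    The exponential weighting is an illustrative choice, not necessarily the
    paper's exact rule."""
    weights = [math.exp(w * p) for p in payoffs]
    total = sum(weights)
    r = rng.random() * total            # roulette-wheel draw over the weights
    for i, wt in enumerate(weights):
        r -= wt
        if r <= 0:
            return i
    return len(payoffs) - 1

random.seed(1)
payoffs = [0.5, 1.0, 3.0]
picks = [select_partner(payoffs, w=2.0) for _ in range(1000)]
print(picks.count(2) / 1000)  # the high-payoff neighbour dominates for w > 0
```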

  10. Evaluation of selection index: application to the choice of an indirect multitrait selection index for soybean breeding.

    PubMed

    Bouchez, A; Goffinet, B

    1990-02-01

    Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information.

  11. Quality index of radiological devices: results of one year of use.

    PubMed

    Tofani, Alessandro; Imbordino, Patrizia; Lecci, Antonio; Bonannini, Claudia; Del Corona, Alberto; Pizzi, Stefano

    2003-01-01

    The physical quality index (QI) of radiological devices summarises in a single numerical value between 0 and 1 the results of constancy tests. The aim of this paper is to illustrate the results of the use of such an index on all public radiological devices in the Livorno province over one year. The quality index was calculated for 82 radiological devices of a wide range of types by implementing its algorithm in a spreadsheet-based software for the automatic handling of quality control data. The distribution of quality index values was computed together with the associated statistical quantities. This distribution is strongly asymmetrical, with a sharp peak near the highest QI values. The mean quality index values for the different types of device show some inhomogeneity: in particular, mammography and panoramic dental radiography devices show far lower quality than other devices. In addition, our analysis has identified the parameters that most frequently fail the quality tests for each type of device. Finally, we sought some correlation between quality and age of the device, but this was only weakly significant. The quality index proved to be a useful tool providing an overview of the physical conditions of radiological devices. By selecting adequate QI threshold values, it also helps to decide whether a given device should be upgraded or replaced. The identification of critical parameters for each type of device may be used to improve the definition of the QI by attributing greater weights to critical parameters, so as to better address the maintenance of radiological devices.

  12. Sphericity index and E-point-to-septal-separation (EPSS) to diagnose dilated cardiomyopathy in Doberman Pinschers.

    PubMed

    Holler, P J; Wess, G

    2014-01-01

    E-point-to-septal-separation (EPSS) and the sphericity index (SI) are echocardiographic parameters that are recommended in the ESVC-DCM guidelines. However, SI cutoff values to diagnose dilated cardiomyopathy (DCM) have never been evaluated. To establish reference ranges, calculate cutoff values, and assess the clinical value of SI and EPSS to diagnose DCM in Doberman Pinschers. One hundred seventy-nine client-owned Doberman Pinschers. Three groups were formed in this prospective longitudinal study according to established Holter and echocardiographic criteria using the Simpson method of disks (SMOD): a control group (97 dogs), DCM with echocardiographic changes (75 dogs) and a "last normal" group (n = 7), which included dogs that developed DCM within 1.5 years but were still normal at this time point. In a substudy, dogs with early DCM based upon SMOD values above the reference range but still normal M-mode measurements were selected, to evaluate whether EPSS or SI was abnormal using the established cutoff values. ROC-curve analysis determined <1.65 for the SI (sensitivity 86.8%; specificity 87.6%) and >6.5 mm for EPSS (sensitivity 100%; specificity 99.0%) as optimal cutoff values to diagnose DCM. Both parameters were significantly different between the control group and the DCM group (P < 0.001), but were not abnormal in the "last normal" group. In the substudy, EPSS was abnormal in 13/13 dogs and SI in 2/13 dogs. E-point-to-septal-separation is a valuable additional parameter for the diagnosis of DCM, which can enhance the diagnostic capabilities of M-mode and which performs similarly to SMOD. Copyright © 2013 by the American College of Veterinary Internal Medicine.
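An ROC-derived cutoff such as the EPSS threshold above is often chosen as the point maximizing Youden's J (sensitivity + specificity - 1); the record does not state the exact criterion used, so the following sketch with made-up EPSS-like data is only illustrative:

```python
def best_cutoff(values, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    with 'diseased' defined as value > cutoff (as for EPSS, where larger
    values indicate DCM). The criterion and data are illustrative, not the
    study's actual method or measurements."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_j, best_c = -1.0, None
    for c in sorted(set(values)):
        sens = sum(v > c for v in pos) / len(pos)   # true-positive rate
        spec = sum(v <= c for v in neg) / len(neg)  # true-negative rate
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy EPSS-like data (mm): controls cluster below ~6.5, DCM dogs above.
epss   = [4.0, 5.0, 5.5, 6.0, 6.5, 7.0, 8.0, 9.0, 10.0, 12.0]
labels = [0,   0,   0,   0,   0,   1,   1,   1,   1,    1]
print(best_cutoff(epss, labels))  # → (6.5, 1.0): perfect separation here
```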

  13. Quiet High Speed Fan (QHSF) Flutter Calculations Using the TURBO Code

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Min, James B.; Mehmed, Oral

    2006-01-01

    A scale model of the NASA/Honeywell Engines Quiet High Speed Fan (QHSF) encountered flutter during wind tunnel testing. This report documents aeroelastic calculations performed for the QHSF scale model using the blade vibration capability of the TURBO code. Calculations at design speed were used to quantify the effect of numerical parameters on the aerodynamic damping predictions. This numerical study allowed the selection of appropriate values for these parameters, and also allowed an assessment of the variability in the calculated aerodynamic damping. Calculations were also performed at 90 percent of design speed. The predicted trends in aerodynamic damping corresponded to those observed during testing.

  14. Kinetics of Mixed Microbial Assemblages Enhance Removal of Highly Dilute Organic Substrates

    PubMed Central

    Lewis, David L.; Hodson, Robert E.; Hwang, Huey-Min

    1988-01-01

    Our experiments with selected organic substrates reveal that the rate-limiting process governing microbial degradation rates changes with substrate concentration, S, in such a manner that substrate removal is enhanced at lower values of S. This enhancement is the result of the dominance of very efficient systems for substrate removal at low substrate concentrations. The variability of dominant kinetic parameters over a range of S causes the kinetics of complex assemblages to be profoundly dissimilar to those of systems possessing a single set of kinetic parameters; these findings necessitate taking a new approach to predicting substrate removal rates over wide ranges of S. PMID:16347715

  15. Effect of the tubular-fan drum shapes on the performance of cleaning head module

    NASA Astrophysics Data System (ADS)

    Hong, C. K.; Cho, M. Y.; Kim, Y. J.

    2013-12-01

    The geometrical effects of a tubular-fan drum on the performance improvement of the cleaning head module of a vacuum cleaner were investigated. In this study, the number of blades and the width of the blade were selected as the design parameters. Static pressure, eccentric vortex, turbulence kinetic energy (TKE) and suction efficiency were analysed and tabulated. A three-dimensional computational fluid dynamics method was used with an SST (Shear Stress Transport) turbulence model to simulate the flow field at the suction of the cleaning head module using the commercial code ANSYS-CFX. Suction pressure distributions were graphically depicted for different values of the design parameters.

  16. Drug efficiency: a new concept to guide lead optimization programs towards the selection of better clinical candidates.

    PubMed

    Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo

    2010-07-01

    As a result of their wide acceptance and conceptual simplicity, drug-like concepts are having a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or when evaluating the inherent risk of a portfolio of drug candidates, they prove much less useful in weighing up compounds for the selection of the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.

  17. Wavelength selection in the crown splash

    NASA Astrophysics Data System (ADS)

    Zhang, Li V.; Brunet, Philippe; Eggers, Jens; Deegan, Robert D.

    2010-12-01

    The impact of a drop onto a liquid layer produces a splash that results from the ejection and dissolution of one or more liquid sheets, which expand radially from the point of impact. In the crown splash parameter regime, secondary droplets appear at fairly regularly spaced intervals along the rim of the sheet. By performing many experiments for the same parameter values, we measure the spectrum of small-amplitude perturbations growing on the rim. We show that for a range of parameters in the crown splash regime, the generation of secondary droplets results from a Rayleigh-Plateau instability of the rim, whose shape is almost cylindrical. In our theoretical calculation, we include the time dependence of the base state. The remaining irregularity of the pattern is explained by the finite width of the Rayleigh-Plateau dispersion relation. Alternative mechanisms, such as the Rayleigh-Taylor instability, can be excluded for the experimental parameters of our study.

  18. Harmony search optimization in dimensional accuracy of die sinking EDM process using SS316L stainless steel

    NASA Astrophysics Data System (ADS)

    Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.

    2017-09-01

    Electrical discharge machining (EDM) is one of the most widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, selecting the optimal machining parameters to minimize the machining performance measure remains a challenging task for researchers. This paper presents an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed based on a regression approach with four input parameters, namely pulse on time, peak current, servo voltage and servo speed, and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with regression analysis, and HS was found to give the better result, yielding the lowest DA value compared with the regression approach.
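A generic harmony search minimizing a toy objective can be sketched as follows; the parameter values (harmony memory size, HMCR, PAR, bandwidth) and the sphere function standing in for the regression model of DA are illustrative, not the paper's configuration:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=5000, rng=None):
    """Minimal harmony search minimizing f over box bounds.
    hms: harmony memory size, hmcr: harmony memory considering rate,
    par: pitch adjusting rate, bw: pitch bandwidth. A generic sketch,
    not the paper's exact configuration."""
    rng = rng or random.Random(0)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # draw this coordinate from memory
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment around it
                    x += rng.uniform(-bw, bw)
            else:                              # otherwise a fresh random value
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                  # replace the worst harmony
            mem[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return mem[best], scores[best]

# Toy objective standing in for the DA regression model: the sphere function.
sol, val = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(val)  # typically very close to the optimum 0 on this toy problem
```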

  19. Hematology and biochemistry reference intervals for Ontario commercial nursing pigs close to the time of weaning

    PubMed Central

    Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.

    2017-01-01

    The evaluation of pig hematology and biochemistry parameters is rarely performed, largely due to the costs associated with laboratory testing and labor and the limited availability of the reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also makes it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729
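A nonparametric 95% reference interval is commonly taken as the central 95% of sampled values, i.e. the 2.5th to 97.5th percentiles. A minimal sketch with made-up values follows (guideline methods additionally attach confidence intervals to each limit, which is omitted here):

```python
def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the central `coverage` fraction of
    sampled values, via linearly interpolated percentiles. Illustrative
    sketch; not the exact procedure used in the study."""
    xs = sorted(values)
    def percentile(p):
        k = p * (len(xs) - 1)          # fractional rank, 0-based
        lo = int(k)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)
    tail = (1 - coverage) / 2
    return percentile(tail), percentile(1 - tail)

# Toy hemoglobin-like values (g/L) for 11 piglets (made-up numbers).
hgb = [95, 100, 102, 105, 108, 110, 112, 115, 118, 120, 130]
print(reference_interval(hgb))  # ≈ (96.25, 127.5)
```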

  20. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction

    PubMed Central

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
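The paper's key modification, decaying the abandonment fraction Pa from a maximum at initialization to a minimum in the final iteration, can be sketched as a linear schedule; pairing it with Mantegna's algorithm for Lévy-flight step lengths is standard in cuckoo search. The pa_max/pa_min endpoints below are illustrative, not the paper's values:

```python
import math, random

def pa_schedule(t, t_max, pa_max=0.5, pa_min=0.05):
    """Linear decay of the abandonment fraction Pa over iterations
    t = 0 .. t_max-1: large Pa early (more exploration), small Pa late
    (more exploitation). Endpoint values are illustrative."""
    return pa_max - (pa_max - pa_min) * t / (t_max - 1)

def levy_step(rng, beta=1.5):
    """Levy-distributed step length via Mantegna's algorithm, as commonly
    used for the Levy flights in cuckoo search."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

rng = random.Random(42)
t_max = 100
print(pa_schedule(0, t_max), pa_schedule(t_max - 1, t_max))  # decays 0.5 -> ~0.05
step = levy_step(rng)  # occasional long jumps interleaved with short ones
```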

  1. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction.

    PubMed

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.

  2. [Application of diffusion tensor imaging in judging infarction time of acute ischemic cerebral infarction].

    PubMed

    Dai, Zhenyu; Chen, Fei; Yao, Lizheng; Dong, Congsong; Liu, Yang; Shi, Haicun; Zhang, Zhiping; Yang, Naizhong; Zhang, Mingsheng; Dai, Yinggui

    2015-08-18

    To evaluate the clinical application value of diffusion tensor imaging (DTI) and diffusion tensor tractography (DTT) in judging the infarction time phase of acute ischemic cerebral infarction. DTI images of 52 patients with unilateral acute ischemic cerebral infarction (hyper-acute, acute and sub-acute), diagnosed clinically and by magnetic resonance imaging at the Affiliated Yancheng Hospital of Southeast University Medical College, were retrospectively analyzed. Regions of interest (ROIs) were set on the infarction lesion, the brain tissue close to the infarction lesion, and the corresponding contralateral normal brain tissue (contra) on DTI parameter maps of fractional anisotropy (FA), volume ratio anisotropy (VRA), average diffusion coefficient (DCavg) and exponential attenuation (Exat); the parameter values of the ROIs were recorded and the relative parameter values of infarction lesion to contra were calculated. Meanwhile, DTT images were reconstructed based on the seed points (infarction lesion and contra). The study compared each parameter value of the infarction lesions, the brain tissue close to the infarction lesions and the corresponding contra, and also analysed the differences of the relative parameter values among the infarction time phases. The DTT images of acute ischemic cerebral infarction in each time phase could show the appearance of damaged fasciculi. The DCavg value of cerebral infarction lesions was lower, and the Exat value higher, than contra in each infarction time phase (P<0.05). The FA and VRA values of cerebral infarction lesions were reduced relative to contra only in acute and sub-acute infarction (P<0.05). The FA, VRA and Exat values of brain tissue close to infarction lesions were increased, and the DCavg value was decreased, relative to contra in hyper-acute infarction (P<0.05). There were no statistical differences in the FA, VRA, DCavg and Exat values of brain tissue close to infarction lesions in acute and sub-acute infarction.
The relative FA and VRA values of infarction lesion to contra gradually decreased from hyper-acute to sub-acute cerebral infarction (P<0.05), although there was no difference in the relative VRA value between acute and sub-acute cerebral infarction. The relative DCavg value of infarction lesion to contra in hyper-acute infarction was significantly different from that in acute and sub-acute infarction (P<0.05); again, there was no difference between acute and sub-acute infarction. ROC curve analysis showed that the best diagnostic cut-off values of relative FA, VRA and DCavg of infarction lesions to contra were 0.852, 0.886 and 0.541, respectively, between hyper-acute and acute cerebral infarction, and the best diagnostic cut-off value of relative FA was 0.595 between acute and sub-acute cerebral infarction. The FA, VRA, DCavg and Exat values show specific patterns of change across the time phases of acute ischemic cerebral infarction, and can be used in combination to judge the infarction time phase when the onset time is unclear, thus helping to select reasonable treatment protocols.

  3. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  4. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.

  5. Normative 3D opto-electronic stereo-photogrammetric posture and spine morphology data in young healthy adult population

    PubMed Central

    2017-01-01

    Design: Observational cross-sectional study. The current study aims to yield normative data, i.e., the physiological standard, for 30 selected quantitative 3D parameters that accurately capture and describe a full-skeleton, upright-standing attitude. Specific and exclusive consideration was given to three distinct categories: postural, spine morphology and pelvic parameters. To capture these 3D parameters, the authors selected a non-ionising 3D opto-electronic stereo-photogrammetric approach. This required the identification and measurement of 27 body landmarks, each specifically tagged with a skin marker. As subjects for the measurement of these parameters, a cohort of 124 asymptomatic young adult volunteers was recruited. All parameters were identified and measured within this group. Postural and spine morphology data were compared between genders. In this regard, only five statistically significant differences were found: pelvis width, pelvis torsion, the “lumbar” lordosis angle value, the lumbar curve length, and the T12-L5 anatomically-bound lumbar angle value. The mean “thoracic” kyphosis angle was the same in both sexes and, even though it was derived from skin markers placed on spinous processes, it was in excellent agreement with the X-ray based literature. As regards lordosis, a direct comparison was more difficult because methods proposed in the literature differ as to the number and position of the vertebrae under consideration, and their related angle values. However, when the L1 superior–L5 inferior end plate Cobb angle was considered, these results aligned strongly with the existing literature. Asymmetry was a standard postural-spinal feature for both sexes. Each subject presented some degree of leg length discrepancy (LLD), with a mean of 9.37 mm. This was associated with four factors: unbalanced posture and/or underfoot loads, spinal curvature in the frontal plane, and pelvis torsion.
This led to an additional study of the influence of LLD equalisation on upright posture, relying on a sub-sample of 100 subjects (51 males, 49 females). As a result of the equalisation, about 82% of this sub-sample showed improvement in standing posture, mainly in the frontal plane, while in the sagittal plane fewer than one third of the sub-sample showed evidence of change in spinal angles. A significant variation was found in relation to pelvis torsion: 46% of subjects showed improvement, 49% worsening. The method described in this study presents several advantages: it is non-invasive; a complete postural evaluation, yielding many clinically useful 3D and 2D anatomical/biomechanical/clinical parameters, takes relatively little time; and it analyses real, neutral, unconstrained upright standing posture. PMID:28640899

  6. Genetic structured antedependence and random regression models applied to the longitudinal feed conversion ratio in growing Large White pigs.

    PubMed

    Huynh-Tran, V H; Gilbert, H; David, I

    2017-11-01

    The objective of the present study was to compare a random regression model, usually used in genetic analyses of longitudinal data, with the structured antedependence (SAD) model to study the longitudinal feed conversion ratio (FCR) in growing Large White pigs and to propose criteria for animal selection when used for genetic evaluation. The study was based on data from 11,790 weekly FCR measures collected on 1,186 Large White male growing pigs. Random regression (RR) models using orthogonal Legendre polynomials and SAD models were used to estimate genetic parameters and predict FCR-based EBV for each of the 10 wk of the test. The results demonstrated that the best SAD model (first-order antedependence of degree 2 and a polynomial of degree 2 for the innovation variance for the genetic and permanent environmental effects, i.e., 12 parameters) provided a better fit for the data than RR with a quadratic function for the genetic and permanent environmental effects (13 parameters), with Bayesian information criterion values of -10,060 and -9,838, respectively. Heritabilities with the SAD model were higher than those of RR over the first 7 wk of the test. Genetic correlations between weeks were higher than 0.68 for short intervals between weeks and decreased to 0.08 for the SAD model and -0.39 for RR for the longest intervals. These differences in genetic parameters showed that, contrary to the RR approach, the SAD model does not suffer from border effect problems and can handle genetic correlations that tend to 0. Summarized breeding values were proposed for each approach as linear combinations of the individual weekly EBV weighted by the coefficients of the first or second eigenvector computed from the genetic covariance matrix of the additive genetic effects. These summarized breeding values isolated EBV trajectories over time, capturing either the average general value or the slope of the trajectory. 
Finally, applying the SAD model over a reduced period of time suggested that similar selection choices would result from the use of the records from the first 8 wk of the test. To conclude, the SAD model performed well for the genetic evaluation of longitudinal phenotypes.
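The summarized breeding values described above are linear combinations of the weekly EBVs weighted by an eigenvector of the additive-genetic covariance matrix. A minimal sketch follows; the eigenvector would come from the fitted SAD or RR model, and the vectors used in the usage note are purely illustrative.

```python
def summarized_ebv(weekly_ebv, eigenvector):
    """Summarized breeding value: weekly EBVs (weeks 1..10) weighted by the
    first or second eigenvector of the genetic covariance matrix."""
    return sum(e * v for e, v in zip(weekly_ebv, eigenvector))
```

A first eigenvector with roughly equal coefficients captures the average general value of the trajectory; a second eigenvector whose coefficients change sign mid-test captures the slope, since it contrasts late-test against early-test EBVs.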

  7. Changes in the gait characteristics caused by external load, ground slope and velocity variation

    NASA Astrophysics Data System (ADS)

    Mrozowski, Jerzy; Awrejcewicz, Jan

    2011-05-01

    The complexity of the human gait manifests itself in the many parameters that can evoke changes in the walking manner. These can be divided into two groups: inherent, such as anthropometric features or a peculiar psychomotor type, and those related to external conditions. The aim of the paper is to analyze the influence of three parameters, i.e. external load, ground slope and gait velocity, on the locomotion characteristics and gait stability. For different values of these parameters, a film registration of the trajectories of selected kinematic nodes over several gait cycles was carried out. The obtained data were then subjected to numerical calculations aimed at extracting the essential properties of the principal gait characteristics.

  8. Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu

    2016-01-01

    The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization, where the convergence time depends on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. A novel synchronization control strategy for the slave neural network is proposed. By utilizing Filippov discontinuity theory and Lyapunov stability theory, sufficient conditions are provided for selecting the control parameters that ensure synchronization within the required convergence time, even in the presence of parameter uncertainties. Corresponding criteria for tuning the control inputs are also derived for finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Earthquake ground motion: Chapter 3

    USGS Publications Warehouse

    Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy

    2016-01-01

    Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.

  10. Aeroelastic considerations for torsionally soft rotors

    NASA Technical Reports Server (NTRS)

    Mantay, W. R.; Yeager, W. T., Jr.

    1985-01-01

    A research study was initiated to systematically determine the impact of selected blade tip geometric parameters on conformable rotor performance and loads characteristics. The model articulated rotors included baseline and torsionally soft blades with interchangeable tips. Seven blade tip designs were evaluated on the baseline rotor and six tip designs were tested on the torsionally soft blades. The designs incorporated a systematic variation in geometric parameters including sweep, taper, and anhedral. The rotors were evaluated in the NASA Langley Transonic Dynamics Tunnel at several advance ratios, lift and propulsive force values, and tip Mach numbers. A track sensitivity study was also conducted at several advance ratios for both rotors. Based on the test results, tip parameter variations generated significant rotor performance and loads differences for both baseline and torsionally soft blades.

  11. The Predicted Influence of Climate Change on Lesser Prairie-Chicken Reproductive Parameters

    PubMed Central

    Grisham, Blake A.; Boal, Clint W.; Haukos, David A.; Davis, Dawn M.; Boydston, Kathy K.; Dixon, Charles; Heck, Willard R.

    2013-01-01

    The Southern High Plains is anticipated to experience significant changes in temperature and precipitation due to climate change. These changes may influence the lesser prairie-chicken (Tympanuchus pallidicinctus) in positive or negative ways. We assessed the potential changes in clutch size, incubation start date, and nest survival for lesser prairie-chickens for the years 2050 and 2080 based on modeled predictions of climate change and reproductive data for lesser prairie-chickens from 2001–2011 on the Southern High Plains of Texas and New Mexico. We developed 9 a priori models to assess the relationship between reproductive parameters and biologically relevant weather conditions. We selected weather variable(s) with the most model support and then obtained future predicted values from climatewizard.org. We conducted 1,000 simulations using each reproductive parameter’s linear equation obtained from regression calculations, and the future predicted value for each weather variable to predict future reproductive parameter values for lesser prairie-chickens. There was a high degree of model uncertainty for each reproductive value. Winter temperature had the greatest effect size for all three parameters, suggesting a negative relationship between above-average winter temperature and reproductive output. The above-average winter temperatures are correlated to La Niña events, which negatively affect lesser prairie-chickens through resulting drought conditions. By 2050 and 2080, nest survival was predicted to be below levels considered viable for population persistence; however, our assessment did not consider annual survival of adults, chick survival, or the positive benefit of habitat management and conservation, which may ultimately offset the potentially negative effect of drought on nest survival. PMID:23874549

  12. Potential energy function for CH3+CH3 ⇆ C2H6: Attributes of the minimum energy path

    NASA Astrophysics Data System (ADS)

    Robertson, S. H.; Wardlaw, D. M.; Hirst, D. M.

    1993-11-01

    The region of the potential energy surface for the title reaction in the vicinity of its minimum energy path has been predicted from the analysis of ab initio electronic energy calculations. The ab initio procedure employs a 6-31G** basis set and a configuration interaction calculation which uses the orbitals obtained in a generalized valence bond calculation. Calculated equilibrium properties of ethane and of isolated methyl radical are compared to existing theoretical and experimental results. The reaction coordinate is represented by the carbon-carbon interatomic distance. The following attributes are reported as a function of this distance and fit to functional forms which smoothly interpolate between reactant and product values of each attribute: the minimum energy path potential, the minimum energy path geometry, normal mode frequencies for vibrational motion orthogonal to the reaction coordinate, a torsional potential, and a fundamental anharmonic frequency for local mode, out-of-plane CH3 bending (umbrella motion). The best representation is provided by a three-parameter modified Morse function for the minimum energy path potential and a two-parameter hyperbolic tangent switching function for all other attributes. A poorer but simpler representation, which may be satisfactory for selected applications, is provided by a standard Morse function and a one-parameter exponential switching function. Previous applications of the exponential switching function to estimate the reaction coordinate dependence of the frequencies and geometry of this system have assumed the same value of the range parameter α for each property and have taken α to be less than or equal to the ``standard'' value of 1.0 Å-1. Based on the present analysis this is incorrect: The α values depend on the property and range from ˜1.2 to ˜1.8 Å-1.
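The simpler representation mentioned above can be sketched as a standard Morse potential for the minimum energy path plus a one-parameter exponential switching function that interpolates any attribute between its product (C2H6) and reactant (separated CH3 radicals) values along the C-C distance. The numerical constants below are illustrative placeholders, not the paper's fitted parameters.

```python
import math

def morse(r, De=3.9, re=1.53, beta=1.9):
    """Standard Morse potential (illustrative eV/Angstrom units), zero at r = re."""
    return De * (1.0 - math.exp(-beta * (r - re))) ** 2

def switch(r, re=1.53, alpha=1.0):
    """One-parameter exponential switching: 1 at r = re, -> 0 as r -> infinity."""
    return math.exp(-alpha * (r - re))

def attribute(r, product_value, reactant_value, alpha=1.0, re=1.53):
    """Interpolate an attribute (frequency, geometry, ...) along the C-C distance."""
    s = switch(r, re, alpha)
    return reactant_value + (product_value - reactant_value) * s
```

The paper's finding that alpha is property-dependent (roughly 1.2 to 1.8 per Angstrom rather than a universal 1.0) corresponds here to calling `attribute` with a different `alpha` for each interpolated quantity.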

  13. Selection of meteorological conditions to apply in an Ecotron facility

    NASA Astrophysics Data System (ADS)

    Leemans, Vincent; De Cruz, Lesley; Dumont, Benjamin; Hamdi, Rafiq; Delaplace, Pierre; Heinesh, Bernard; Garré, Sarah; Verheggen, François; Theodorakopoulos, Nicolas; Longdoz, Bernard

    2017-04-01

    This presentation aims to propose a generic method for producing meteorological input data for climate research infrastructures such as an Ecotron, where researchers need to generate representative actual or future climatic conditions. Depending on the experimental objectives and research purposes, typical conditions or more extreme values such as dry or wet climatic scenarios might be requested. Four variables were considered here: near-surface air temperature, near-surface relative humidity, cloud cover and precipitation. The meteorological datasets, from which a specific meteorological year can be picked, are produced by the ALARO-0 model of the RMIB (Royal Meteorological Institute of Belgium). Two future climate scenarios (RCP 4.5 and 8.5) and two time periods (2041-2070 and 2071-2100) were used, as well as a historical run of the model (1981-2010), which serves as a reference. When the data from the historical run were compared to the observed historical data, biases were noticed. A linear correction was proposed for all variables except precipitation, for which a non-linear correction (using a power function) was chosen to preserve zero-precipitation occurrences. These transformations removed most of the differences in means and standard deviations between the observations and the historical run of the model. For relative humidity, because of non-linearities, only half of the average bias was corrected, and a different approach might have to be chosen. For the selection of a meteorological year, a position and a dispersion parameter were proposed to characterise each meteorological year for each variable. For precipitation, a third parameter quantifying the importance of dry and wet periods was defined. 
In order to select a specific climate, for each of these nine parameters the experimenter provides a percentile and a weight to prioritise the importance of each variable in the global climate selection. The proposed algorithm computes, for each year, the weighted distance between its parameters and the point representing the requested percentiles in the nine-dimensional space. The five closest years are then selected and presented in different graphs. The proposed method provides a decision aid for selecting the meteorological conditions to be generated within an Ecotron. However, with a limited number of years available in each case (thirty years for each RCP and each time period), there is no perfect match, and the ultimate trade-off remains the responsibility of the researcher. For typical years, close to the median, the relative frequency is higher and the trade-off is easier than for more extreme years, where the relative frequency is low.
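The selection step above can be sketched as a weighted distance in the nine-parameter space followed by picking the closest years. The data structures and the weighted Euclidean metric are assumptions for illustration, not the authors' implementation.

```python
import math

def select_years(years, target, weights, k=5):
    """Rank candidate years by weighted distance to a target point.

    years:   {year: [parameter values]}  (nine per year in the paper)
    target:  [target values], e.g. the requested percentiles per parameter
    weights: [per-parameter weights] expressing the experimenter's priorities
    Returns the k closest years, nearest first.
    """
    def dist(params):
        return math.sqrt(sum(w * (p - t) ** 2
                             for w, p, t in zip(weights, params, target)))
    ranked = sorted(years, key=lambda y: dist(years[y]))
    return ranked[:k]
```

Setting a parameter's weight to zero removes it from the trade-off entirely, which is one simple way to express "I only care about precipitation extremes" in this scheme.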

  14. Influence of the thickness of multilayer matching systems on the transfer function of ultrasonic airborne transducer.

    PubMed

    Opieliński, Krzysztof J; Gudra, Tadeusz

    2002-05-01

    The effective radiation of ultrasonic energy into air by piezoelectric transducers requires multilayer matching systems with accurately selected acoustic impedances and layer thicknesses. This problem is of particular importance for ultrasonic transducers working at frequencies above 1 MHz. Because the choice of materials with the required acoustic impedance is limited (the calculated values cannot always be realised in practice), it is necessary to correct the differences between the theoretical values and the acoustic impedances that are practically available. Such a correction can be made by manipulating other parameters of the matching layers, e.g. their thickness. The efficiency of energy transmission from the piezoceramic transducer through layers of various thicknesses, which enables compensation of non-ideal real impedance values, was analysed by computer. The analysis leads to the conclusion that, from a technological point of view, a layer of defined thickness is easier and faster to produce than a new material with the required acoustic parameters.

  15. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions.

    PubMed

    Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-06-01

    This paper is concerned with fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system synchronize within a finite time segment, but also that such time intervals admit a uniform upper bound over all initial synchronization errors. To accomplish fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of Filippov discontinuity theory and Lyapunov stability theory, sufficient conditions are established for the selection of control parameters that guarantee synchronization within a fixed time, and an upper bound on the settling time is acquired as well, which can be modulated to predefined values independently of initial conditions. Additionally, criteria for a modified controller ensuring fixed-time anti-synchronization are derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R e) and the charge transfer resistance (R ct) as the key parameters for state estimation. Then, using actual in-orbit telemetry data for these key parameters, we obtained the actual residual value (R X) and the healthy residual value (R L) of the LIBs via MSET state estimation, and from these residual values detected anomalous states via SPRT. Lastly, we conducted an example application of AMM for LIBs and, based on its results, validated the feasibility and effectiveness of AMM by comparison with the results of a threshold detection method (TDM). PMID:24587703
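The SPRT stage of AMM can be sketched as a standard Wald sequential test on the residual sequence, deciding between a healthy hypothesis (residuals with zero mean) and a faulty one (mean shifted). The Gaussian model, the shift mu1, and the error rates below are illustrative assumptions, not the paper's settings.

```python
import math

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald SPRT on a residual stream: N(0, sigma^2) vs N(mu1, sigma^2).

    alpha/beta are the tolerated false-alarm / missed-detection rates.
    Returns 'faulty', 'healthy', or 'undecided' (sequence exhausted).
    """
    upper = math.log((1 - beta) / alpha)   # cross -> accept fault hypothesis
    lower = math.log(beta / (1 - alpha))   # cross -> accept healthy hypothesis
    llr = 0.0
    for x in residuals:
        # incremental log-likelihood ratio for one Gaussian sample
        llr += (mu1 / sigma ** 2) * (x - mu1 / 2.0)
        if llr >= upper:
            return "faulty"
        if llr <= lower:
            return "healthy"
    return "undecided"
```

Unlike the fixed-threshold TDM the paper compares against, the decision here accumulates evidence over many samples, so a persistent small bias can trigger detection even when no single residual exceeds a fixed threshold.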

  17. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    One type of artificial neural network is backpropagation. A trained backpropagation network should provide correct output not only for the architecture and data used during training but also for similar inputs it has not seen. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, governing the speed of learning for the network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only over a very long period of time. This study was therefore undertaken to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning rate of the LVQ algorithm; here this LVQ model is modified and applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning rate applied to backpropagation, the learning process becomes faster (fewer epochs).
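The abstract does not give the modified schedule's formula, so as a hedged illustration here is one common LVQ-style learning-rate schedule (a linear decay toward zero) driving an ordinary gradient-descent weight update of the kind used in backpropagation; the constants are arbitrary examples.

```python
def lvq_learning_rate(epoch, total_epochs, alpha0=0.5):
    """Linearly decaying LVQ-style learning rate: alpha0 at epoch 0, -> 0."""
    return alpha0 * (1.0 - epoch / float(total_epochs))

def sgd_step(weights, grads, lr):
    """One backpropagation-style weight update with the current learning rate."""
    return [w - lr * g for w, g in zip(weights, grads)]
```

The point of such schedules is exactly the trade-off the abstract describes: large steps early (fast progress) that shrink over epochs (stability near convergence).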

  18. Detrimental effect of selection for milk yield on genetic tolerance to heat stress in purebred Zebu cattle: Genetic parameters and trends.

    PubMed

    Santana, M L; Pereira, R J; Bignardi, A B; Filho, A E Vercesi; Menéndez-Buxadera, A; El Faro, L

    2015-12-01

    In an attempt to determine the possible detrimental effects of continuous selection for milk yield on the genetic tolerance of Zebu cattle to heat stress, genetic parameters and trends of the response to heat stress for 86,950 test-day (TD) milk yield records from 14,670 first lactations of purebred dairy Gir cows were estimated. A random regression model with regression on days in milk (DIM) and temperature-humidity index (THI) values was applied to the data. The most detrimental effect of THI on milk yield was observed in the stage of lactation with higher milk production, DIM 61 to 120 (-0.099 kg/d per THI). Although modest variations were observed for the THI scale, a reduction in additive genetic variance as well as in permanent environmental and residual variance was observed with increasing THI values. The heritability estimates showed a slight increase with increasing THI values for any DIM. The correlations between additive genetic effects across the THI scale showed that, for most of the THI values, genotype by environment interactions due to heat stress were less important for the ranking of bulls. However, for extreme THI values, this type of genotype by environment interaction may lead to an important error in selection. As a result of the selection for milk yield practiced in the dairy Gir population for 3 decades, the genetic trend of cumulative milk yield was significantly positive for production in both high (51.81 kg/yr) and low THI values (78.48 kg/yr). However, the difference between the breeding values of animals at high and low THI may be considered alarming (355 kg in 2011). The genetic trends observed for the regression coefficients related to general production level (intercept of the reaction norm) and specific ability to respond to heat stress (slope of the reaction norm) indicate that the dairy Gir population is heading toward a higher production level at the expense of lower tolerance to heat stress. 
These trends reflect the genetic antagonism between production and tolerance to heat stress demonstrated by the negative genetic correlation between these components (-0.23). Monitoring trends of the genetic component of heat stress would be a reasonable measure to avoid deterioration in one of the main traits of Zebu cattle (i.e., high tolerance to heat stress). On the basis of current genetic trends, the need for future genetic evaluation of dairy Zebu animals for tolerance to heat stress cannot be ruled out. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Parameter sensitivity analysis and optimization for a satellite-based evapotranspiration model across multiple sites using Moderate Resolution Imaging Spectroradiometer and flux data

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li

    2017-01-01

    Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale using 44 flux towers. These towers are distributed across a wide range of ecosystems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land-surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically based formulations and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that it consistently overestimates evapotranspiration in arid regions, likely because water limitation and energy partitioning are misrepresented in the model. By analyzing the physiological processes and determining the sensitive parameters, we identified a series of parameter sets that increase model performance. The model with optimized parameters performed better (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14-0.85). The optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well with the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model provides an efficient way to improve model performance.
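The Nash-Sutcliffe efficiency quoted above follows its standard definition (1 is a perfect fit, 0 matches the observed mean, negative is worse than the mean); a minimal sketch for paired observed/modelled daily ET series:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs observed values."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))   # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # variance around the mean
    return 1.0 - ss_res / ss_tot
```

The strongly negative NSE values reported for the original model in arid regions (down to -12.14) mean its predictions there were far worse than simply predicting the site's mean ET, which is why the β (soil-water control) re-optimization matters most at those sites.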

  20. Seasonal variation in the biochemical composition of red seaweed ( Catenella repens) from Gangetic delta, northeast coast of India

    NASA Astrophysics Data System (ADS)

    Banerjee, Kakoli; Ghosh, Rajrupa; Homechaudhuri, Sumit; Mitra, Abhijit

    2009-10-01

    The biochemical composition of the red seaweed Catenella repens was investigated in the present study, along with analysis of the relevant physico-chemical variables, to establish the relationship between the nutritive components of this species and the ambient environmental parameters. Protein content varied from 2.78 ± 0.30% of dry weight (stn.3) to 16.03 ± 0.96% of dry weight (stn.1), with the highest values during the monsoon. Protein levels were positively correlated with dissolved nitrate content and negatively correlated with water temperature (except stn.3) and salinity. Carbohydrate content varied significantly (p < 0.05) between stations during the pre-monsoon, and the values showed a positive relationship with salinity and surface water temperature. In contrast to carbohydrate, lipid concentration was lowest in value and varied only slightly between seasons and stations. Astaxanthin content was greater in the pre-monsoon than in the monsoon and post-monsoon at all selected stations. Across the three seasons, samples collected in the pre-monsoon had high carbohydrate and astaxanthin contents, in contrast to protein and lipid, which showed high values during the monsoon. Statistical analysis of the environmental and biochemical parameters suggests that the abiotic parameters play a potential role in the biosynthetic pathways of the seaweed. The paper also highlights the influence of the nutritional quality of the water, which can be exploited for mass cultivation of Catenella repens.

  1. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  2. Effects of primary selective laser trabeculoplasty on anterior segment parameters

    PubMed Central

    Guven Yilmaz, Suzan; Palamar, Melis; Yusifov, Emil; Ates, Halil; Egrilmez, Sait; Yagci, Ayse

    2015-01-01

    AIM To investigate the effects of selective laser trabeculoplasty (SLT) on the main numerical parameters of the anterior segment with the Pentacam rotating Scheimpflug camera in patients with ocular hypertension (OHT) and primary open angle glaucoma (POAG). METHODS Pentacam measurements of 45 eyes of 25 (15 females and 10 males) patients (12 with OHT, 13 with POAG) before and after SLT were obtained. Measurements were taken before and 1 and 3mo after SLT. Pentacam parameters were compared between OHT and POAG patients, and between age groups (60y and older, and younger than 60y). RESULTS The mean age of the patients was 57.8±13.9 (range 20-77y). Twelve patients (48%) were younger than 60y, while 13 patients (52%) were 60y and older. Measurements at pre-SLT and post-SLT 1mo were significantly different for the parameters of central corneal thickness (CCT) and anterior chamber volume (ACV) (P<0.05). These parameters returned to pre-SLT values at post-SLT 3mo. The decrease in ACV at post-SLT 1mo was significantly greater in the younger-than-60y group than in the 60y-and-older group. There was no statistically significant difference in Pentacam parameters between OHT and POAG patients at pre- and post-treatment measurements (P>0.05). CONCLUSION SLT leads to a significant increase in CCT and a decrease in ACV in the 1st month after the procedure. Effects of SLT on these anterior segment parameters, especially CCT, which interferes with IOP measurement, should be considered to ensure accurate clinical interpretation. PMID:26558208

  3. A search for optimal parameters of resonance circuits ensuring damping of electroelastic structure vibrations based on the solution of natural vibration problem

    NASA Astrophysics Data System (ADS)

    Oshmarin, D.; Sevodina, N.; Iurlov, M.; Iurlova, N.

    2017-06-01

    In this paper, with the aim of providing passive control of structure vibrations, a new approach is proposed for selecting optimal parameters of external electric shunt circuits connected to piezoelectric elements located on the surface of the structure. The approach is based on the mathematical formulation of the natural vibration problem. The results of the solution of this problem are the complex eigenfrequencies, whose real part represents the vibration frequency and whose imaginary part corresponds to the damping ratio, characterizing the rate of damping. A criterion for the search for optimal parameters of the external passive shunt circuits, which can provide the system with the desired dissipative properties, has been derived based on the analysis of the responses of the real and imaginary parts of different complex eigenfrequencies to changes in the values of the parameters of the electric circuit. The efficiency of this approach has been verified in the context of the natural vibration problem of a rigidly clamped plate and a semi-cylindrical shell, which is solved for series-connected and parallel-connected external resonance R-L circuits (consisting of resistive and inductive elements). It has been shown that at lower (more energy-intensive) frequencies, a series-connected external circuit has the advantage of providing lower values of the circuit parameters, which renders it more attractive in terms of practical applications.

  4. Self-association of plant wax components: a thermodynamic analysis.

    PubMed

    Casado, C G; Heredia, A

    2001-01-01

    Excess specific heat, C_p^E, of binary mixtures of selected components of plant cuticular waxes has been determined. This thermodynamic parameter gives an explanation of the special molecular arrangement in crystalline and amorphous zones of plant waxes. C_p^E values indicate that hydrogen bonding between chains results in the formation of amorphous zones. Conclusions on the self-assembly process of plant waxes are also drawn.

  5. Chaos and Order in Weakly Coupled Systems of Nonlinear Oscillators

    NASA Astrophysics Data System (ADS)

    Bruhn, B.

    1987-01-01

    We consider in this paper perturbations of two degree of freedom Hamiltonian systems which contain periodic and heteroclinic orbits. The Melnikov-Keener condition is used to prove the existence of horseshoes in the dynamics. The same condition is applied to prove a high degree of order in the motion of the swinging Atwood's machine. For some selected parameter values the theoretical predictions are checked by numerical calculations.
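
    The Melnikov machinery can be illustrated on the standard weakly forced, damped Duffing oscillator, a textbook example (Guckenheimer and Holmes), not the Atwood's machine analyzed here: the Melnikov function has simple zeros, signalling transverse homoclinic points and hence horseshoes, when the forcing-to-damping ratio exceeds a frequency-dependent threshold.

```python
import math

def melnikov_chaos_ratio(omega):
    """Critical gamma/delta ratio for the Duffing oscillator
        x'' = x - x^3 + eps * (gamma*cos(omega*t) - delta*x'),
    whose Melnikov function along the homoclinic orbit is
        M(t0) = -(4/3)*delta
                + sqrt(2)*pi*gamma*omega*sech(pi*omega/2)*sin(omega*t0).
    Simple zeros (and thus horseshoes) exist when gamma/delta exceeds
        4*cosh(pi*omega/2) / (3*sqrt(2)*pi*omega)."""
    return 4.0 * math.cosh(math.pi * omega / 2.0) / (
        3.0 * math.sqrt(2.0) * math.pi * omega)

# the threshold grows rapidly with omega: high-frequency forcing must be
# much stronger relative to the damping to produce chaotic dynamics
for w in (0.5, 1.0, 2.0):
    print(w, melnikov_chaos_ratio(w))
```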

  6. Development of a health literacy assessment for young adult college students: a pilot study.

    PubMed

    Harper, Raquel

    2014-01-01

    The purpose of this study was to develop a comprehensive health literacy assessment tool for young adult college students. Participants were 144 undergraduate students. Two hundred and twenty-nine questions were developed, which were based on concepts identified by the US Department of Health and Human Services, the World Health Organization, and health communication scholars. Four health education experts reviewed this pool of items and helped select 87 questions for testing. Students completed an online assessment consisting of these 87 questions in June and October of 2012. Item response theory and goodness-of-fit values were used to help eliminate nonperforming questions. Fifty-one questions were selected based on good item response theory discrimination parameter values. The instrument has 51 questions that look promising for measuring health literacy in college students, but needs additional testing with a larger student population to see how these questions continue to perform.
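
    The item-selection step can be illustrated with the two-parameter logistic (2PL) IRT model; the item pool and the discrimination cutoff of 0.8 below are hypothetical illustrations, not the study's actual items or criterion.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a respondent
    with ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# hypothetical item pool: (item_id, discrimination a, difficulty b)
items = [(1, 1.6, -0.5), (2, 0.3, 0.2), (3, 1.1, 1.0), (4, 0.15, -1.2)]

# keep items whose discrimination exceeds a chosen cutoff; items with low a
# barely separate low- from high-ability respondents and are dropped
selected = [i for (i, a, b) in items if a >= 0.8]
print(selected)
```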

  7. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable form in a higher-dimensional feature space. Various kernel functions can be used, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of using randomly selected kernel parameters. The best accuracies were improved from the baseline values of linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for bigger data sizes, this method is not practical because it takes a lot of time.
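
    A toy sketch of GA-based hyperparameter search over an RBF-SVM's (C, gamma): the fitness function here is a stand-in surface peaking near C = 10, gamma = 0.1 (a real run would substitute cross-validated SVM accuracy on the credit data), and the population size, rates, ranges, and seed are all assumed illustrative values.

```python
import math
import random

random.seed(42)

def fitness(c, gamma):
    """Stand-in for cross-validated SVM accuracy; a real implementation
    would train an RBF-kernel SVM with these hyperparameters."""
    return math.exp(-((math.log10(c) - 1.0) ** 2
                      + (math.log10(gamma) + 1.0) ** 2))

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def random_individual():
    # log-uniform sampling: C in [1e-2, 1e4], gamma in [1e-5, 1e1]
    return (10 ** random.uniform(-2, 4), 10 ** random.uniform(-5, 1))

def evolve(pop_size=20, generations=30, mut_rate=0.5, mut_sigma=0.3):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = random.sample(parents, 2)
            c, g = pa[0], pb[1]                 # crossover: mix coordinates
            if random.random() < mut_rate:      # log-normal mutation
                c = clamp(c * 10 ** random.gauss(0, mut_sigma), 1e-2, 1e4)
            if random.random() < mut_rate:
                g = clamp(g * 10 ** random.gauss(0, mut_sigma), 1e-5, 1e1)
            children.append((c, g))
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

best_c, best_gamma = evolve()
```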

  8. Biometric parameters in different stages of primary angle closure using low-coherence interferometry.

    PubMed

    Yazdani, Shahin; Akbarian, Shadi; Pakravan, Mohammad; Doozandeh, Azadeh; Afrouzifar, Mohsen

    2015-03-01

    To compare ocular biometric parameters using low-coherence interferometry among siblings affected with different degrees of primary angle closure (PAC). In this cross-sectional comparative study, a total of 170 eyes of 86 siblings from 47 families underwent low-coherence interferometry (LenStar 900; Haag-Streit, Koeniz, Switzerland) to determine central corneal thickness, anterior chamber depth (ACD), aqueous depth (AD), lens thickness (LT), vitreous depth, and axial length (AL). Regression coefficients were applied to show the trend of the measured variables in different stages of angle closure. To evaluate the discriminative power of the parameters, receiver operating characteristic curves were used. Best cutoff points were selected based on the Youden index. Sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and diagnostic accuracy were determined for each variable. All biometric parameters changed significantly from normal eyes to PAC suspects, PAC, and PAC glaucoma; there was a significant stepwise decrease in central corneal thickness, ACD, AD, vitreous depth, and AL, and an increase in LT and LT/AL. Anterior chamber depth and AD had the best diagnostic power for detecting angle closure; best levels of sensitivity and specificity were obtained with cutoff values of 3.11 mm for ACD and 2.57 mm for AD. Biometric parameters measured by low-coherence interferometry demonstrated a significant and stepwise change among eyes affected with various degrees of angle closure. Although the current classification scheme for angle closure is based on anatomical features, it has excellent correlation with biometric parameters.
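
    The Youden-index cutoff selection described above can be sketched as follows; the ACD values and disease labels are hypothetical illustrations (the study's 3.11 mm and 2.57 mm cutoffs are not reproduced here), and a smaller value is taken to indicate disease, since shallower chambers accompany angle closure.

```python
def best_cutoff(values, labels):
    """Pick the cutoff maximizing the Youden index
    J = sensitivity + specificity - 1; a case is called positive
    when value <= cutoff (smaller values indicate disease here)."""
    best = (None, -1.0)
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v <= c and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v > c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v > c and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v <= c and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best[1]:
            best = (c, j)
    return best

# hypothetical ACD values [mm]; 1 = angle closure, 0 = normal
acd = [2.4, 2.7, 2.9, 3.0, 3.2, 3.4, 3.6, 3.8]
status = [1, 1, 1, 1, 0, 0, 0, 0]
cutoff, j = best_cutoff(acd, status)  # 3.0 mm separates the toy data perfectly
```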

  9. Kinetics modelling of color deterioration during thermal processing of tomato paste with the use of response surface methodology

    NASA Astrophysics Data System (ADS)

    Ganje, Mohammad; Jafari, Seid Mahdi; Farzaneh, Vahid; Malekjani, Narges

    2018-06-01

    To study the kinetics of color degradation, tomato paste was processed at three different temperatures (60, 70 and 80 °C) for 25, 50, 75 and 100 min. The a/b ratio, total color difference (TCD), saturation index and hue angle were calculated using the three main color parameters: L (lightness), a (redness-greenness) and b (yellowness-blueness). The temperature dependence of color degradation was described with the Arrhenius equation, and the alterations were modelled with the use of response surface methodology (RSM). All of the studied responses followed first-order reaction kinetics, with the exception of the TCD parameter (zeroth order). TCD and a/b, with the highest and lowest activation energies respectively, presented the highest and lowest sensitivity to temperature alterations. The maximum and minimum rates of alteration were observed for the TCD and b parameters, respectively. All of the studied responses were clearly affected by the selected independent parameters.
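
    The kinetic workflow (first-order fit at each temperature, then activation energy from an Arrhenius plot of ln k versus 1/T) can be sketched as follows; the data are synthetic, generated from an assumed Ea of 50 kJ/mol, not the paper's measurements.

```python
import math

R_GAS = 8.314  # J / (mol K)

def rate_constant(times, ratios):
    """First-order kinetics ln(C/C0) = -k*t, fitted through the origin."""
    num = sum(t * math.log(r) for t, r in zip(times, ratios))
    den = sum(t * t for t in times)
    return -num / den

def arrhenius_ea(temps_k, ks):
    """Slope of ln k vs 1/T equals -Ea/R (ordinary least squares)."""
    xs = [1.0 / T for T in temps_k]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return -slope * R_GAS  # J/mol

# synthetic data from k(T) = 1e6 * exp(-50000/(R*T)) to check the recovery
times = [25.0, 50.0, 75.0, 100.0]              # min, as in the study design
temps = [333.15, 343.15, 353.15]               # 60, 70, 80 degC in kelvin
ks_true = [1e6 * math.exp(-50000.0 / (R_GAS * T)) for T in temps]
ratios = [[math.exp(-k * t) for t in times] for k in ks_true]
ks_fit = [rate_constant(times, r) for r in ratios]
ea = arrhenius_ea(temps, ks_fit)               # recovers ~50 kJ/mol
```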

  10. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is however non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  11. Application of an Optical Model to the Interaction of the $pi$ Meson with the Nucleus in the $pi$ Mesic Atom (thesis); APPLICATION D'UN MODELE OPTIQUE POUR L'INTERACTION DU MESON $pi$ MESIQUE (THESE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berthet, M.

    1963-01-01

    The energy levels and their displacement ΔE with respect to that of a meson placed in a coulomb potential are determined and compared with the experimental values. This comparison permits the selection of values for the parameters introduced by the hypothesis of the optical model. The absorption in the nucleus is studied using the hamiltonian of the nucleon-pi meson interaction and not the optical model. The results are compared with experimental values. As an introduction, the exact form of the interaction of mesons with nuclei is defined by adopting the optical model. (J.S.R.)

  12. A full-Bayesian approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod experimental site

    USGS Publications Warehouse

    Woodbury, Allan D.; Rubin, Yoram

    2000-01-01

    A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. 
No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.

  13. Biological optimization of simultaneous boost on intra-prostatic lesions (DILs): sensitivity to TCP parameters.

    PubMed

    Azzeroni, R; Maggio, A; Fiorino, C; Mangili, P; Cozzarini, C; De Cobelli, F; Di Muzio, N G; Calandrino, R

    2013-11-01

    The aim of this investigation was to explore the potential of biological optimization in the case of a simultaneous integrated boost on intra-prostatic dominant lesions (DILs) and to evaluate the impact of TCP parameter uncertainty. Different combinations of TCP parameters (TD50 and γ50 in the Poisson-like model) were considered for the DILs and the prostate outside the DILs (CTV) for 7 intermediate/high-risk prostate patients. The aim was to maximize TCP while constraining NTCPs below 5% for all organs at risk. TCP values were highly dependent on the parameters used and ranged between 38.4% and 99.9%; the optimized median physical doses were in the range 94-116 Gy and 69-77 Gy for the DIL and CTV respectively. TCP values were correlated with the overlap PTV-rectum and the minimum distance between rectum and DIL. In conclusion, biological optimization for selective dose escalation is feasible and suggests prescribed doses around 90-120 Gy to the DILs. The obtained result depends critically on the assumptions concerning the higher radioresistance of the DILs. In the case of very resistant clonogens in the DIL, it may be difficult to maximize TCP to acceptable levels without violating NTCP constraints. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
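
    One common Poisson-like TCP parameterization in terms of TD50 and γ50 is sketched below to show how sensitive the dose required for a target control level is to the assumed parameters; the exact functional form used by the authors may differ, and the parameter pairs are illustrative, not the paper's values.

```python
import math

def tcp_poisson(d, td50, gamma50):
    """Poisson-like tumour control probability with TD50 (dose giving 50%
    control) and gamma50 (normalized slope at TD50); one common
    parameterization, TCP(TD50) = 0.5 by construction."""
    return 2.0 ** (-math.exp(math.e * gamma50 * (1.0 - d / td50)))

# dose needed for 90% TCP under two assumed (TD50, gamma50) pairs: the
# required boost dose shifts by roughly 10 Gy between the two assumptions
for td50, g50 in ((80.0, 2.0), (95.0, 3.0)):
    d = 0.0
    while tcp_poisson(d, td50, g50) < 0.90:
        d += 0.1
    print(td50, g50, round(d, 1))
```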

  14. [NUTRITIONAL STATUS BY ANTHROPOMETRIC AND BIOCHEMICAL PARAMETERS OF COLLEGE BASKETBALL PLAYERS].

    PubMed

    Godoy-Cumillaf, Andrés Esteban Roberto; Cárcamo-Araneda, Cristian Rodolfo; Hermosilla-Rodríguez, Freddy Patricio; Oyarzún-Ruiz, Jean Pierre; Viveros-Herrera, José Francisco Javier

    2015-12-01

    University students' class schedules, study hours, and budget shortages, among other factors, often prevent them from maintaining good eating habits and favor sedentary behavior. Within this context are the sports teams, which must deal with the above. The objective was to determine the nutritional status of a group of college basketball players (BU) by anthropometric and biochemical parameters. The research used a non-experimental, descriptive, cross-sectional design with a quantitative approach. The sample was selected non-probabilistically and included 12 players. The anthropometric parameters assessed were body mass index (BMI), somatotype, and body composition; the biochemical parameters were glucose, triglycerides, and cholesterol. The players have a BMI of 24.6 (kg/m2), are classified as endo-mesomorphic (5.5-4.3-1.2), and have 39.9% fat mass and 37.8% muscle mass; glucose values are 68.7 (mg/dl), triglycerides 128 (mg/dl), and cholesterol 189 (mg/dl). The BU have normal values for BMI and the biochemical parameters, but a closer look reveals a large amount of adipose tissue, as reported by body composition and somatotype, a situation that could be related to poor eating habits; however, further study is required to reach a categorical conclusion. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  15. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. 
It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
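
    The NRCS curve number relation underlying the back-calculation is standard; a sketch in inch units with the default initial abstraction ratio λ = 0.2 (the study also varied this ratio):

```python
def scs_runoff(p_in, cn, ia_ratio=0.2):
    """NRCS curve number method (inch units):
    S = 1000/CN - 10, Ia = ia_ratio * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0."""
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# the back-calculation described in the study inverts this relation: given a
# storm depth P and the direct runoff Q simulated by HSPF, solve for the CN
# that reproduces Q
print(scs_runoff(5.0, 80))   # direct runoff for a 5 in storm at CN = 80
```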

  16. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant.
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870

  17. Optimization of fermentation parameters to study the behavior of selected lactic cultures on soy solid state fermentation.

    PubMed

    Rodríguez de Olmos, A; Bru, E; Garro, M S

    2015-03-02

    Solid state fermentation (SSF) has gained appreciation with the demand for natural and healthy products. Lactic acid bacteria and bifidobacteria play a leading role in the production of novel functional foods, and their behavior is practically unknown in these systems. Soy is an excellent substrate for the production of functional foods owing to its low cost and nutritional value. The aim of this work was to optimize different parameters involved in SSF using selected lactic cultures to improve the soybean substrate as a possible strategy for the elaboration of new soy foods with enhanced functional and nutritional properties. Soy flour and selected lactic cultures were used under different conditions to optimize the soy SSF. The measured responses were bacterial growth, free amino acids, and β-glucosidase activity, which were analyzed by applying response surface methodology. Based on the proposed statistical model, different fermentation conditions were tested by varying the moisture content (50-80%) of the soy substrate and the temperature of incubation (31-43°C). The effect of inoculum amount was also investigated. These studies demonstrated the ability of the selected strains (Lactobacillus paracasei subsp. paracasei and Bifidobacterium longum) to grow with strain-dependent behavior in the SSF system. β-Glucosidase activity was evident in both strains, and L. paracasei subsp. paracasei was able to increase the free amino acids at the end of fermentation under the assayed conditions. The statistical model used has allowed the optimization of fermentation parameters for soy SSF by selected lactic strains. In addition, the possibility of working with lower initial bacterial amounts to obtain results with significant technological impact was demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
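
    The response-surface step can be sketched as an ordinary-least-squares fit of a second-order polynomial in two coded factors (moisture, temperature); the design points and response below are synthetic, generated from a known surface so the fit can be checked, not the study's measurements.

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def fit_quadratic_rsm(points, responses):
    """Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    by ordinary least squares via the normal equations."""
    rows = [[1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2] for x1, x2 in points]
    k = 6
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
           for i in range(k)]
    atb = [sum(r[i] * y for r, y in zip(rows, responses)) for i in range(k)]
    return solve(ata, atb)

# full 3x3 factorial in coded moisture (x1) and temperature (x2); synthetic
# response from the true surface y = 2 + x1 - 0.5*x2 - x1^2
design = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0),
          (-1, 0), (1, 0), (0, -1), (0, 1)]
y = [2 + x1 - 0.5 * x2 - x1 * x1 for x1, x2 in design]
coeffs = fit_quadratic_rsm(design, y)   # recovers [2, 1, -0.5, -1, 0, 0]
```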

  18. Bearing damage assessment using Jensen-Rényi Divergence based on EEMD

    NASA Astrophysics Data System (ADS)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2017-03-01

    An Ensemble Empirical Mode Decomposition (EEMD) and Jensen-Rényi divergence (JRD) based methodology is proposed for the degradation assessment of rolling element bearings using vibration data. The EEMD decomposes vibration signals into a set of intrinsic mode functions (IMFs). A systematic methodology to select IMFs that are sensitive and closely related to the fault is proposed in the paper. The change in the probability distribution of the energies of the sensitive IMFs is measured through the JRD, which acts as a damage identification parameter. Evaluating the JRD on sensitive IMFs makes it largely unaffected by changes or fluctuations in operating conditions. Further, an algorithm based on Chebyshev's inequality is applied to the JRD to identify exact points of change in bearing health and remove outliers. The identified change points are investigated for fault classification as possible locations where specific defect initiation could have taken place. For fault classification, two new parameters are proposed: the 'α value' and the Probable Fault Index, which together classify the fault. To standardize the degradation process, a Confidence Value parameter is proposed to quantify the bearing degradation in a range of zero to unity. A simulation study is first carried out to demonstrate the robustness of the proposed JRD parameter under variable operating conditions of load and speed. The proposed methodology is then validated on experimental data (seeded defect data and accelerated bearing life test data). The first validation, on two different vibration datasets (inner/outer) obtained from seeded defect experiments, demonstrates the effectiveness of the JRD parameter in detecting a change in health state as the severity of the fault changes. The second validation is on two accelerated life tests. The results demonstrate the proposed approach as a potential tool for bearing performance degradation assessment.
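
    The divergence measure itself can be sketched directly: the Jensen-Rényi divergence is the Rényi entropy of the weighted mixture minus the weighted mean of the individual entropies. The IMF energy distributions below are hypothetical, and the order α = 2 is an assumption, not necessarily the value used in the paper.

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1)."""
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

def jensen_renyi(dists, weights, alpha=2.0):
    """Jensen-Renyi divergence: entropy of the weighted mixture minus the
    weighted mean of the individual entropies. Zero when all distributions
    coincide; it grows as the distributions drift apart."""
    n = len(dists[0])
    mix = [sum(w * d[j] for w, d in zip(weights, dists)) for j in range(n)]
    return renyi_entropy(mix, alpha) - sum(
        w * renyi_entropy(d, alpha) for w, d in zip(weights, dists))

# hypothetical IMF energy distributions for a healthy and a degrading bearing
healthy = [0.40, 0.30, 0.20, 0.10]
degraded = [0.10, 0.15, 0.25, 0.50]
w = [0.5, 0.5]
d_same = jensen_renyi([healthy, healthy], w)    # 0: identical health states
d_diff = jensen_renyi([healthy, degraded], w)   # > 0: distribution shifted
```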

  19. Exemplifying the Effects of Parameterization Shortcomings in the Numerical Simulation of Geological Energy and Mass Storage

    NASA Astrophysics Data System (ADS)

    Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk

    2016-04-01

    Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations begin with a parameter set that is as realistic as possible. Then, a base scenario is calibrated against field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization and its input data, and such a review so far hardly exists to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying such a parameter. For other parameters, the lack of data quantity and quality means that varying the parameter value in numerical scenario simulations fundamentally changes the ongoing processes. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas.
As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or recognized ignorance must even be attested to a parameter or process in question, the outcomes of the simulations depend mainly on the decisions of the modeler in choosing parameter values or interpreting the occurrence of processes. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, will be presented in this study.

  20. Identifiability of sorption parameters in stirred flow-through reactor experiments and their identification with a Bayesian approach.

    PubMed

    Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y

    2016-10-01

    This paper addresses the methodological conditions -particularly experimental design and statistical inference- ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as the nonequilibrium parameterization embedding the K_d approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because K_d,1 and k^- were diagnosed as collinear. For pairs of CSTR experiments, Bayesian inference allowed the selection of the correct models of sorption and error among the alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting of varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to the maximum likelihood method, but they avoided convergence problems; the marginal likelihood allowed comparison of all models, and credible intervals gave directly the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values, and the error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.
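
    The collinearity problem can be illustrated with a simpler, equilibrium-only analogue of the EK model (the flow rate and reactor volume below are assumed values): two parameter sets sharing the same product m*Kd produce identical breakthrough curves, so a single curve cannot separate them, which is why varying the flow rate between runs is needed.

```python
import math

def breakthrough(t, q, v, m, kd, c_in=1.0):
    """Outlet concentration of a CSTR with instantaneous linear sorption:
    retardation R = 1 + (m/V)*Kd and C(t) = C_in*(1 - exp(-Q*t/(V*R))).
    A simplified equilibrium-only analogue of the EK model in the paper."""
    r = 1.0 + (m / v) * kd
    return c_in * (1.0 - math.exp(-q * t / (v * r)))

q, v = 2.0e-3, 0.04   # flow [L/min] and reactor volume [L] (assumed)
ts = [float(k) for k in range(0, 301, 30)]

# two parameter sets with the same product m*Kd = 0.08: the curves coincide,
# so m and Kd are not separately identifiable from one breakthrough curve
curve_a = [breakthrough(t, q, v, m=0.01, kd=8.0) for t in ts]
curve_b = [breakthrough(t, q, v, m=0.04, kd=2.0) for t in ts]
assert all(abs(a - b) < 1e-12 for a, b in zip(curve_a, curve_b))
```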
