NASA Astrophysics Data System (ADS)
Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza
2018-03-01
This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and the results of a calibrated numerical water quality simulation model. In the VOI framework, the water quality at each checkpoint is described by a prior probability that varies over time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with the maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. Because the results of the two methodologies partially differ, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in southwestern Iran.
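For illustration, the Bayesian updating and VOI computation described above can be sketched for a single candidate station with two discrete water-quality states; all probabilities and loss values below are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Minimal sketch of the VOI calculation for one candidate monitoring station.
# States: water quality "acceptable" vs "polluted"; all numbers are illustrative.
prior = np.array([0.7, 0.3])                      # prior P(state) at this checkpoint
likelihood = np.array([[0.9, 0.2],                # P(sample result | state)
                       [0.1, 0.8]])               # rows: result "clean" / "exceeds limit"
loss = np.array([[0.0, 10.0],                     # loss(action, state): "do nothing"
                 [2.0,  1.0]])                    # vs "intervene"

# Expected loss of the best action using the prior only (no sampling).
prior_loss = min(loss @ prior)

# Expected loss when acting on the posterior after observing the sample result.
post_loss = 0.0
for r in range(2):
    p_result = likelihood[r] @ prior              # marginal probability of result r
    posterior = likelihood[r] * prior / p_result  # Bayes' theorem
    post_loss += p_result * min(loss @ posterior)

voi = prior_loss - post_loss                      # value of sampling this station
print(f"VOI of this station: {voi:.3f}")
```

Repeating this calculation for every potential monitoring point and picking the largest VOI mirrors the station-selection step described above.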
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if variograms of several soil properties were available for a similar parent material and/or particular topographic positions, an average variogram could be calculated from them. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
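A minimal numerical sketch of the half-range rule described above, assuming hypothetical exponential-variogram ranges for several soil properties on one parent material:

```python
import numpy as np

# Hypothetical effective ranges (m) of exponential variograms fitted to several
# soil properties sampled on the same parent material.
ranges_m = np.array([120.0, 150.0, 90.0, 180.0])

avg_range = ranges_m.mean()
suggested_interval = 0.5 * avg_range   # sample at less than half the average range
print(f"Average range: {avg_range:.0f} m -> suggested grid spacing < {suggested_interval:.0f} m")

# Standardized average variogram: average unit-sill exponential models on a common lag grid.
lags = np.linspace(0, 300, 61)
gammas = [1.0 - np.exp(-3.0 * lags / a) for a in ranges_m]
avg_variogram = np.mean(gammas, axis=0)
print(np.round(avg_variogram[::15], 2))
```

The standardized average variogram computed this way could then be inserted into the kriging equations to find the spacing that meets a tolerable kriging error, the second approach mentioned above.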
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
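The on-line/off-line estimator pair described above can be illustrated with a minimal scalar Kalman filter and RTS smoother; the AR(1)-style state model, noise variances, and sparse log-concentration series below are illustrative and do not reproduce the published generalized-least-squares parameterization.

```python
import numpy as np

# Minimal scalar Kalman filter / RTS smoother sketch for a log-concentration state.
phi, q, r = 0.95, 0.05, 0.2        # state transition, process variance, measurement variance
y = np.array([2.1, np.nan, np.nan, 2.6, np.nan, 3.0])   # sparse log-concentration samples

n = len(y)
x_pred = np.zeros(n); P_pred = np.zeros(n)
x_filt = np.zeros(n); P_filt = np.zeros(n)
x, P = y[0], 1.0                                    # crude initialization at the first sample
for t in range(n):
    x, P = phi * x, phi**2 * P + q                  # predict
    x_pred[t], P_pred[t] = x, P
    if not np.isnan(y[t]):                          # update only on sampled days (on-line estimate)
        K = P / (P + r)
        x, P = x + K * (y[t] - x), (1 - K) * P
    x_filt[t], P_filt[t] = x, P

x_smooth = x_filt.copy()
for t in range(n - 2, -1, -1):                      # RTS smoother (off-line estimate)
    C = P_filt[t] * phi / P_pred[t + 1]
    x_smooth[t] = x_filt[t] + C * (x_smooth[t + 1] - x_pred[t + 1])

print(np.round(x_smooth, 3))
```

The smoother uses observations on both sides of each gap, which is why the off-line estimator reports the lower standard errors quoted above.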
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
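A minimal sketch of fitting a parametric distribution to interval-censored retention-time data by maximizing the interval (cumulative-probability) likelihood; the lognormal parameters, sampling intervals, and simulated data are illustrative.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
true_dist = stats.lognorm(s=0.6, scale=8.0)          # "true" retention-time distribution (h)
retention = true_dist.rvs(500, random_state=rng)

edges = np.arange(0, 49, 4.0)                        # 4-h sampling intervals up to 48 h
counts, _ = np.histogram(retention, bins=edges)
n_total = len(retention)                             # includes propagules recovered after 48 h

def neg_loglik(params):
    s, scale = params
    if s <= 0 or scale <= 0:
        return np.inf
    cdf = stats.lognorm.cdf(edges, s=s, scale=scale)
    p_int = np.diff(cdf)                             # probability of each sampling interval
    p_tail = 1.0 - cdf[-1]                           # probability beyond the last interval
    n_tail = n_total - counts.sum()
    return -(counts * np.log(p_int + 1e-12)).sum() - n_tail * np.log(p_tail + 1e-12)

res = optimize.minimize(neg_loglik, x0=[1.0, 5.0], method="Nelder-Mead")
print("estimated (s, scale):", np.round(res.x, 3))
```

Fitting the interval probabilities directly, rather than treating lower, mid, or upper bounds as exact observations, is the cumulative-probability approach that performed best in the study.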
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage saving for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for computation with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of the directional sampling interval in the discretization.
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
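A minimal sketch of how candidate sampling schedules can be compared through a D-criterion computed from a Fisher information matrix; this fixed-effects, FO-style approximation for a hypothetical one-compartment model omits the random-effect blocks that distinguish the full and block-diagonal population FIMs discussed above.

```python
import numpy as np

# Sensitivities of a one-compartment oral-absorption prediction with respect
# to (ka, ke, V); all values (parameters, dose, error) are illustrative.
def predictions_jacobian(t, p0=(1.0, 0.2, 10.0), dose=100.0, h=1e-6):
    def conc(p):
        ka, ke, V = p
        return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    p0 = np.asarray(p0, float)
    J = np.empty((t.size, p0.size))
    for j in range(p0.size):
        dp = np.zeros_like(p0)
        dp[j] = h * p0[j]
        J[:, j] = (conc(p0 + dp) - conc(p0 - dp)) / (2 * dp[j])   # central differences
    return J

def d_criterion(times, sigma=0.1):
    J = predictions_jacobian(np.asarray(times, float))
    fim = J.T @ J / sigma**2                        # FO-style fixed-effects information
    return np.linalg.det(fim) ** (1 / J.shape[1])   # normalized D-criterion

print("design A:", d_criterion([0.5, 1, 2, 8]))
print("design B:", d_criterion([1, 2, 4, 6]))
```

The larger normalized determinant indicates the more informative schedule; the population designs discussed above add between-subject variance and residual-error terms to this matrix.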
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D
2006-01-01
Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified confidence interval width are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
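A minimal sketch of estimating the within-subject coefficient of variation from a one-way random-effects layout with simulated data; the simple moment-based estimator shown here stands in for the maximum likelihood and variance-stabilized interval procedures of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_reps = 30, 3
subject_means = rng.normal(100.0, 10.0, n_subjects)                          # between-subject variation
data = subject_means[:, None] + rng.normal(0.0, 5.0, (n_subjects, n_reps))   # within-subject error

# One-way random-effects components.
within_ms = data.var(axis=1, ddof=1).mean()        # pooled within-subject mean square
grand_mean = data.mean()

wscv = np.sqrt(within_ms) / grand_mean             # within-subject coefficient of variation
print(f"within-subject CV ~ {100 * wscv:.1f}%")
```

The trade-off discussed above is visible here: increasing n_reps sharpens the within-subject mean square, while increasing n_subjects mainly sharpens the grand mean, and the optimal allocation balances the two against recruitment and measurement costs.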
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The loss of accuracy in lineal extent estimates with increasing sampling interval varied across impact types, whereas the loss in frequency-of-occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than by the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
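The resampling-simulation idea can be sketched as follows, with a hypothetical 1-m-resolution census of impact segments resampled at increasing point-sampling intervals:

```python
import numpy as np

rng = np.random.default_rng(2)
trail_length = 10_000                                        # m, census resolution of 1 m
census = np.zeros(trail_length, dtype=bool)
for start in rng.integers(0, trail_length - 50, size=40):    # 40 impact occurrences
    census[start:start + rng.integers(5, 50)] = True         # 5-50 m of, e.g., exposed roots

true_extent = census.sum()                                            # lineal extent (m)
true_freq = np.count_nonzero(np.diff(census.astype(int)) == 1)        # number of occurrences

for interval in (20, 50, 100, 200, 500):
    pts = np.arange(0, trail_length, interval)               # systematic point sample
    hits = census[pts]
    est_extent = hits.mean() * trail_length                  # extent from proportion of sampled points
    est_freq = np.count_nonzero(np.diff(hits.astype(int)) == 1)   # occurrences seen as runs of hits
    print(interval, round(est_extent), est_freq, "| census:", true_extent, true_freq)
```

As in the study, the extent estimate degrades slowly with coarser intervals, whereas short impact occurrences fall between sample points and the frequency estimate degrades quickly.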
Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M
2013-01-01
Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Blood samples were collected in the supine position from 1226 subjects, aged 5-84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results.
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
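A minimal random-walk Metropolis sketch (much simpler than the DREAM sampler used in UCODE_2014) showing how a 95% Bayesian credible interval is read from the posterior sample:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(5.0, 1.0, size=20)               # observations; model: y ~ N(theta, 1)

def log_post(theta):                            # flat prior, so posterior ~ likelihood
    return -0.5 * np.sum((y - theta) ** 2)

theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.5)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                            # Metropolis accept step
    chain.append(theta)

chain = np.array(chain[5_000:])                 # discard burn-in
lo, hi = np.percentile(chain, [2.5, 97.5])
print(f"95% credible interval for theta: ({lo:.2f}, {hi:.2f})")
```

DREAM replaces the single random-walk chain with multiple interacting chains whose proposals adapt to the posterior shape, which is what makes the high-dimensional, multimodal problems mentioned above tractable.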
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
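A minimal sketch contrasting the two initial designs, midpoint versus random placement within the hypercube intervals, scored here with a maximin-distance criterion; an OLHS code would follow this initial design with the optimization step discussed above.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
n, d = 20, 3                                    # sample points, input dimensions

def lhs(n, d, midpoint):
    # One point per hypercube interval: stratum index plus either the interval
    # midpoint (0.5) or a random offset within the interval.
    u = 0.5 * np.ones((n, d)) if midpoint else rng.uniform(size=(n, d))
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + u) / n

random_lhs = lhs(n, d, midpoint=False)
midpoint_lhs = lhs(n, d, midpoint=True)

# Maximin space-filling criterion: a larger minimum pairwise distance is better.
print("min distance, random LHS  :", pdist(random_lhs).min())
print("min distance, midpoint LHS:", pdist(midpoint_lhs).min())
```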
Imaging system design for improved information capacity
NASA Technical Reports Server (NTRS)
Fales, C. L.; Huck, F. O.; Samms, R. W.
1984-01-01
Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.
Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran
2014-11-01
One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit its efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Assessment of cardiopulmonary stability and potential influence on the CPX-based method were assessed. CPX and determination of a physiological optimal A-V interval was successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was a wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability or impact of the implant procedure that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and long-term impact require further study. ©2014 Wiley Periodicals, Inc.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
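A minimal sketch of the quantities involved: the integral time scale is estimated from the autocorrelation of a simulated (AR(1)) velocity-like series and used in the standard approximation for the variance of a time-averaged estimate; the series parameters and exposure time are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n = 1.0, 10_000                      # sampling interval (s) and record length
phi = np.exp(-dt / 10.0)                 # AR(1) surrogate with a 10-s integral time scale
u = np.zeros(n)
for i in range(1, n):
    u[i] = phi * u[i - 1] + rng.normal(0, np.sqrt(1 - phi**2))

# Integral time scale: sum the autocorrelation up to its first zero crossing.
u0 = u - u.mean()
acf = np.correlate(u0, u0, mode="full")[n - 1:] / (u0 @ u0)
first_zero = np.argmax(acf < 0)
T_int = acf[:first_zero].sum() * dt
print(f"estimated integral time scale ~ {T_int:.1f} s")

# Variance of the mean over an exposure time T, classic 2*T_int/T approximation
# for an effectively uncorrelated (well-separated) sampled field.
T_exposure = 300.0
var_mean = u.var() * 2 * T_int / T_exposure
print(f"relative std of the {T_exposure:.0f}-s mean ~ {np.sqrt(var_mean):.3f}")
```

Inverting this relation, i.e. choosing the exposure time that brings the standard deviation of the mean below a target, is the sampling-strategy step described above.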
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
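A minimal sketch of the parametric (after a log transform) and non-parametric reference-interval estimates on a simulated skewed analyte; the robust method is omitted and the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
values = rng.lognormal(mean=1.0, sigma=0.4, size=240)   # skewed analyte results

# Non-parametric: central 95% by percentiles of the raw data.
np_lo, np_hi = np.percentile(values, [2.5, 97.5])

# Parametric after a log transform (attempting to reach a Gaussian scale).
logs = np.log(values)
p_lo, p_hi = np.exp(logs.mean() + np.array([-1.96, 1.96]) * logs.std(ddof=1))

print(f"non-parametric RI: ({np_lo:.2f}, {np_hi:.2f})")
print(f"parametric (log-scale) RI: ({p_lo:.2f}, {p_hi:.2f})")
```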
Liang, Xinshu; Gao, Yinan; Zhang, Xiaoying; Tian, Yongqiang; Zhang, Zhenxian; Gao, Lihong
2014-01-01
Inappropriate and excessive irrigation and fertilization have led to declines in crop yields and in water and fertilizer use efficiency in intensive vegetable production systems in China. For many vegetables, fertigation can be applied daily according to the actual water and nutrient requirements of crops. A greenhouse study was therefore conducted to investigate the effect of daily fertigation on the migration of water and salt in soil, and on root growth and fruit yield of cucumber. The treatments included conventional interval fertigation, optimal interval fertigation and optimal daily fertigation. Generally, although soil under the optimal interval fertigation treatment received much less fertilizer than soil under conventional interval fertigation, optimal interval fertigation did not statistically decrease the economic yield or fruit nutritional quality of cucumber when compared to conventional interval fertigation. In addition, optimal interval fertigation effectively avoided inorganic nitrogen accumulation in soil and significantly (P<0.05) increased the partial factor productivity of applied nitrogen by 88% and 209% in the early-spring and autumn-winter seasons, respectively, when compared to conventional interval fertigation. Although soils under the optimal interval fertigation and optimal daily fertigation treatments received the same amount of fertilizers, optimal daily fertigation maintained relatively stable water, electrical conductivity and mineral nitrogen levels in surface soils, promoted fine root (<1.5 mm diameter) growth of cucumber, and eventually increased cucumber economic yield by 6.2% and 8.3% and the partial factor productivity of applied nitrogen by 55% and 75% in the early-spring and autumn-winter seasons, respectively, when compared to optimal interval fertigation. These results suggest that optimal daily fertigation is a beneficial practice for improving crop yield and water and fertilizer use efficiency in solar greenhouses. PMID:24475204
A genetic algorithm-based framework for wavelength selection on sample categorization.
Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F
2017-08-01
In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to insert samples into proper classes. In the next step, selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the intervals previously selected aimed at maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed to categorize, and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratorial analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than the current methods relying on interval approaches, which tend to insert irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
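A minimal genetic-algorithm sketch of the wavelength-selection step, using leave-one-out 1-nearest-neighbour accuracy as the fitness on simulated two-class spectra; the population size, rates, and size penalty are illustrative, and the interval pre-selection stage of the framework is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_wl = 60, 200
X = rng.normal(size=(n_samples, n_wl))       # simulated spectra
y = np.repeat([0, 1], n_samples // 2)
X[y == 1, 40:45] += 1.5                      # a few informative wavelengths

def knn_accuracy(cols):                      # leave-one-out 1-NN accuracy on the selected columns
    Z = X[:, cols]
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.mean(y[d.argmin(axis=1)] == y)

def fitness(mask):
    k = mask.sum()
    return -1.0 if k == 0 else knn_accuracy(np.flatnonzero(mask)) - 0.001 * k   # small size penalty

pop = rng.random((30, n_wl)) < 0.05          # initial population of sparse wavelength masks
for _ in range(40):                          # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[scores.argsort()[-15:]]    # truncation selection of the fitter half
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(15, size=2)]
        child = np.where(rng.random(n_wl) < 0.5, a, b)    # uniform crossover
        child ^= rng.random(n_wl) < 0.01                  # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected wavelengths:", np.flatnonzero(best))
```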
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
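For the interval (box) uncertainty set, the robust counterpart of a single linear constraint with nonnegative variables reduces to tightening the nominal coefficients by their half-widths; a minimal sketch with illustrative data:

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  s.t.  a1*x1 + a2*x2 <= 10,  x >= 0,
# with interval uncertainty a1 in [1 +/- 0.2], a2 in [2 +/- 0.5].
a_nom = np.array([1.0, 2.0])
delta = np.array([0.2, 0.5])
c = np.array([-3.0, -2.0])                       # linprog minimizes, so negate the objective

nominal = linprog(c, A_ub=[a_nom], b_ub=[10.0], bounds=[(0, None)] * 2)
# Interval-set robust counterpart: with x >= 0 the worst case is a_nom + delta.
robust = linprog(c, A_ub=[a_nom + delta], b_ub=[10.0], bounds=[(0, None)] * 2)

print("nominal solution:", np.round(nominal.x, 3), "objective:", -nominal.fun)
print("robust  solution:", np.round(robust.x, 3), "objective:", -robust.fun)
```

The ellipsoidal and polyhedral sets discussed above lead to second-order-cone and additional linear terms, respectively, instead of this simple coefficient tightening.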
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
Investigation of modulation parameters in multiplexing gas chromatography.
Trapp, Oliver
2010-10-22
Combination of information technology and separation sciences opens a new avenue to achieve high sample throughputs and therefore is of great interest to bypass bottlenecks in catalyst screening of parallelized reactors or using multitier well plates in reaction optimization. Multiplexing gas chromatography utilizes pseudo-random injection sequences derived from Hadamard matrices to perform rapid sample injections which gives a convoluted chromatogram containing the information of a single sample or of several samples with similar analyte composition. The conventional chromatogram is obtained by application of the Hadamard transform using the known injection sequence or in case of several samples an averaged transformed chromatogram is obtained which can be used in a Gauss-Jordan deconvolution procedure to obtain all single chromatograms of the individual samples. The performance of such a system depends on the modulation precision and on the parameters, e.g. the sequence length and modulation interval. Here we demonstrate the effects of the sequence length and modulation interval on the deconvoluted chromatogram, peak shapes and peak integration for sequences between 9-bit (511 elements) and 13-bit (8191 elements) and modulation intervals Δt between 5 s and 500 ms using a mixture of five components. It could be demonstrated that even for high-speed modulation at time intervals of 500 ms the chromatographic information is very well preserved and that the separation efficiency can be improved by very narrow sample injections. Furthermore this study shows that the relative peak areas in multiplexed chromatograms do not deviate from conventionally recorded chromatograms. Copyright © 2010 Elsevier B.V. All rights reserved.
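A small simulation of the modulation idea: a pseudo-random injection sequence turns a single-sample chromatogram into a convoluted detector trace, which is then deconvoluted by inverting the circulant injection matrix; the 7-element sequence below is a toy stand-in for the 9-13-bit sequences and Hadamard-transform recovery used in the study.

```python
import numpy as np

# 7-element maximal-length (pseudo-random) injection sequence: 1 = inject, 0 = skip.
seq = np.array([1, 1, 1, 0, 1, 0, 0])
n = len(seq)

# Circulant injection matrix: row i is the sequence shifted by i modulation slots.
S = np.array([np.roll(seq, i) for i in range(n)])

x = np.array([0.0, 3.0, 1.0, 0.0, 0.0, 2.0, 0.0])   # "true" chromatogram, one value per slot
y = S @ x                                            # convoluted multiplexed detector trace

x_rec = np.linalg.solve(S, y)                        # deconvolution step
print(np.round(x_rec, 6))
```

Shorter modulation intervals correspond to finer slots in this picture, which is why the study can preserve, and even sharpen, the chromatographic information at 500-ms modulation.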
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach built on it is then proposed in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which ensures that the convergence condition of the space-partition approach is well satisfied. Furthermore, a new interpretation of the partition idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
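A minimal sketch of the space-partition estimate of the main effects: sample points are grouped into non-overlapping, equal-probability subsets of each input and the variance of the conditional means is taken, all from a single set of model runs; the test model is illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N, n_bins = 100_000, 50
x = rng.uniform(-1, 1, size=(N, 3))
y = x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * rng.normal(size=N)   # illustrative model

var_y = y.var()
for i in range(3):
    # Partition the samples into equal-probability, non-overlapping intervals of input i.
    bins = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(bins, x[:, i], side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.bincount(idx, minlength=n_bins)
    # Main effect: Var[E(Y | X_i)] / Var(Y), estimated from the bin means.
    s_main = np.sum(counts * (cond_means - y.mean()) ** 2) / (N * var_y)
    print(f"main effect of x{i + 1}: {s_main:.3f}")
```

Because the same N model evaluations are re-partitioned for every input, all main effects are obtained concurrently, which is the efficiency gain described above.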
van Gelder, Berry M; Meijer, Albert; Bracke, Frank A
2008-09-01
We compared the calculated optimal V-V interval derived from intracardiac electrograms (IEGM) with the optimized V-V interval determined by invasive measurement of LVdP/dt(MAX). Thirty-two patients with heart failure (six females, ages 68 +/- 7.8 years) had a CRT device implanted. After implantation of the atrial, right and a left ventricular lead, the optimal V-V interval was calculated using the QuickOpt formula (St. Jude Medical, Sylmar, CA, USA) applied to the respective IEGM recordings (V-V(IEGM)), and also determined by invasive measurement of LVdP/dt(MAX) (V-V(dP/dt)). The optimal V-V(IEGM) and V-V(dP/dt) intervals were 52.7 +/- 18 ms and 24.0 +/- 33 ms, respectively (P = 0.017), without correlation between the two. The baseline LVdP/dt(MAX) was 748 +/- 191 mmHg/s. The mean value of LVdP/dt(MAX) at invasive optimization was 947 +/- 198 mmHg/s, and at the calculated optimal V-V(IEGM) interval 920 +/- 191 mmHg/s (P < 0.0001). In spite of this significant difference, there was a good correlation between both methods (R = 0.991, P < 0.0001). However, a similarly good correlation existed between the maximum value of LVdP/dt(MAX) and LVdP/dt(MAX) at a fixed V-V interval of 0 ms (R = 0.993, P < 0.0001), or LVdP/dt(MAX) at a randomly selected V-V interval between 0 and +80 ms (R = 0.991, P < 0.0001). Optimizing the V-V interval with the IEGM method does not yield better hemodynamic results than simultaneous BiV pacing. Although a good correlation between LVdP/dt(MAX) determined with V-V(IEGM) and V-V(dP/dt) can be constructed, there is no correlation with the optimal settings of V-V interval in the individual patient.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A
2013-06-27
The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetitions for each trait was considered to be reached when all estimates of the parameters in question fell within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. Only the expansion capacity trait in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) required 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.
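A minimal sketch of the bootstrap logic: block values are resampled for each candidate number of repetitions and the resulting confidence-interval width for the trait mean is inspected; the simulated block values and the use of a simple mean are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
full_blocks = rng.normal(6.0, 0.8, size=7)       # trait value of one genotype in 7 blocks

n_boot = 5_000
for r in range(2, 8):                            # candidate number of repetitions
    boot_means = np.array([rng.choice(full_blocks, size=r, replace=True).mean()
                           for _ in range(n_boot)])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"{r} repetitions: 95% CI for the mean = ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")
```

The study applies the same resampling idea to residual variance and coefficient of variation rather than a simple mean, and accepts the smallest number of repetitions whose estimates stay within the reference confidence interval.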
2017-01-05
AFRL-AFOSR-JP-TR-2017-0002 — Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure. Author: Manabu...; grant FA2386...; distribution unlimited, public release. Abstract (truncated in the source record): This report for the project titled 'Advanced Computational Methods for Optimization of
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only imprecisely and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
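A minimal sketch of envelope-matched (exponentially weighted) random sampling: evolution-time increments are drawn with probability proportional to an assumed signal envelope exp(-t/T2); the grid size, number of retained samples, and T2 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
n_grid, n_samples = 256, 64            # Nyquist grid points and NUS samples to keep
t = np.arange(n_grid)                  # evolution time in units of the Nyquist interval
T2 = 80.0                              # assumed decay constant of the signal envelope (grid units)

weights = np.exp(-t / T2)              # envelope-matched sampling density
weights /= weights.sum()
schedule = np.sort(rng.choice(n_grid, size=n_samples, replace=False, p=weights))
print(schedule)
```

A beat-matched scheme would instead bias the weights using predicted chemical-shift frequencies; the comparison above suggests that the simpler envelope matching is usually the more robust choice.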
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
Single step optimization of manipulator maneuvers with variable structure control
NASA Technical Reports Server (NTRS)
Chen, N.; Dwyer, T. A. W., III
1987-01-01
One step ahead optimization has recently been proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. This paper shows that its sensitivity to model accuracy, stemming from the required inversion of the system dynamics, is alleviated by a fast variable structure control (VSC) correction acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.
Quantification of Uncertainty in the Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.
2017-12-01
Flood frequency analysis (FFA) is usually carried out for planning and designing water resources and hydraulic structures. Owing to variability in sample representation, selection of the distribution, and estimation of the distribution parameters, the estimation of flood quantiles is always uncertain. Hence, suitable approaches must be developed to quantify this uncertainty in the form of a prediction interval as an alternative to the deterministic approach. The framework developed in the present study to include uncertainty in the FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the recent extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified against standard bootstrap-based sampling approaches, and the proposed method was found to be more reliable than the bootstrap methods in modeling extreme floods.
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
Constraints of the optimization objective often cannot be met when predictive control is applied to an industrial production process, and the online predictive controller may then fail to find a feasible or globally optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, a nonlinear programming method is used to discuss the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for soft-constraint slack variables is given when the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed that adaptively regulates the optimization objective, automatically adjusts the infeasible interval range, expands the feasible region, and ensures the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
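A minimal sketch of the pulse-timing idea described above: instead of a fixed repetition period, each pulse interval is the mean period plus a random term, which realizes additive random sampling at every sensing point. The interval distribution, mean period, and jitter values below are assumptions for illustration, not parameters from the experiment.

    import numpy as np

    def additive_random_sampling_times(n_pulses, mean_interval, jitter, seed=0):
        """Generate pulse emission times for additive random sampling: each
        interval is the mean repetition period plus a term drawn uniformly
        from [-jitter, +jitter] (an illustrative distribution choice)."""
        rng = np.random.default_rng(seed)
        intervals = mean_interval + rng.uniform(-jitter, jitter, size=n_pulses)
        return np.cumsum(intervals)

    # For a 10 km fiber the round-trip time is ~0.1 ms, so every interval must
    # exceed that; randomizing the interval samples each sensing point at
    # unequal times, enabling sub-Nyquist recovery of sparse wideband tones.
    times = additive_random_sampling_times(5000, 1.2e-4, 2e-5)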
Heller, Melina; Vitali, Luciano; Oliveira, Marcone Augusto Leal; Costa, Ana Carolina O; Micke, Gustavo Amadeu
2011-07-13
The present study aimed to develop a methodology using capillary electrophoresis for the determination of sinapaldehyde, syringaldehyde, coniferaldehyde, and vanillin in whiskey samples. The main objective was to obtain a screening method to differentiate authentic samples from seized samples suspected of being false, using the phenolic aldehydes as chemical markers. The optimized background electrolyte was composed of 20 mmol L(-1) sodium tetraborate with 10% MeOH at pH 9.3. The study examined two kinds of sample stacking, using a long-end injection mode: normal sample stacking (NSM) and sample stacking with matrix removal (SWMR). In SWMR, the optimized injection time of the samples was 42 s (SWMR42); at this time, no matrix effects were observed. Values of r were >0.99 for both methods. The LOD and LOQ were better than 100 and 330 mg mL(-1) for NSM and better than 22 and 73 mg L(-1) for SWMR. The reliability of CE-UV for aldehyde analysis in real samples was compared statistically with an LC-MS/MS methodology, and no significant differences were found between the methodologies at the 95% confidence level.
Sell, Rebecca E; Sarno, Renee; Lawrence, Brenna; Castillo, Edward M; Fisher, Roger; Brainard, Criss; Dunford, James V; Davis, Daniel P
2010-07-01
The three-phase model of ventricular fibrillation (VF) arrest suggests a period of compressions to "prime" the heart prior to defibrillation attempts. In addition, post-shock compressions may increase the likelihood of return of spontaneous circulation (ROSC). The optimal intervals for shock delivery following cessation of compressions (pre-shock interval) and resumption of compressions following a shock (post-shock interval) remain unclear. To define optimal pre- and post-defibrillation compression pauses for out-of-hospital cardiac arrest (OOHCA). All patients suffering OOHCA from VF were identified over a 1-month period. Defibrillator data were abstracted and analyzed using the combination of ECG, impedance, and audio recording. Receiver-operator curve (ROC) analysis was used to define the optimal pre- and post-shock compression intervals. Multiple logistic regression analysis was used to quantify the relationship between these intervals and ROSC. Covariates included cumulative number of defibrillation attempts, intubation status, and administration of epinephrine in the immediate pre-shock compression cycle. Cluster adjustment was performed due to the possibility of multiple defibrillation attempts for each patient. A total of 36 patients with 96 defibrillation attempts were included. The ROC analysis identified an optimal pre-shock interval of <3s and an optimal post-shock interval of <6s. Increased likelihood of ROSC was observed with a pre-shock interval <3s (adjusted OR 6.7, 95% CI 2.0-22.3, p=0.002) and a post-shock interval of <6s (adjusted OR 10.7, 95% CI 2.8-41.4, p=0.001). Likelihood of ROSC was substantially increased with the optimization of both pre- and post-shock intervals (adjusted OR 13.1, 95% CI 3.4-49.9, p<0.001). Decreasing pre- and post-shock compression intervals increases the likelihood of ROSC in OOHCA from VF.
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rule, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual operating-rule model based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
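A minimal sketch of how a BMA combination of the three operating-rule models could yield a 90% release interval: the predictive distribution is treated as a weighted mixture of per-model Gaussians and sampled by Monte Carlo. The point predictions, weights, and standard deviations are hypothetical placeholders, and the study itself estimates the interval by MCMC rather than this simple direct sampling.

    import numpy as np

    def bma_release_interval(preds, weights, sigmas, n_draws=10000, seed=0):
        """Approximate a 90% release interval from a BMA mixture.
        preds: per-model point predictions for one decision period,
        weights: posterior model weights (sum to 1),
        sigmas: per-model predictive standard deviations (assumed Gaussian)."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(preds), size=n_draws, p=weights)
        draws = rng.normal(np.asarray(preds)[idx], np.asarray(sigmas)[idx])
        return np.percentile(draws, [5, 95])

    # three-member ensemble: piecewise linear regression, surface fitting, LS-SVM
    low, high = bma_release_interval([520.0, 540.0, 510.0],
                                     [0.2, 0.3, 0.5],
                                     [25.0, 30.0, 20.0])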
Di Molfetta, A; Santini, L; Forleo, G B; Minni, V; Mafhouz, K; Della Rocca, D G; Fresiello, L; Romeo, F; Ferrari, G
2012-01-01
In spite of the benefits of cardiac resynchronization therapy (CRT), 25-30% of patients are still non-responders. One possible reason could be non-optimal atrioventricular (AV) and interventricular (VV) interval settings. Our aim was to exploit a numerical model of the cardiovascular system for AV and VV interval optimization in CRT. A CRT-dedicated numerical model of the cardiovascular system was previously developed. Echocardiographic parameters, systemic aortic pressure and ECG were collected in 20 consecutive patients before and after CRT. Patient data were reproduced by the model, which was used to optimize the intervals and set them into the device at baseline and at follow-up. The optimal AV and VV intervals were chosen to optimize the simulated selected variable(s) on the basis of both echocardiographic and electrocardiographic parameters. Intervals were different for each patient and, in most cases, they changed at follow-up. The model reproduced the clinical data well, as verified with Bland-Altman analysis and t-tests (p > 0.05). Left ventricular remodeling was 38.7% and the increase in left ventricular ejection fraction was 11%, against the 15% and 6% reported in the literature, respectively. The developed numerical model could reproduce patient conditions at baseline and at follow-up, including the CRT effects. The model could be used to optimize AV and VV intervals at baseline and at follow-up, realizing a personalized and dynamic CRT. A patient-tailored CRT could improve patient outcomes in comparison to literature data.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
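A minimal sketch of a prediction-correction tracker in the spirit of the gradient trajectory tracking variant described above: the prediction keeps the optimality residual constant to first order, and the correction applies a few gradient steps on the newly sampled objective. The step size, number of correction steps, and the drifting-target example objective are illustrative assumptions, not the paper's setup.

    import numpy as np

    def gtt_track(grad, hess, mixed, x0, t0, h, n_steps, gamma=0.2, n_corr=3):
        """Prediction-correction tracking of x*(t) = argmin_x f(x, t), sampled
        every h seconds. grad(x,t), hess(x,t), mixed(x,t) return the gradient,
        the Hessian in x, and the time derivative of the gradient, respectively.
        Schematic gradient-correction variant, not the paper's implementation."""
        x, t = np.array(x0, float), t0
        traj = [x.copy()]
        for _ in range(n_steps):
            # prediction: keep the optimality residual constant to first order
            x = x - h * np.linalg.solve(hess(x, t), mixed(x, t))
            t = t + h
            # correction: a few gradient steps on the newly sampled objective
            for _ in range(n_corr):
                x = x - gamma * grad(x, t)
            traj.append(x.copy())
        return np.array(traj)

    # example: f(x,t) = 0.5*||x - c(t)||^2 with a drifting target c(t)
    c = lambda t: np.array([np.cos(t), np.sin(t)])
    grad = lambda x, t: x - c(t)
    hess = lambda x, t: np.eye(2)
    mixed = lambda x, t: -np.array([-np.sin(t), np.cos(t)])  # d/dt of grad
    path = gtt_track(grad, hess, mixed, [1.0, 0.0], 0.0, 0.1, 100)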
Poller, Wolfram C; Dreger, Henryk; Schwerg, Marius; Melzer, Christoph
2015-01-01
Optimization of the AV-interval (AVI) in DDD pacemakers improves cardiac hemodynamics and reduces pacemaker syndromes. Manual optimization is typically not performed in clinical routine. In the present study we analyze the prevalence of E/A wave fusion and A wave truncation under resting conditions in 160 patients with complete AV block (AVB) under the pre-programmed AVI. We manually optimized sub-optimal AVIs. We analyzed 160 pacemaker patients with complete AVB, both in sinus rhythm (AV-sense; n = 129) and under atrial pacing (AV-pace; n = 31). Using Doppler analyses of the transmitral inflow we classified the nominal AVI as: a) normal, b) too long (E/A wave fusion) or c) too short (A wave truncation). In patients with a sub-optimal AVI, we performed manual optimization according to the recommendations of the American Society of Echocardiography. All AVB patients with atrial pacing exhibited a normal transmitral inflow under the nominal AV-pace intervals (100%). In contrast, 25 AVB patients in sinus rhythm showed E/A wave fusion under the pre-programmed AV-sense intervals (19.4%; 95% confidence interval (CI): 12.6-26.2%). A wave truncations were not observed in any patient. All patients with a complete E/A wave fusion achieved a normal transmitral inflow after AV-sense interval reduction (mean optimized AVI: 79.4 ± 13.6 ms). Given the rate of 19.4% (CI 12.6-26.2%) of patients with an excessively long nominal AV-sense interval, automatic algorithms may prove useful in improving cardiac hemodynamics, especially in the subgroup of atrially triggered pacemaker patients with AV node diseases.
Predicting Maintenance Doses of Vancomycin for Hospitalized Patients Undergoing Hemodialysis.
El Nekidy, Wasim S; El-Masri, Maher M; Umstead, Greg S; Dehoorne-Smith, Michelle
2016-01-01
Methicillin-resistant Staphylococcus aureus is a leading cause of death in patients undergoing hemodialysis. However, controversy exists about the optimal dose of vancomycin that will yield the recommended pre-hemodialysis serum concentration of 15-20 mg/L. To develop a data-driven model to optimize the accuracy of maintenance dosing of vancomycin for patients undergoing hemodialysis. A prospective observational cohort study was performed with 164 observations obtained from a convenience sample of 63 patients undergoing hemodialysis. All vancomycin doses were given on the floor after completion of a hemodialysis session. Multivariate linear generalized estimating equation analysis was used to examine independent predictors of pre-hemodialysis serum vancomycin concentration. Pre-hemodialysis serum vancomycin concentration was independently associated with maintenance dose (B = 0.658, p < 0.001), baseline pre-hemodialysis serum concentration of the drug (B = 0.492, p < 0.001), and interdialytic interval (B = -2.133, p < 0.001). According to the best of 4 models that were developed, the maintenance dose of vancomycin required to achieve a pre-hemodialysis serum concentration of 15-20 mg/L, if the baseline serum concentration of the drug was also 15-20 mg/L, was 5.9 mg/kg with interdialytic interval of 48 h and 7.1 mg/kg with interdialytic interval of 72 h. However, if the baseline pre-hemodialysis serum concentration was 10-14.99 mg/L, the required dose increased to 9.2 mg/kg with an interdialytic interval of 48 h and 10.0 mg/kg with an interdialytic interval of 72 h. The maintenance dose of vancomycin varied according to baseline pre-hemodialysis serum concentration of the drug and interdialytic interval. The current practice of targeting a pre-hemodialysis concentration of 15-20 mg/L may be difficult to achieve for the majority of patients undergoing hemodialysis.
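The dosing recommendations quoted above can be collected into a small lookup for illustration. This sketch only re-expresses the doses reported in the abstract for the two baseline bands and two interdialytic intervals; it is not clinical guidance, and the function name and input handling are assumptions.

    def maintenance_dose_mg_per_kg(baseline_mgL, interval_h):
        """Maintenance vancomycin dose (mg/kg) targeting a pre-hemodialysis level
        of 15-20 mg/L, using the doses reported in the abstract. Baselines outside
        the two reported bands, or intervals other than 48/72 h, are not covered."""
        table = {
            ("15-20", 48): 5.9, ("15-20", 72): 7.1,
            ("10-14.99", 48): 9.2, ("10-14.99", 72): 10.0,
        }
        band = "15-20" if 15 <= baseline_mgL <= 20 else (
               "10-14.99" if 10 <= baseline_mgL < 15 else None)
        if band is None or interval_h not in (48, 72):
            raise ValueError("baseline/interval outside the reported ranges")
        return table[(band, interval_h)]

    # e.g. baseline 12 mg/L with a 72 h interdialytic interval -> 10.0 mg/kg
    dose = maintenance_dose_mg_per_kg(12.0, 72)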
Optimized angiotensin-converting enzyme activity assay for the accurate diagnosis of sarcoidosis.
Csongrádi, Alexandra; Enyedi, Attila; Takács, István; Végh, Tamás; Mányiné, Ivetta S; Pólik, Zsófia; Altorjay, István Tibor; Balla, József; Balla, György; Édes, István; Kappelmayer, János; Tóth, Attila; Papp, Zoltán; Fagyas, Miklós
2018-06-27
Serum angiotensin-converting enzyme (ACE) activity determination can aid the early diagnosis of sarcoidosis. We aimed to optimize a fluorescent kinetic assay for ACE activity by screening the confounding effects of endogenous ACE inhibitors and interfering factors. Genotype-dependent and genotype-independent reference values of ACE activity were established, and their diagnostic accuracies were validated in a clinical study. The internally quenched fluorescent substrate Abz-FRK(Dnp)P-OH was used for ACE-activity measurements. A total of 201 healthy individuals and 59 presumably sarcoidotic patients were enrolled in this study. ACE activity and the insertion/deletion (I/D) genotype of the ACE gene were determined. Here we report that serum samples should be diluted at least 35-fold to eliminate the endogenous inhibitory effect of albumin. No significant interference was detected up to a triglyceride concentration of 16 mM, a hemoglobin concentration of 0.71 g/L, and a bilirubin concentration of 150 μM. Genotype-dependent reference intervals were established as 3.76-11.25 U/L, 5.22-11.59 U/L, and 7.19-14.84 U/L for the II, ID and DD genotypes, respectively. The I/D genotype-independent reference interval was established as 4.85-13.79 U/L. An ACE activity value was considered positive for sarcoidosis when it exceeded the upper limit of the reference interval. The optimized assay with genotype-dependent reference ranges resulted in 42.5% sensitivity, 100% specificity, 100% positive predictive value and 32.4% negative predictive value in the clinical study, whereas the genotype-independent reference range proved to have inferior diagnostic efficiency. An optimized fluorescent kinetic assay of serum ACE activity combined with ACE I/D genotype determination is an alternative to invasive biopsy for confirming the diagnosis of sarcoidosis in a significant percentage of patients.
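The decision rule stated above (positive when activity exceeds the upper reference limit, with genotype-dependent limits) can be written as a one-line check. The limits are taken directly from the abstract; the function name and the fallback to the genotype-independent interval are assumptions made for illustration.

    def ace_flag(activity_UL, genotype=None):
        """Flag an ACE activity (U/L) as suggestive of sarcoidosis when it exceeds
        the upper limit of the reference interval reported in the abstract.
        genotype is 'II', 'ID' or 'DD'; None falls back to the genotype-independent
        interval (4.85-13.79 U/L)."""
        upper = {"II": 11.25, "ID": 11.59, "DD": 14.84}.get(genotype, 13.79)
        return activity_UL > upper

    print(ace_flag(13.0, "ID"))  # True: above the ID upper limit of 11.59 U/L
    print(ace_flag(13.0, "DD"))  # False: within the DD reference interval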
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing abilities of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal to noise than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when assuming that the guide is dominated by multiple scattering events.
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
Determination of the optimal atrioventricular interval in sick sinus syndrome during DDD pacing.
Kato, Masaya; Dote, Keigo; Sasaki, Shota; Goto, Kenji; Takemoto, Hiroaki; Habara, Seiji; Hasegawa, Daiji; Matsuda, Osamu
2005-09-01
Although the AAI pacing mode has been shown to be electromechanically superior to the DDD pacing mode in sick sinus syndrome (SSS), there is evidence suggesting that during AAI pacing the presence of natural ventricular activation pattern is not enough for hemodynamic benefit to occur. Myocardial performance index (MPI) is a simply measurable Doppler-derived index of combined systolic and diastolic myocardial performance. The aim of this study was to investigate whether AAI pacing mode is electromechanically superior to the DDD mode in patients with SSS by using Doppler-derived MPI. Thirty-nine SSS patients with dual-chamber pacing devices were evaluated by using Doppler echocardiography in AAI mode and DDD mode. The optimal atrioventricular (AV) interval in DDD mode was determined and atrial stimulus-R interval was measured in AAI mode. The ratio of the atrial stimulus-R interval to the optimal AV interval was defined as relative AV interval (rAVI) and the ratio of MPI in AAI mode to that in DDD mode was defined as relative MPI (rMPI). The rMPI was significantly correlated with atrial stimulus-R interval and rAVI (r = 0.57, P = 0.0002, and r = 0.67, P < 0.0001, respectively). A cutoff point of 1.73 for rAVI provided optimum sensitivity and specificity for rMPI >1 based on the receiver operator curves. Even though the intrinsic AV conduction is moderately prolonged, some SSS patients with dual-chamber pacing devices benefit from the ventricular pacing with optimal AV interval. MPI is useful to determine the optimal pacing mode in acute experiment.
QT-RR relationships and suitable QT correction formulas for halothane-anesthetized dogs.
Tabo, Mitsuyasu; Nakamura, Mikiko; Kimura, Kazuya; Ito, Shigeo
2006-10-01
Several QT correction (QTc) formulas have been used for assessing the QT liability of drugs. However, they are known to under- and over-correct the QT interval and tend to be specific to species and experimental conditions. The purpose of this study was to determine a suitable formula for halothane-anesthetized dogs highly sensitive to drug-induced QT interval prolongation. Twenty dogs were anesthetized with 1.5% halothane and the relationship between the QT and RR intervals was obtained by changing the heart rate under atrial pacing conditions. The QT interval was corrected for the RR interval by applying 4 published formulas (Bazett, Fridericia, Van de Water, and Matsunaga); Fridericia's formula (QTcF = QT/RR^0.33) showed the least slope and lowest R^2 value for the linear regression of QTc intervals against RR intervals, indicating that it dissociated changes in heart rate most effectively. An optimized formula (QTcX = QT/RR^0.3879), defined by analysis of covariance, represents a correction algorithm superior to Fridericia's formula. For both Fridericia's and the optimized formula, QT-prolonging drugs (d,l-sotalol, astemizole) showed QTc interval prolongation. A non-QT-prolonging drug (d,l-propranolol) failed to prolong the QTc interval. In addition, drug-induced changes in QTcF and QTcX intervals were highly correlated with those of the QT interval paced at a cycle length of 500 msec. These findings suggest that Fridericia's formula and the optimized formula, the latter slightly better, are suitable for correcting the QT interval in halothane-anesthetized dogs and help to evaluate the potential QT prolongation of drugs with high accuracy.
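The two correction formulas can be applied directly; a small sketch with the exponents taken from the abstract (0.33 for Fridericia, 0.3879 for the optimized formula) and illustrative QT/RR values:

    def qtc(qt_ms, rr_s, exponent=0.3879):
        """Rate-corrected QT interval: QTc = QT / RR**exponent, with RR in seconds.
        exponent=0.33 gives Fridericia's QTcF; 0.3879 is the covariance-optimized
        exponent reported for halothane-anesthetized dogs."""
        return qt_ms / (rr_s ** exponent)

    # a QT of 240 ms at an RR of 0.5 s (120 beats per minute)
    print(qtc(240.0, 0.5, 0.33))   # Fridericia QTcF
    print(qtc(240.0, 0.5))         # optimized QTcX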
Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.
Dunlap, Aimee S; Stephens, David W
2012-02-01
Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.
Eliciting interval beliefs: An experimental study
Peeters, Ronald; Wolk, Leonard
2017-01-01
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), whereby the available samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
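A minimal sketch of the chaotic-mapping step described above: a chaotic sequence is generated and rescaled onto the bounds of the optimization variables to seed particle positions. The choice of the logistic map, its parameter, and the seed are assumptions; the abstract does not state which chaotic map is used.

    import numpy as np

    def chaos_initial_positions(n_particles, n_dims, lower, upper, mu=4.0, z0=0.37):
        """Seed particle positions by iterating the logistic map
        z_{k+1} = mu * z_k * (1 - z_k) and mapping the chaotic variables onto
        the optimization interval [lower, upper]. z0 should avoid the fixed or
        periodic points 0, 0.25, 0.5, 0.75 and 1 of the map."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        z, seq = z0, []
        for _ in range(n_particles * n_dims):
            z = mu * z * (1.0 - z)
            seq.append(z)
        chaotic = np.array(seq).reshape(n_particles, n_dims)
        return lower + chaotic * (upper - lower)

    # e.g. 30 particles over a 3-dimensional state interval [-1, 1]^3
    init = chaos_initial_positions(30, 3, [-1, -1, -1], [1, 1, 1])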
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
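For context, the Keplerian RV model that RadVel fits can be written down in a few lines; the sketch below is an independent illustration of that model (solving Kepler's equation by Newton iteration), not RadVel's own API, and the orbital parameter values are arbitrary.

    import numpy as np

    def keplerian_rv(t, period, tp, ecc, omega, k):
        """Radial velocity of a single Keplerian orbit at times t.
        period, tp: orbital period and time of periastron; ecc: eccentricity;
        omega: argument of periastron [rad]; k: RV semi-amplitude."""
        mean_anom = 2.0 * np.pi * (np.asarray(t, float) - tp) / period
        ecc_anom = mean_anom.copy()
        for _ in range(50):  # Newton iterations for Kepler's equation E - e sinE = M
            ecc_anom -= (ecc_anom - ecc * np.sin(ecc_anom) - mean_anom) / (1.0 - ecc * np.cos(ecc_anom))
        true_anom = 2.0 * np.arctan2(np.sqrt(1 + ecc) * np.sin(ecc_anom / 2),
                                     np.sqrt(1 - ecc) * np.cos(ecc_anom / 2))
        return k * (np.cos(true_anom + omega) + ecc * np.cos(omega))

    t = np.linspace(0, 30, 300)
    rv = keplerian_rv(t, period=10.0, tp=2.0, ecc=0.2, omega=0.5, k=5.0)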
Optimization of Thixoforging Parameters for C70S6 Steel Connecting Rods
NASA Astrophysics Data System (ADS)
Özkara, İsa Metin; Baydoğan, Murat
2016-11-01
A microalloyed steel, C70S6, with a solidification interval of 1390-1479 °C, was thixoforged in the semisolid state in a closed die at temperatures in the range 1400-1475 °C to form a 1/7 scaled-down model of a passenger vehicle connecting rod. Die design and an optimized thixoforging temperature eliminated the excessive flash and other problems during forging. Tension test samples from connecting rods thixoforged at the optimum temperature of 1440 °C exhibited nearly the same hardness, yield strength, and ultimate tensile strength as conventional hot forged samples but ductility decreased by about 45% due to grain boundary ferrite network formed during cooling from the thixoforging temperature. Thus, C70S6-grade steel can be thixoforged at 1440 °C to form flash-free connecting rods. This conclusion was also validated using FEA analysis.
Santana, Victor M.; Alday, Josu G.; Lee, HyoHyeMi; Allen, Katherine A.; Marrs, Rob H.
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter with time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found, with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions. PMID:27880840
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
Acierno, Mark J; Schnellbacher, Rodney; Tully, Thomas N
2012-12-01
Although abnormalities in blood glucose concentrations in avian species are not as common as they are in mammals, the inability to provide point-of-care glucose measurement likely results in underreporting and missed treatment opportunities. A veterinary glucometer that uses different optimization codes for specific groups of animals has been produced. To obtain data for a psittacine bird-specific optimization code, as well as to calculate agreement between the veterinary glucometer, a standard human glucometer, and a laboratory analyzer, blood samples were obtained from 25 Hispaniolan Amazon parrots (Amazona ventralis) in a 2-phase study. In the initial phase, blood samples were obtained from 20 parrots twice at a 2-week interval. For each sample, the packed cell volume was determined, and the blood glucose concentration was measured by the veterinary glucometer. The rest of each sample was placed into a lithium heparin microtainer tube and centrifuged, and plasma was removed and frozen at -30 degrees C. Within 5 days, tubes were thawed, and blood glucose concentrations were measured with a laboratory analyzer. The data from both procedures were used to develop a psittacine bird-specific code. For the second phase of the study, the same procedure was repeated twice at a 2-week interval in 25 birds to determine agreement between the veterinary glucometer, a standard human glucometer, and a laboratory analyzer. Neither glucometer was in good agreement with the laboratory analyzer (veterinary glucometer bias, 9.0; limits of agreement, -38.1 to 56.2; standard glucometer bias, 69.4; limits of agreement, -17.8 to 156.7). Based on these results, the use of handheld glucometers in the diagnostic testing of Hispaniolan Amazon parrots and other psittacine birds cannot be recommended.
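The bias and limits-of-agreement figures quoted above come from a Bland-Altman comparison of paired device and laboratory values; a minimal sketch of that calculation follows (the example glucose values are invented for illustration, not the study data).

    import numpy as np

    def bland_altman(device, reference):
        """Bland-Altman bias and 95% limits of agreement between a point-of-care
        device and a laboratory analyzer (paired measurements)."""
        device, reference = np.asarray(device, float), np.asarray(reference, float)
        diff = device - reference
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    # illustrative paired glucose values (mg/dL), not the study data
    bias, low, high = bland_altman([250, 262, 241], [245, 250, 238])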
Relationship between heart rate and quiescent interval of the cardiac cycle in children using MRI.
Zhang, Wei; Bogale, Saivivek; Golriz, Farahnaz; Krishnamurthy, Rajesh
2017-11-01
Imaging the heart in children comes with the challenge of constant cardiac motion. A prospective electrocardiography-triggered CT scan allows for scanning during a predetermined phase of the cardiac cycle with least motion. This technique requires knowing the optimal quiescent intervals of cardiac cycles in a pediatric population. To evaluate high-temporal-resolution cine MRI of the heart in children to determine the relationship of heart rate to the optimal quiescent interval within the cardiac cycle. We included a total of 225 consecutive patients ages 0-18 years who had high-temporal-resolution cine steady-state free-precession sequence performed as part of a magnetic resonance imaging (MRI) or magnetic resonance angiography study of the heart. We determined the location and duration of the quiescent interval in systole and diastole for heart rates ranging 40-178 beats per minute (bpm). We performed the Wilcoxon signed rank test to compare the duration of quiescent interval in systole and diastole for each heart rate group. The duration of the quiescent interval at heart rates <80 bpm and >90 bpm was significantly longer in diastole and systole, respectively (P<.0001 for all ranges, except for 90-99 bpm [P=.02]). For heart rates 80-89 bpm, diastolic interval was longer than systolic interval, but the difference was not statistically significant (P=.06). We created a chart depicting optimal quiescent intervals across a range of heart rates that could be applied for prospective electrocardiography-triggered CT imaging of the heart. The optimal quiescent interval at heart rates <80 bpm is in diastole and at heart rates ≥90 bpm is in systole. The period of quiescence at heart rates 80-89 bpm is uniformly short in systole and diastole.
Evaluating the efficiency of environmental monitoring programs
Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina
2014-01-01
Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
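A minimal sketch of the bootstrapping approach mentioned for the forest-inventory example: resample plot-level values with replacement to obtain a percentile confidence interval for the mean, and shrink the input sample to explore how sampling intensity widens the interval. The plot values, resample count, and confidence level below are illustrative assumptions.

    import numpy as np

    def bootstrap_mean_ci(values, n_boot=5000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for the mean of plot-level
        measurements; passing fewer `values` simulates a reduced sampling intensity."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values, float)
        means = [rng.choice(values, size=values.size, replace=True).mean()
                 for _ in range(n_boot)]
        return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    # illustrative plot-level biomass values (Mg/ha), not Hubbard Brook data
    ci = bootstrap_mean_ci([182.0, 210.5, 195.3, 201.1, 188.7, 176.9])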
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving usual definition of Hamiltonian H as the scalar product of rates and generalized momenta we investigate two basic classes of discrete optimal control processes governed by the difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed with particular attention paid to models nonlinear in the time interval θ.
Liu, Datong; Peng, Yu; Peng, Xiyuan
2018-01-01
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide a representation of uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially adaptable to anomaly detection for sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold of the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which can depict the trade-off between the PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, and the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
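A sketch of the kind of trade-off the ROC-PI and modified Youden index capture: each candidate CP is scored by rewarding coverage and penalizing normalized interval width, and the best-scoring CP is selected. The scoring function, the simple grid search (the paper uses simulated annealing), and the candidate values are illustrative assumptions rather than the paper's exact index.

    import numpy as np

    def youden_like_score(width_norm, coverage):
        """A Youden-style trade-off score for a prediction interval: reward high
        coverage, penalize wide intervals (illustrative stand-in for the paper's
        modified Youden index)."""
        return coverage - width_norm

    def pick_coverage_probability(candidate_cps, widths, coverages):
        """Return the candidate CP maximizing the trade-off score; `widths` are
        normalized interval widths observed at each CP on validation data."""
        scores = [youden_like_score(w, c) for w, c in zip(widths, coverages)]
        return candidate_cps[int(np.argmax(scores))]

    cp = pick_coverage_probability([0.80, 0.90, 0.95, 0.99],
                                   [0.34, 0.40, 0.55, 0.80],
                                   [0.78, 0.90, 0.95, 0.99])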
Harte, Philip T.
2017-01-01
A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling capture primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which was based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun
2015-09-01
This work was undertaken to establish a quantitative analysis model which can rapidly determine the linalool and linalyl acetate content of Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near-infrared (NIR) absorption spectroscopy. Analysis of the absorption peaks of all samples showed that the 7100~4500 cm(-1) spectral interval carries abundant chemical information on lavender essential oil with relatively low interference from random noise, so the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration samples and 52 validation samples. Gas chromatography mass spectrometry (GC-MS) was used as the reference method to determine the content of linalool and linalyl acetate in lavender essential oil, and the data matrix was built from the GC-MS values of the two compounds combined with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and compare their filtering effects; for the quantitative models of linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was therefore the optimal pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points that are unrelated to the determined components or show nonlinear correlation, finally yielding a dataset of 8 spectral intervals comprising 160 wavelength points. The OSC-FiPLS-optimized data were combined with partial least squares (PLS) to establish a rapid quantitative analysis model for the content of linalool and linalyl acetate in Xinjiang lavender essential oil; the model used 8 latent variables for both components. The performance of the model was evaluated by the root mean square error of cross-validation (RMSECV) and the root mean square error of prediction (RMSEP). In the model, RMSECV values for linalool and linalyl acetate were 0.170 and 0.416, respectively, and RMSEP values were 0.188 and 0.364. The results indicated that, with the raw data pretreated by OSC and FiPLS, the NIR-PLS quantitative analysis model has good robustness and high measurement precision, can quickly determine the content of linalool and linalyl acetate in lavender essential oil, and has favorable prediction ability. The study also provides an effective new method for rapid quantitative analysis of the major components of Xinjiang lavender essential oil.
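A minimal sketch of the core calibration step described above: PLS regression of component concentrations on NIR spectra with a cross-validated error estimate. The OSC and FiPLS pre-processing steps from the paper are not reproduced, the synthetic spectra are placeholders, and scikit-learn's PLSRegression is used only as a generic PLS implementation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def fit_pls(X_nir, y_conc, n_components=8):
        """Fit a PLS calibration of a component concentration from NIR spectra and
        report RMSECV from 10-fold cross-validation (generic sketch only)."""
        pls = PLSRegression(n_components=n_components)
        y_cv = cross_val_predict(pls, X_nir, y_conc, cv=10)
        rmsecv = np.sqrt(np.mean((np.asarray(y_conc) - y_cv.ravel()) ** 2))
        return pls.fit(X_nir, y_conc), rmsecv

    # illustrative call with synthetic "spectra" (160 wavelength points, 105 samples)
    X = np.random.default_rng(0).normal(size=(105, 160))
    y = X[:, :8].sum(axis=1) + 0.1 * np.random.default_rng(1).normal(size=105)
    model, rmsecv = fit_pls(X, y)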
Sui, Yuanyuan; Ou, Yang; Yan, Baixing; Xu, Xiaohong; Rousseau, Alain N.; Zhang, Yu
2016-01-01
Micro-basin tillage is a soil and water conservation practice that requires building individual earth blocks along furrows. In this study, plot experiments were conducted between 2012 and 2013 to assess the efficiency of micro-basin tillage on sloping croplands (5° and 7°). The conceptual, optimal block interval model was used to design micro-basins which are meant to capture the maximum amount of water per unit area. Results indicated that, compared to up-down slope tillage, micro-basin tillage could increase soil water content and maize yield by about 45% and 17%, and reduce runoff, sediment and nutrient loads by about 63%, 96% and 86%, respectively. Meanwhile, micro-basin tillage could reduce peak runoff rates and delay the initial runoff-yielding time. In addition, micro-basin tillage with the optimal block interval proved to be the best among all treatments with different intervals. Compared with treatments of other block intervals, the optimal block interval treatments increased soil moisture by around 10% and reduced the runoff rate by around 15%. In general, micro-basin tillage with the optimal block interval represents an effective soil and water conservation practice for sloping farmland of the black soil region. PMID:27031339
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Stabilization for sampled-data neural-network-based control systems.
Zhu, Xun-Lin; Wang, Youyi
2011-02-01
This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.
Manual control models of industrial management
NASA Technical Reports Server (NTRS)
Crossman, E. R. F. W.
1972-01-01
The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and to minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by time-shared human effort. A modular structure incorporating certain new types of functional element has been developed, and it forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast-time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.
Pashmakova, Medora B; Piccione, Julie; Bishop, Micah A; Nelson, Whitney R; Lawhon, Sara D
2017-05-01
OBJECTIVE To evaluate the agreement between results of microscopic examination and bacterial culture of bile samples from dogs and cats with hepatobiliary disease for detection of bactibilia. DESIGN Cross-sectional study. ANIMALS 31 dogs and 21 cats with hepatobiliary disease for which microscopic examination and subsequent bacterial culture of bile samples were performed from 2004 through 2014. PROCEDURES Electronic medical records of included dogs and cats were reviewed to extract data regarding diagnosis, antimicrobials administered, and results of microscopic examination and bacterial culture of bile samples. Agreement between these 2 diagnostic tests was assessed by calculation of the Cohen κ value. RESULTS 17 (33%) dogs and cats had bactibilia identified by microscopic examination of bile samples, and 11 (21%) had bactibilia identified via bacterial culture. Agreement between these 2 tests was substantial (percentage agreement [positive and negative results], 85%; κ = 0.62; 95% confidence interval, 0.38 to 0.89) and improved to almost perfect when calculated for only animals that received no antimicrobials within 24 hours prior to sample collection (percentage agreement, 94%; κ = 0.84; 95% confidence interval, 0.61 to 1.00). CONCLUSIONS AND CLINICAL RELEVANCE Results indicated that agreement between microscopic examination and bacterial culture of bile samples for detection of bactibilia is optimized when dogs and cats are not receiving antimicrobials at the time of sample collection. Concurrent bacterial culture and microscopic examination of bile samples are recommended for all cats and dogs evaluated for hepatobiliary disease.
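Not part of the study: a minimal sketch of the agreement statistics it reports (percentage agreement and Cohen's κ) for paired binary test results, using made-up data in place of the microscopy and culture records.

```python
# Hypothetical sketch: percentage agreement and Cohen's kappa for paired binary results
# (1 = bactibilia detected, 0 = not detected); values below are illustrative only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

microscopy = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0])
culture    = np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0])

percent_agreement = np.mean(microscopy == culture) * 100
kappa = cohen_kappa_score(microscopy, culture)
print(f"agreement = {percent_agreement:.0f}%, kappa = {kappa:.2f}")
```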
Extended Task Space Control for Robotic Manipulators
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Long, Mark K. (Inventor)
1996-01-01
The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities and positions of plural behaviors to be exhibited by the robot simultaneously, computing actuator accelerations of the joint actuators for the current sampling interval from both the behavior forces, accelerations, velocities and positions of the current sampling interval and the actuator velocities and positions of the previous sampling interval, computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval, and, finally, controlling the actuators in accordance with the actuator accelerations, velocities and positions of the current sampling interval. The actuator accelerations, velocities and positions of the current sampling interval are stored for use during the next sampling interval.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling-based approaches can control upload frequency directly; however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason for the high performance of the proposed approach.
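This is not the ASIC algorithm itself; it is a toy illustration of the underlying idea of value-dependent sampling probabilities, assuming a synthetic sensed stream and a fixed upload budget: values in rare (more informative) intervals are kept with higher probability so the uploaded samples are spread more uniformly across value intervals.

```python
# Toy sketch (not the paper's algorithms): keep-probability inversely proportional to the
# estimated frequency of each sensed value's interval, under an upload budget.
import numpy as np

rng = np.random.default_rng(1)
sensed = rng.normal(loc=75, scale=5, size=5000)   # stand-in physiological stream
upload_budget = 500                               # maximum number of samples to upload

counts, edges = np.histogram(sensed, bins=20)
bin_idx = np.clip(np.digitize(sensed, edges) - 1, 0, len(counts) - 1)
weights = 1.0 / counts[bin_idx]                   # rare intervals -> larger weight
probs = weights / weights.sum() * upload_budget
keep = rng.random(len(sensed)) < np.clip(probs, 0, 1)
print(f"kept {keep.sum()} of {len(sensed)} samples")
```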
Optimal iodine staining of cardiac tissue for X-ray computed tomography.
Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui
2014-01-01
X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high density contrast agent is often required to obtain optimal contrast. The contrast agent, iodine potassium iodide (I2KI), has been used in several biological studies to augment the use of XCT scanning. Recently, I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which has been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
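The abstract reports an empirical exponential relation between tissue thickness and required staining time but not its coefficients; the sketch below fits an assumed form t = a·exp(b·x) to made-up (thickness, time) pairs with SciPy's curve_fit, purely to illustrate how such a relation could be calibrated.

```python
# Hypothetical sketch: fit t = a * exp(b * x) to illustrative staining data
# (not the paper's measurements or fitted coefficients).
import numpy as np
from scipy.optimize import curve_fit

thickness_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
stain_hours  = np.array([6.0, 9.0, 14.0, 21.0, 32.0, 48.0])   # made-up values

def model(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(model, thickness_mm, stain_hours, p0=(5.0, 0.3))
print(f"t(x) ~ {a:.2f} * exp({b:.2f} * x); predicted time for 3.5 mm: {model(3.5, a, b):.1f} h")
```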
Optimization of thermal processing of canned mussels.
Ansorena, M R; Salvadori, V O
2011-10-01
The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires knowledge of the thermal history at the slowest heating point. In general, this point does not coincide with the geometrical center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for the prediction of the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were obtained experimentally, and prediction equations were fitted to account for other can dimensions and sampling intervals. This model was coupled with an optimization routine in order to search for retort temperature profiles that maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in the surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its utilization in the industry.
Fourier crosstalk analysis of multislice and cone-beam helical CT
NASA Astrophysics Data System (ADS)
La Riviere, Patrick J.
2004-05-01
Multi-slice helical CT scanners allow for much faster scanning and better x-ray utilization than do their single-slice predecessors, but they engender considerably more complicated data sampling patterns due to the interlacing of the samples from different rows as the patient is translated. Characterizing and optimizing this sampling is challenging because the cone-beam geometry of such scanners means that the projections measured by each detector row are at least slightly oblique, making it difficult to apply standard multidimensional sampling analyses. In this study, we seek to apply a more general framework for analyzing sampled imaging systems known as Fourier crosstalk analysis. Our purpose in this preliminary work is to compare the information content of the data acquired in three different scanner geometries and operating conditions with ostensibly equivalent volume coverage and average longitudinal sampling interval: a single-slice scanner operating at pitch 1, a four-slice scanner operating at pitch 3 and a 15-slice scanner operating at pitch 15. We find that moving from a single-slice to a multi-slice geometry introduces longitudinal crosstalk characteristic of the longitudinal sampling interval of each individual detector row, and not of the overall interlaced sampling pattern. This is attributed to data inconsistencies caused by the obliqueness of the projections in a multi-slice/cone-beam configuration. However, these preliminary results suggest that the significance of this additional crosstalk actually decreases as the number of detector rows increases.
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
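Not from the study: a sketch of how an adjusted odds ratio with a 95% confidence interval of the kind reported above can be obtained from a multivariate logistic regression in statsmodels; the data frame, covariates and coefficients below are synthetic.

```python
# Hypothetical sketch: logistic regression giving an odds ratio and 95% CI for the
# preparation-to-colonoscopy time interval (synthetic data, not the study's records).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 165
df = pd.DataFrame({
    "interval_hours": rng.uniform(3, 10, n),
    "age": rng.normal(60, 10, n),
})
logit_p = -2.0 + 0.6 * (df["interval_hours"] - 6) + 0.01 * (df["age"] - 60)
df["poor_prep"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["interval_hours", "age"]])
fit = sm.Logit(df["poor_prep"], X).fit(disp=False)
or_ci = np.exp(fit.conf_int().loc["interval_hours"])
print(f"OR per hour = {np.exp(fit.params['interval_hours']):.2f}, "
      f"95% CI {or_ci.iloc[0]:.2f} to {or_ci.iloc[1]:.2f}")
```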
Symbol interval optimization for molecular communication with drift.
Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung
2014-09-01
In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest an ISI-free system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared for the one-symbol ISI and ISI-free systems.
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.
Gezer, Cenk; Ekin, Atalay; Golbasi, Ceren; Kocahakimoglu, Ceysu; Bozkurt, Umit; Dogan, Askin; Solmaz, Ulaş; Golbasi, Hakan; Taner, Cuneyt Eftal
2017-04-01
To determine whether urea and creatinine measurements in vaginal fluid could be used to diagnose preterm premature rupture of membranes (PPROM) and predict the delivery interval after PPROM. A prospective study was conducted with 100 pregnant women with PPROM and 100 healthy pregnant women between 24 + 0 and 36 + 6 gestational weeks. All patients underwent sampling for urea and creatinine concentrations in vaginal fluid at the time of admission. Receiver operating characteristic curve analysis was used to determine the cutoff values for the presence of PPROM and delivery within 48 h after PPROM. In multivariate logistic regression analysis, vaginal fluid urea and creatinine levels were found to be significant predictors of PPROM (p < 0.001 and p < 0.001, respectively) and delivery within 48 h after PPROM (p = 0.012 and p = 0.017, respectively). The optimal cutoff values for the diagnosis of PPROM were >6.7 mg/dl for urea and >0.12 mg/dl for creatinine. The optimal cutoff values for the detection of delivery within 48 h were >19.4 mg/dl for urea and >0.23 mg/dl for creatinine. Measurement of urea and creatinine levels in vaginal fluid is a rapid and reliable test for diagnosing PPROM and also for predicting the delivery interval after PPROM.
Detecting independent and recurrent copy number aberrations using interval graphs.
Wu, Hsin-Ta; Hajirasouliha, Iman; Raphael, Benjamin J
2014-06-15
Somatic copy number aberrations (SCNAs) are frequent in cancer genomes, but many of these are random, passenger events. A common strategy to distinguish functional aberrations from passengers is to identify those aberrations that are recurrent across multiple samples. However, the extensive variability in the length and position of SCNAs makes the problem of identifying recurrent aberrations notoriously difficult. We introduce a combinatorial approach to the problem of identifying independent and recurrent SCNAs, focusing on the key challenge of separating the overlaps in aberrations across individuals into independent events. We derive independent and recurrent SCNAs as maximal cliques in an interval graph constructed from overlaps between aberrations. We efficiently enumerate all such cliques, and derive a dynamic programming algorithm to find an optimal selection of non-overlapping cliques, resulting in a very fast algorithm, which we call RAIG (Recurrent Aberrations from Interval Graphs). We show that RAIG outperforms other methods on simulated data and also performs well on data from three cancer types from The Cancer Genome Atlas (TCGA). In contrast to existing approaches that employ various heuristics to select independent aberrations, RAIG optimizes a well-defined objective function. We show that this allows RAIG to identify rare aberrations that are likely functional, but are obscured by overlaps with larger passenger aberrations. http://compbio.cs.brown.edu/software. © The Author 2014. Published by Oxford University Press.
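This is not the RAIG implementation; it is a small sketch of the two combinatorial ingredients the abstract names, under the simplifying assumption that aberrations are one-dimensional integer intervals: maximal cliques of the interval graph are enumerated via common right endpoints (the Helly property in one dimension), and a weighted-interval-scheduling dynamic program then selects a best set of non-overlapping clique regions, with clique size standing in for a recurrence score.

```python
# Sketch: maximal cliques of an interval graph, then a DP over non-overlapping clique regions.
from bisect import bisect_right

intervals = [(1, 5), (2, 9), (4, 7), (8, 12), (10, 15)]   # made-up SCNAs: (start, end)

# Candidate cliques: the intervals covering each right endpoint.
candidates = [frozenset(i for i, (s, e) in enumerate(intervals) if s <= r <= e)
              for _, r in intervals]
# Keep only maximal cliques.
maximal = [c for c in set(candidates) if not any(c < d for d in candidates)]

# Each clique's region is the common intersection; weight = clique size (stand-in score).
regions = []
for c in maximal:
    start = max(intervals[i][0] for i in c)
    end = min(intervals[i][1] for i in c)
    regions.append((start, end, len(c)))
regions.sort(key=lambda x: x[1])

# Weighted interval scheduling DP over the clique regions.
ends = [e for _, e, _ in regions]
best = [0] * (len(regions) + 1)
for k, (s, e, w) in enumerate(regions, start=1):
    p = bisect_right(ends, s - 1, 0, k - 1)   # last region ending before s
    best[k] = max(best[k - 1], best[p] + w)

print("maximal cliques:", [sorted(c) for c in maximal])
print("best total weight of non-overlapping regions:", best[-1])
```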
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
Balasubramonian, Rajeev [Sandy, UT; Dwarkadas, Sandhya [Rochester, NY; Albonesi, David [Ithaca, NY
2012-01-24
In a processor having multiple clusters which operate in parallel, the number of clusters in use can be varied dynamically. At the start of each program phase, each configuration option is run for an interval to determine the optimal configuration, which is then used until the next phase change is detected. The optimum instruction interval is determined by starting with a minimum interval and doubling it until a low stability factor is reached.
Refined genetic algorithm -- Economic dispatch example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
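Not the refined algorithm described above: a toy genetic algorithm for a three-unit economic dispatch with made-up quadratic cost curves and a quadratic penalty for the demand balance; of the enhancements named in the abstract, only simple elitism is illustrated.

```python
# Toy GA for a 3-unit economic dispatch (illustrative cost data and penalty factor).
import numpy as np

rng = np.random.default_rng(3)
a = np.array([0.008, 0.010, 0.012])    # cost = a*P^2 + b*P + c  ($/h), made up
b = np.array([7.0, 6.5, 8.0])
c = np.array([200.0, 180.0, 140.0])
p_min = np.array([50.0, 40.0, 30.0])
p_max = np.array([300.0, 250.0, 150.0])
demand = 450.0                         # MW

def fitness(pop):
    cost = (a * pop**2 + b * pop + c).sum(axis=1)
    penalty = 1e3 * (pop.sum(axis=1) - demand) ** 2   # demand-balance penalty
    return cost + penalty

pop = rng.uniform(p_min, p_max, size=(60, 3))
for _ in range(200):
    f = fitness(pop)
    elite = pop[np.argsort(f)[:10]]                            # elitism: keep the 10 best
    parents = elite[rng.integers(0, 10, size=(50, 2))]
    alpha = rng.random((50, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # blend crossover
    children += rng.normal(scale=2.0, size=children.shape)           # mutation
    pop = np.clip(np.vstack([elite, children]), p_min, p_max)

best = pop[np.argmin(fitness(pop))]
print("dispatch (MW):", best.round(1), "total:", round(best.sum(), 1))
```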
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, as an AUV, has become a popular application in intelligent education, civil, and military settings. In the nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the resulting interval is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm, the suboptimal interval Kalman filter. The suboptimal interval Kalman filter scheme replaces the interval matrix inverse with its worst-case inverse; it approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte-Carlo simulation results show that the optimal trajectory of the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter method and the standard Kalman filter.
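Not the suboptimal interval filter itself: a minimal nominal Kalman filter for 1-D constant-velocity tracking, included only to fix the notation of the predict/update cycle. The interval-arithmetic extension described above (propagating bounds on the system matrices and substituting a single worst-case inverse in the gain computation) is indicated in comments but not implemented.

```python
# Minimal nominal Kalman filter for 1-D constant-velocity tracking (illustrative only).
# The suboptimal interval variant would propagate interval bounds on F, H, Q, R and
# replace the interval matrix inverse in the gain computation with a worst-case inverse.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

rng = np.random.default_rng(4)
x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(scale=0.5, size=1)
    # predict
    x_est, P = F @ x_est, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("final estimate (pos, vel):", x_est.round(2))
```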
Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz
2017-01-01
High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD). Their clinical application has been hindered by the laborious current method used for their quantification. The aims were to optimize a simple and fast precipitation method to isolate sd-LDL particles and to establish a reference interval in a Mediterranean population. Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation). Reference values were estimated according to the Clinical and Laboratory Standards Institute and the International Federation of Clinical Chemistry and Laboratory Medicine recommendations. sd-LDL-c concentration was measured in serum from 79 subjects with no lipid metabolism abnormalities. The Passing-Bablok regression equation is y = 1.52 (0.72 to 1.73) + 0.07 x (-0.1 to 0.13), demonstrating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04-0.47 mmol/L. An optimization of the heparin-Mg2+ precipitation method for sd-LDL particle isolation was performed, and reference intervals were established in a Spanish Mediterranean population. Measured values were equivalent to those obtained with the reference method, assuring its clinical application when tested in both normolipidemic and dyslipidemic subjects.
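Not from the study: a sketch of the nonparametric central 95% reference interval calculation (2.5th and 97.5th percentiles) applied to synthetic sd-LDL-c values; note that guideline documents generally recommend larger reference cohorts (on the order of 120 individuals) for the purely nonparametric approach.

```python
# Hypothetical sketch: nonparametric 95% reference interval from synthetic sd-LDL-c values.
import numpy as np

rng = np.random.default_rng(5)
sd_ldl_c = rng.lognormal(mean=np.log(0.18), sigma=0.5, size=79)   # mmol/L, made up

lower, upper = np.percentile(sd_ldl_c, [2.5, 97.5])
print(f"reference interval: {lower:.2f} to {upper:.2f} mmol/L (n = {len(sd_ldl_c)})")
```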
Blacksell, Stuart D.; Tanganuchitcharnchai, Ampai; Jintaworn, Suthatip; Kantipong, Pacharee; Richards, Allen L.; Day, Nicholas P. J.
2016-01-01
The enzyme-linked immunosorbent assay (ELISA) has been proposed as an alternative serologic diagnostic test to the indirect immunofluorescence assay (IFA) for scrub typhus. Here, we systematically determine the optimal sample dilution and cutoff optical density (OD) and estimate the accuracy of IgM ELISA using Bayesian latent class models (LCMs). Data from 135 patients with undifferentiated fever were reevaluated using Bayesian LCMs. Every patient was evaluated for the presence of an eschar and tested with a blood culture for Orientia tsutsugamushi, three different PCR assays, and an IgM IFA. The IgM ELISA was performed for every sample at sample dilutions from 1:100 to 1:102,400 using crude whole-cell antigens of the Karp, Kato, and Gilliam strains of O. tsutsugamushi developed by the Naval Medical Research Center. We used Bayesian LCMs to generate unbiased receiver operating characteristic curves and found that the sample dilution of 1:400 was optimal for the IgM ELISA. With the optimal cutoff OD of 1.474 at a sample dilution of 1:400, the IgM ELISA had a sensitivity of 85.7% (95% credible interval [CrI], 77.4% to 86.7%) and a specificity of 98.1% (95% CrI, 97.2% to 100%) using paired samples. For the ELISA, the OD could be determined objectively and quickly, in contrast to the reading of IFA slides, which was both subjective and labor-intensive. The IgM ELISA for scrub typhus has high diagnostic accuracy and is less subjective than the IgM IFA. We suggest that the IgM ELISA may be used as an alternative reference test to the IgM IFA for the serological diagnosis of scrub typhus. PMID:27008880
Consideration of computer limitations in implementing on-line controls. M.S. Thesis
NASA Technical Reports Server (NTRS)
Roberts, G. K.
1976-01-01
A formal statement of the optimal control problem is formulated which includes the interval of discretization as an optimization parameter, and this is extended to include selection of a control algorithm as part of the optimization procedure. The performance of the scalar linear system depends on the discretization interval. Discrete-time versions of the output feedback regulator and an optimal compensator are developed, and these results are used to present an example of a system for which fast partial-state-feedback control minimizes a quadratic cost better than either full-state feedback control or a compensator.
Fuel optimal maneuvers of spacecraft about a circular orbit
NASA Technical Reports Server (NTRS)
Carter, T. E.
1982-01-01
Fuel optimal maneuvers of spacecraft relative to a body in circular orbit are investigated using a point mass model in which the magnitude of the thrust vector is bounded. All nonsingular optimal maneuvers consist of intervals of full thrust and coast and are found to contain at most seven such intervals in one period. Only four boundary conditions where singular solutions occur are possible. Computer simulation of optimal flight path shapes and switching functions are found for various boundary conditions. Emphasis is placed on the problem of soft rendezvous with a body in circular orbit.
Altman, Alon D; Nelson, Gregg; Chu, Pamela; Nation, Jill; Ghatage, Prafull
2012-06-01
The objective of this study was to examine both overall and disease-free survival of patients with advanced stage ovarian cancer after immediate or interval debulking surgery based on residual disease. We performed a retrospective chart review at the Tom Baker Cancer Centre in Calgary, Alberta of patients with pathologically confirmed stage III or IV ovarian cancer, fallopian tube cancer, or primary peritoneal cancer between 2003 and 2007. We collected data on the dates of diagnosis, recurrence, and death; cancer stage and grade; patients' age; surgery performed; and residual disease. One hundred ninety-two patients were included in the final analysis. The optimal debulking rate was 64.8% with immediate surgery and 85.9% with interval surgery. Overall and disease-free survival rates were better for optimally debulked disease (< 1 cm) than for suboptimally debulked disease with both immediate and interval surgery (P < 0.001). Overall survival rates for optimally debulked disease were not significantly different between patients having immediate and interval surgery (P = 0.25). In the immediate surgery group, patients with microscopic residual disease had better disease-free survival (P = 0.015) and overall survival (P = 0.005) than patients with < 1 cm residual disease. In patients who had interval surgery, those with microscopic residual disease had better disease-free survival than those with < 1 cm disease (P = 0.05), but not better overall survival (P = 0.42). Patients with microscopic residual disease who had immediate surgery had significantly better overall survival than those who had interval surgery (P = 0.034). In women with advanced stage ovarian cancer, the goal of surgery should be resection of disease to microscopic residual at the initial procedure; this results in better overall survival than lesser degrees of resection. Further studies are required to determine optimal surgical management.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm further develops the approach proposed in [1] with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
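Not the paper's exact construction: an illustrative two-sided confidence interval for the mean of bounded observations obtained by solving Bernstein's inequality, P(|mean − μ| ≥ t) ≤ 2·exp(−n t² / (2σ² + 2Bt/3)), for t. It assumes the observations lie in [0, B] and plugs the sample variance in for the true variance, so it sketches the idea rather than reproducing the authors' guaranteed-coverage intervals.

```python
# Illustrative Bernstein-style confidence interval for a bounded mean (small samples).
import numpy as np

def bernstein_ci(x, B, alpha=0.05):
    n = len(x)
    mean, var = np.mean(x), np.var(x, ddof=1)
    L = np.log(2.0 / alpha)
    # Solve n*t^2 = L*(2*var + (2/3)*B*t) for the deviation t.
    t = (2 * B * L / 3 + np.sqrt((2 * B * L / 3) ** 2 + 8 * n * var * L)) / (2 * n)
    return mean - t, mean + t

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, size=15)           # small sample of values bounded in [0, 10]
lo, hi = bernstein_ci(x, B=10)
print(f"Bernstein-style 95% CI: ({lo:.2f}, {hi:.2f})")
```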
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Carotid-femoral pulse wave velocity in a healthy adult sample: The ELSA-Brasil study.
Baldo, Marcelo Perim; Cunha, Roberto S; Molina, Maria Del Carmen B; Chór, Dora; Griep, Rosane H; Duncan, Bruce B; Schmidt, Maria Inês; Ribeiro, Antonio L P; Barreto, Sandhi M; Lotufo, Paulo A; Bensenor, Isabela M; Pereira, Alexandre C; Mill, José Geraldo
2018-01-15
Aging leads to a decline in essential physiological functions, and the vascular system is strongly affected by artery stiffening. We intended to define age- and sex-specific reference values for carotid-to-femoral pulse wave velocity (cf-PWV) in a sample free of major risk factors. The ELSA-Brasil study enrolled 15,105 participants aged 35-74 years. The healthy sample was obtained by excluding diabetics, those above the optimal and normal blood pressure levels, those with a body mass index ≤18.5 or ≥25 kg/m2, current and former smokers, and those with self-reported previous cardiovascular disease. After exclusions, the sample consisted of 2158 healthy adults (1412 women). Although the cf-PWV predictors were similar between sexes (age, mean arterial pressure (MAP) and heart rate), cf-PWV was higher in men (8.74±1.15 vs. 8.31±1.13 m/s; adjusted for age and MAP, P<0.001) for all age intervals. When divided by MAP categories, cf-PWV was significantly higher in those with MAP ≥85 mmHg, regardless of sex and for all age intervals. Risk factors for arterial stiffening in the entire ELSA-Brasil population (n=15,105) doubled the age-related slope of cf-PWV increase, regardless of sex (0.0919±0.182 vs. 0.0504±0.153 m/s per year for men, 0.0960±0.173 vs. 0.0606±0.139 m/s per year for women). cf-PWV differs between men and women, and even within the optimal and normal range of MAP and in the absence of other classical risk factors for arterial stiffness, reference values for cf-PWV should take MAP levels into account. Also, the presence of major risk factors in the general population doubles the age-related rise in cf-PWV. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines methods for the development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the combined use of methods of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form on the basis of the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values. The former show the degree of proximity of the realization of the designed system to the efficient or Pareto-optimal variants. The study analyzes an example of choosing the optimal variant for a complex system using heterogeneous quality criteria.
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
Optimization of Sample Preparation processes of Bone Material for Raman Spectroscopy.
Chikhani, Madelen; Wuhrer, Richard; Green, Hayley
2018-03-30
Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.
Tawfik, Ahmed M; Razek, Ahmed A; Elhawary, Galal; Batouty, Nihal M
2014-01-01
To evaluate the effect of increasing the sampling interval from 1 second (1 image per second) to 2 seconds (1 image every 2 seconds) on computed tomographic (CT) perfusion (CTP) of head and neck tumors. Twenty patients underwent CTP studies of head and neck tumors with images acquired in cine mode for 50 seconds using a sampling interval of 1 second. Using deconvolution-based software, analysis of CTP was done with a sampling interval of 1 second and then 2 seconds. Perfusion maps representing blood flow, blood volume, mean transit time, and permeability surface area product (PS) were obtained. Quantitative tumor CTP values were compared between the 2 sampling intervals. Two blinded radiologists compared the subjective quality of CTP maps using a 3-point scale between the 2 sampling intervals. Radiation dose parameters were recorded for the 2 sampling interval rates. No significant differences were observed between the means of the 4 perfusion parameters generated using both sampling intervals; all P > 0.05. The 95% limits of agreement between the 2 sampling intervals were -65.9 to 48.1 mL/min per 100 g for blood flow, -3.6 to 3.1 mL/100 g for blood volume, -2.9 to 3.8 seconds for mean transit time, and -10.0 to 12.5 mL/min per 100 g for PS. There was no significant difference between the subjective quality scores of CTP maps obtained using the 2 sampling intervals; all P > 0.05. Radiation dose was halved when the sampling interval increased from 1 to 2 seconds. Increasing the sampling interval to 1 image every 2 seconds does not compromise the image quality and has no significant effect on quantitative perfusion parameters of head and neck tumors. The radiation dose is halved.
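Not from the study: a sketch of the 95% limits-of-agreement calculation (mean difference ± 1.96 SD of the paired differences) used to compare the two sampling intervals, applied to synthetic paired blood-flow values.

```python
# Hypothetical sketch: Bland-Altman 95% limits of agreement between blood-flow values
# from 1 s and 2 s sampling intervals (synthetic paired measurements).
import numpy as np

rng = np.random.default_rng(7)
bf_1s = rng.normal(100, 30, size=20)          # mL/min per 100 g, made up
bf_2s = bf_1s + rng.normal(0, 15, size=20)    # 2 s values with extra scatter

diff = bf_2s - bf_1s
bias, sd = diff.mean(), diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.1f}, 95% limits of agreement: {lower:.1f} to {upper:.1f}")
```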
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
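Not the article's model: a sketch of how an expected maintenance cost rate can be evaluated over a grid of preventive intervals under a Weibull failure model (an age-replacement-style calculation), with made-up shape, scale, and cost figures standing in for the fitted values.

```python
# Hypothetical sketch: expected cost rate vs. preventive-maintenance interval under a
# Weibull failure model (age-replacement policy); all parameters are illustrative.
import numpy as np

beta, eta = 2.5, 400.0        # Weibull shape and scale (days), made up
c_p, c_f = 1.0, 10.0          # preventive vs. corrective maintenance cost, made up

def cost_rate(T, n=2000):
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)                        # reliability function R(t)
    dt = t[1] - t[0]
    expected_cycle_len = np.sum((R[:-1] + R[1:]) * dt / 2)  # integral of R(t) on [0, T]
    expected_cycle_cost = c_p * R[-1] + c_f * (1 - R[-1])
    return expected_cycle_cost / expected_cycle_len

intervals = np.linspace(50, 800, 100)
rates = [cost_rate(T) for T in intervals]
best = intervals[int(np.argmin(rates))]
print(f"approximately optimal preventive interval: {best:.0f} days")
```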
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving interval linear programming problems was presented. An illustrative case was studied with the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
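Not the paper's interval-LP solution method: a toy two-scenario treatment in which the supply problem is solved at the optimistic and pessimistic bounds of its interval coefficients with SciPy's linprog, so the minimum cost itself comes out as an interval. Two supply zones, one plant; all numbers are made up.

```python
# Toy sketch: report the minimum supply cost as an interval by solving the LP at the
# optimistic and pessimistic bounds of the interval-valued costs and demand.
import numpy as np
from scipy.optimize import linprog

supply = np.array([400.0, 300.0])                                    # t of grain per zone
cost_lo, cost_hi = np.array([30.0, 45.0]), np.array([38.0, 55.0])    # $/t, interval costs
demand_lo, demand_hi = 500.0, 600.0                                  # interval demand (t)

def solve(cost, demand):
    # minimize cost @ x  subject to  x1 + x2 >= demand, 0 <= x_i <= supply_i
    res = linprog(c=cost, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                  bounds=list(zip([0.0, 0.0], supply)), method="highs")
    return res.fun

print(f"minimum cost interval: [{solve(cost_lo, demand_lo):.0f}, "
      f"{solve(cost_hi, demand_hi):.0f}] $")
```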
Sub-Audible Speech Recognition Based upon Electromyographic Signals
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C. (Inventor); Agabon, Shane T. (Inventor); Lee, Diana D. (Inventor)
2012-01-01
Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain by a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different lengths, which may help to develop the digital filter algorithm for sine and cosine transforms and promote its application.
Modified dwell time optimization model and its applications in subaperture polishing.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-05-20
The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified dwell-time optimization model based on an iterative numerical method, assisted by extended surface forms and tool paths to suppress the edge effect. Compared with discrete convolution and linear equation models, the proposed model is inherently compatible with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The simulated fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization costs much less time. Using the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly and inversely related to the convergence rate and polishing time, and a TIF size of ~1/7 of the workpiece size is preferred; (2) the polishing time is less sensitive to the path interval, but increasing the interval markedly reduces the convergence rate, and a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model is deployed on a JR-1800 and an MRF-180 machine. Figuring results for a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane yield RMS of 0.016λ and 0.013λ (λ=632.8 nm), respectively, and thereby validate the feasibility of the proposed dwell time model for subaperture polishing.
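Not the authors' model: a 1-D toy of the general iterative dwell-time idea, assuming that material removal is the convolution of the tool influence function with the dwell-time map and updating the map multiplicatively toward the target removal so it stays non-negative; edge extension, 2-D tool paths, and the paper's specific update rule are omitted.

```python
# 1-D toy of iterative dwell-time optimization: removal = TIF (*) dwell, with a
# multiplicative update toward the target removal (keeps dwell non-negative).
import numpy as np

x = np.linspace(-1, 1, 201)
target = 1.0 + 0.3 * np.cos(3 * np.pi * x)                  # desired removal (arbitrary units)
tif = np.exp(-(np.linspace(-0.2, 0.2, 41) / 0.07) ** 2)     # Gaussian tool influence function
tif /= tif.sum()

dwell = np.full_like(target, target.mean())                 # non-negative initial guess
for _ in range(200):
    removal = np.convolve(dwell, tif, mode="same")
    dwell *= target / np.maximum(removal, 1e-9)             # multiplicative update

residual = target - np.convolve(dwell, tif, mode="same")
print(f"RMS residual: {np.sqrt(np.mean(residual ** 2)):.2e}")
```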
The Clustering of Lifestyle Behaviours in New Zealand and their Relationship with Optimal Wellbeing.
Prendergast, Kate B; Mackay, Lisa M; Schofield, Grant M
2016-10-01
The purpose of this research was to determine (1) associations between multiple lifestyle behaviours and optimal wellbeing and (2) the extent to which five lifestyle behaviours-sleep, physical activity, sedentary behaviour, sugary drink consumption, and fruit and vegetable intake-cluster in a national sample. A national sample of New Zealand adults participated in a web-based wellbeing survey. Five lifestyle behaviours-sleep, physical activity, sedentary behaviour, sugary drink consumption, and fruit and vegetable intake-were dichotomised into healthy (meets recommendations) and unhealthy (does not meet recommendations) categories. Optimal wellbeing was calculated using a multi-dimensional flourishing scale, and binary logistic regression analysis was used to calculate the relationship between multiple healthy behaviours and optimal wellbeing. Clustering was examined by comparing the observed and expected prevalence rates (O/E) of healthy and unhealthy two-, three-, four-, and five-behaviour combinations. Data from 9425 participants show those engaging in four to five healthy behaviours (23 %) were 4.7 (95 % confidence interval (CI) 3.8-5.7) times more likely to achieve optimal wellbeing compared to those engaging in zero to one healthy behaviour (21 %). Clustering was observed for healthy (5 %, O/E 2.0, 95 % CI 1.8-2.2) and unhealthy (5 %, O/E 2.1, 95 % CI 1.9-2.3) five-behaviour combinations and for four- and three-behaviour combinations. At the two-behaviour level, healthy fruit and vegetable intake clustered with all behaviours, except sleep which did not cluster with any behaviour. Multiple lifestyle behaviours were positively associated with optimal wellbeing. The results show lifestyle behaviours cluster, providing support for multiple behaviour lifestyle-based interventions for optimising wellbeing.
Hailu, Desta; Gulte, Teklemariam
2016-01-01
Background. One of the key strategies to reduce fertility and promote the health status of mothers and their children is adhering to optimal birth spacing. However, women still have shorter birth intervals, and studies addressing their determinants are scarce. The objective of this study, therefore, was to assess determinants of birth interval among women who had at least two consecutive live births. Methods. A case-control study was conducted from February to April 2014. Cases were women with short birth intervals (<3 years), whereas controls were women with a history of optimal birth intervals (3 to 5 years). Bivariate and multivariable analyses were performed. Result. Having no formal education (AOR = 2.36, 95% CI: [1.23–4.52]), duration of breast feeding of less than 24 months (AOR = 66.03, 95% CI: [34.60–126]), the preceding child being female (AOR = 5.73, 95% CI: [3.18–10.310]), modern contraceptive use (AOR = 2.79, 95% CI: [1.58–4.940]), and poor wealth index (AOR = 4.89, 95% CI: [1.81–13.25]) were independent predictors of short birth interval. Conclusion. Inequalities in education, duration of breast feeding, sex of the preceding child, contraceptive method use, and wealth index were markers of unequal distribution of inter-birth intervals. Thus, to optimize birth spacing, strategies of providing information, education and communication targeting the predictor variables should be improved. PMID:27239553
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample under assumed hydrostatic conditions at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample estimated at the first and final boundary heads in the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for samples with long heights and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
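A minimal sketch of fitting water retention parameters is shown below, assuming a van Genuchten form for the unsaturated parametric function (the abstract does not name the specific function) and simple least squares in place of the Bayesian scheme used in the study.

```python
# Sketch of fitting water retention parameters to (head, water content) pairs,
# assuming a van Genuchten parametric form and ordinary least squares rather
# than the study's Bayesian optimization. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    # h: capillary head (cm, positive); theta: volumetric water content
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Synthetic "observed" data standing in for TDR-derived water contents
h_obs = np.array([1, 5, 10, 20, 40, 80, 150, 300], dtype=float)
rng = np.random.default_rng(0)
theta_obs = van_genuchten(h_obs, 0.05, 0.40, 0.03, 2.5) + rng.normal(0, 0.005, h_obs.size)

p0 = [0.03, 0.38, 0.02, 2.0]
bounds = ([0.0, 0.2, 1e-4, 1.1], [0.15, 0.6, 0.5, 6.0])
popt, pcov = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0, bounds=bounds)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["theta_r", "theta_s", "alpha", "n"], popt, perr):
    print(f"{name:8s} = {val:.4f} +/- {err:.4f}")
```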
Fuel optimal maneuvers for spacecraft with fixed thrusters
NASA Technical Reports Server (NTRS)
Carter, T. C.
1982-01-01
Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.
Researches of fruit quality prediction model based on near infrared spectrum
NASA Astrophysics Data System (ADS)
Shen, Yulin; Li, Lian
2018-04-01
With rising standards for food quality and safety, people pay more attention to the internal quality of fruit, so measuring fruit internal quality is increasingly important. In general, nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets; in this paper, we therefore aim to establish a fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. First, prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM, and PCA + LS-SVM classifiers are designed and implemented. Second, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Third, we obtain the optimal models by comparing 15 prediction models under a multi-classifier competition mechanism; nonparametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the nonparametric estimate used to evaluate each prediction result and the estimated value and confidence interval serving as references. The experimental results demonstrate that this approach achieves a sound evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
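The sketch below illustrates one of the simplest model families mentioned above, a PCA plus neural-network regression pipeline with Savitzky-Golay smoothing, on synthetic spectra. The NSCT-domain filtering, Kennard-Stone sample selection, multi-classifier comparison, and cat swarm optimization steps of the paper are not reproduced.

```python
# Illustrative PCA + neural-network regression pipeline for predicting SSC
# from NIR-like spectra, with Savitzky-Golay smoothing as preprocessing.
# Synthetic data only; not the paper's full scheme.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 300, 200
spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth-ish curves
ssc = spectra[:, 50] * 0.3 - spectra[:, 150] * 0.2 + rng.normal(0, 0.5, n_samples)

spectra = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

X_train, X_test, y_train, y_test = train_test_split(spectra, ssc, test_size=0.25,
                                                    random_state=0)
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=0))
model.fit(X_train, y_train)
print("Test R^2:", round(r2_score(y_test, model.predict(X_test)), 3))
```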
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metric uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple-metric performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metric uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
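For orientation, the sketch below shows the baseline GLUE workflow with Latin hypercube sampling that the ɛ-NSGAII scheme is compared against: sample parameter sets, keep the behavioral ones by a likelihood threshold, and form an uncertainty band. The "hydrologic model" is a toy stand-in, not the Xinanjiang model.

```python
# Sketch of GLUE-style behavioral parameter selection using Latin hypercube
# sampling (the comparison baseline in the study). The two-parameter model
# below is a hypothetical stand-in for a rainfall-runoff model.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)
t = np.arange(100)

def toy_model(p):
    return p[0] * np.exp(-p[1] * t) * 50.0

observed = toy_model([0.7, 0.2]) + rng.normal(0, 1.0, t.size)

sampler = qmc.LatinHypercube(d=2, seed=1)
params = qmc.scale(sampler.random(n=2000), l_bounds=[0.1, 0.01], u_bounds=[1.0, 1.0])

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(toy_model(p), observed) for p in params])
behavioral = params[scores > 0.8]                    # GLUE behavioral threshold
print(f"{len(behavioral)} behavioral sets out of {len(params)} sampled")

# Prediction uncertainty band from the behavioral ensemble
sims = np.array([toy_model(p) for p in behavioral])
lower, upper = np.percentile(sims, [5, 95], axis=0)
print("Mean 5-95% band width:", round(float(np.mean(upper - lower)), 3))
```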
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
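The minimal sketch below illustrates the interval linear programming idea underlying such models: the same economic-return problem is solved at the optimistic and pessimistic bounds of its interval coefficients to obtain an interval-valued system return. The industries, coefficients, and load cap are hypothetical, and the REILP risk/aspiration-level machinery of the study is not reproduced.

```python
# Minimal interval-LP sketch: maximize return from two hypothetical industries
# subject to a pollutant-load limit, with interval coefficients evaluated at
# their optimistic and pessimistic bounds (not the full REILP formulation).
import numpy as np
from scipy.optimize import linprog

# Net benefit per unit activity (interval): columns = [lower, upper]
benefit = np.array([[3.0, 4.0],    # industry 1
                    [5.0, 6.5]])   # industry 2
# Pollutant load per unit activity (interval)
load = np.array([[0.8, 1.0],
                 [1.5, 2.0]])
load_cap = 120.0                   # allowable total load
activity_bounds = [(0, 80), (0, 50)]

def solve(benefit_col, load_col):
    # linprog minimizes, so negate the benefits to maximize
    res = linprog(c=-benefit[:, benefit_col],
                  A_ub=[load[:, load_col]], b_ub=[load_cap],
                  bounds=activity_bounds, method="highs")
    return -res.fun, res.x

best_return, x_best = solve(benefit_col=1, load_col=0)    # optimistic bounds
worst_return, x_worst = solve(benefit_col=0, load_col=1)  # pessimistic bounds
print(f"System return interval: [{worst_return:.1f}, {best_return:.1f}]")
print("Optimistic plan:", x_best, " Pessimistic plan:", x_worst)
```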
A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip
2014-11-01
This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for a humanoid robot. Pixel-wise intensity, gradient, and C1 SMF features are extracted via a local homogeneity model and Gabor filters and used as inputs to the MFMK-SVM model, providing multiple features per sample for easier implementation and efficient computation. A new clustering method, the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed; it integrates a type-2 fuzzy criterion into the iterative clustering optimization to improve the robustness and reliability of the clustering results. Furthermore, the clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the capability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of the proposed method.
Bae, Jong-Myon; Shin, Sang Yop; Kim, Eun Hee
2015-01-01
Purpose This retrospective cohort study was conducted to estimate the optimal interval for gastric cancer screening in Korean adults with initial negative screening results. Materials and Methods This study consisted of voluntary Korean screenees aged 40 to 69 years who underwent subsequent screening gastroscopies after testing negative in the baseline screening performed between January 2007 and December 2011. A new case was defined as the presence of gastric cancer cells in biopsy specimens obtained upon gastroscopy. The follow-up periods were calculated during the months between the date of baseline screening gastroscopy and positive findings upon subsequent screenings, stratified by sex and age group. The mean sojourn time (MST) for determining the screening interval was estimated using the prevalence/incidence ratio. Results Of the 293,520 voluntary screenees for the gastric cancer screening program, 91,850 (31.29%) underwent subsequent screening gastroscopies between January 2007 and December 2011. The MSTs in men and women were 21.67 months (95% confidence intervals [CI], 17.64 to 26.88 months) and 15.14 months (95% CI, 9.44 to 25.85 months), respectively. Conclusion These findings suggest that the optimal interval for subsequent gastric screening in both men and women is 24 months, supporting the 2-year interval recommended by the nationwide gastric cancer screening program. PMID:25687874
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.
Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.
2016-01-01
Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerable dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with increasing number of dose levels. PMID:27435150
Population-wide folic acid fortification and preterm birth: testing the folate depletion hypothesis.
Naimi, Ashley I; Auger, Nathalie
2015-04-01
We assess whether population-wide folic acid fortification policies were followed by a reduction of preterm and early-term birth rates in Québec among women with short and optimal interpregnancy intervals. We extracted birth certificate data for 1.3 million births between 1981 and 2010 to compute age-adjusted preterm and early-term birth rates stratified by short and optimal interpregnancy intervals. We used Joinpoint regression to detect changes in the preterm and early term birth rates and assess whether these changes coincide with the implementation of population-wide folic acid fortification. A change in the preterm birth rate occurred in 2000 among women with short (95% confidence interval [CI] = 1994, 2005) and optimal (95% CI = 1995, 2008) interpregnancy intervals. Changes in early term birth rates did not coincide with the implementation of folic acid fortification. Our results do not indicate a link between folic acid fortification and early term birth but suggest an improvement in preterm birth rates after implementation of a nationwide folic acid fortification program.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
Lower Limb Function in Elderly Korean Adults Is Related to Cognitive Function.
Kim, A-Sol; Ko, Hae-Jin
2018-05-01
Patients with cognitive impairment have decreased lower limb function. Therefore, we aimed to investigate the relationship between lower limb function and cognitive disorders to determine whether lower limb function can be screened to identify cognitive decline. Using data from the Korean National Health Insurance Service-National Sample Cohort database, we assessed the cognitive and lower limb functioning of 66-year-olds who underwent national health screening between 2010 and 2014. Cognitive function was assessed via a questionnaire. Timed Up-and-Go (TUG) and one-leg-standing (OLS) tests were performed to evaluate lower limb function. Associations between cognitive and lower limb functions were analyzed, and optimal cut-off points for these tests to screen for cognitive decline were determined. Cognitive function was significantly correlated with TUG interval (r = 0.414, p < 0.001) and OLS duration (r = −0.237, p < 0.001). Optimal cut-off points for screening cognitive disorders were >11 s and ≤12 s for TUG interval and OLS duration, respectively. Among 66-year-olds who underwent national health screening, a significant correlation between lower limb and cognitive function was demonstrated. The TUG and OLS tests are useful screening tools for cognitive disorders in elderly patients. A large-scale prospective cohort study should be conducted to investigate the causal relationship between cognitive and lower limb function.
Morimoto, Akemi; Nagao, Shoji; Kogiku, Ai; Yamamoto, Kasumi; Miwa, Maiko; Wakahashi, Senn; Ichida, Kotaro; Sudo, Tamotsu; Yamaguchi, Satoshi; Fujiwara, Kiyoshi
2016-06-01
The purpose of this study is to investigate the clinical characteristics to determine the optimal timing of interval debulking surgery following neoadjuvant chemotherapy in patients with advanced epithelial ovarian cancer. We reviewed the charts of women with advanced epithelial ovarian cancer, fallopian tube cancer or primary peritoneal cancer who underwent interval debulking surgery following neoadjuvant chemotherapy at our cancer center from April 2006 to April 2014. There were 139 patients, including 91 with ovarian cancer [International Federation of Gynecology and Obstetrics (FIGO) Stage IIIc in 56 and IV in 35], two with fallopian tube cancers (FIGO Stage IV, both) and 46 with primary peritoneal cancer (FIGO Stage IIIc in 27 and IV in 19). After 3-6 cycles (median, 4 cycles) of platinum-based chemotherapy, interval debulking surgery was performed. Sixty-seven patients (48.2%) achieved complete resection of all macroscopic disease, while 72 did not. More patients with cancer antigen 125 levels ≤25.8 mg/dl at pre-interval debulking surgery achieved complete resection than those with higher cancer antigen 125 levels (84.7 vs. 21.3%; P < 0.0001). Patients with no ascites at pre-interval debulking surgery also achieved a higher complete resection rate (63.5 vs. 34.1%; P < 0.0001). Moreover, most patients (86.7%) with cancer antigen 125 levels ≤25.8 mg/dl and no ascites at pre-interval debulking surgery achieved complete resection. A low cancer antigen 125 level of ≤25.8 mg/dl and the absence of ascites at pre-interval debulking surgery are major predictive factors for complete resection during interval debulking surgery and present useful criteria to determine the optimal timing of interval debulking surgery. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability, so we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
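The sketch below shows a generic Wald-type interval for the AUC treated as a single proportion, computed from only the estimated AUC and the total sample size as the abstract describes, with an optional continuity correction. The exact modification proposed by the authors may differ from this textbook formula; this is an illustrative sketch only.

```python
# Generic Wald-type confidence interval for the AUC viewed as a proportion,
# computed from the estimated AUC and the total sample size. The authors'
# specific modification may differ; this sketch only illustrates the idea.
import math
from scipy.stats import norm

def wald_auc_ci(auc, n, alpha=0.05, continuity=False):
    z = norm.ppf(1 - alpha / 2)
    half_width = z * math.sqrt(auc * (1 - auc) / n)
    if continuity:
        half_width += 1.0 / (2 * n)   # standard continuity correction for a proportion
    return max(0.0, auc - half_width), min(1.0, auc + half_width)

print(wald_auc_ci(auc=0.90, n=60))                   # large AUC, small sample
print(wald_auc_ci(auc=0.90, n=60, continuity=True))  # with continuity correction
```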
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
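As a small illustration of the final step described above, the sketch below reads an equal-tailed credible interval off a set of posterior samples. The draws are simulated normal variates standing in for MCMC output; the OBSM machinery itself is not reproduced.

```python
# Sketch of obtaining an equal-tailed Bayesian credible interval from posterior
# samples. The "posterior draws" are simulated stand-ins for MCMC output.
import numpy as np

rng = np.random.default_rng(7)
posterior_predictions = rng.normal(loc=2.3, scale=0.4, size=5000)  # stand-in MCMC draws

lower, upper = np.percentile(posterior_predictions, [2.5, 97.5])
print(f"Posterior mean: {posterior_predictions.mean():.3f}")
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```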
Research on Rigid Body Motion Tracing in Space based on NX MCD
NASA Astrophysics Data System (ADS)
Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang
2018-03-01
In MCD (Mechatronics Concept Designer), a module of the Siemens industrial design software UG (Unigraphics NX), users can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, users may wish to visualize the path of selected points on the moving object. To meet this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits the sampled points with a B-spline curve. Meanwhile, the traditional equal-interval sampling strategy is optimized by taking the actual constraints of the rigid bodies into account. The results show that this method satisfies the requirement and makes up for the deficiency of the traditional sampling method. Users can still edit and model on this 3D curve, and the expected result has been achieved.
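A minimal sketch of the curve-fitting step is shown below: a parametric B-spline is fitted through sampled 3-D trajectory points and evaluated on a fine grid. The sample points are synthetic; in MCD they would come from the solver's transformation matrices at each sampling instant.

```python
# Sketch of fitting a smooth B-spline through sampled 3-D trajectory points,
# in the spirit of the path-tracing step described above (synthetic samples).
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 2 * np.pi, 25)                     # sampling instants
x, y, z = np.cos(t), np.sin(t), 0.1 * t               # sampled point positions (helix)

tck, u = splprep([x, y, z], s=1e-4)                   # fit parametric B-spline
u_fine = np.linspace(0, 1, 200)
x_s, y_s, z_s = splev(u_fine, tck)                    # evaluate the smooth trace

print("Knots:", len(tck[0]), "control points per axis:", len(tck[1][0]))
```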
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability under a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
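The sketch below illustrates the optimization step only: an evolutionary optimizer searches for the monitoring interval that maximizes a steady-state availability function. The availability model here is a hypothetical stand-in (inspection downtime plus Weibull-distributed failure risk), not the semi-Markov model developed in the paper.

```python
# Sketch of choosing a condition-monitoring interval T to maximize availability
# with an evolutionary optimizer. The availability function is hypothetical.
import numpy as np
from scipy.optimize import differential_evolution

T_INSPECT = 2.0        # hours of downtime per inspection (assumed)
T_REPAIR = 60.0        # hours of downtime per undetected failure (assumed)
ETA, BETA = 800.0, 2.0  # assumed Weibull scale/shape for degradation failures

def unavailability(params):
    T = params[0]                                   # monitoring interval (hours)
    p_fail = 1.0 - np.exp(-(T / ETA) ** BETA)       # chance of failure within the interval
    downtime = T_INSPECT + p_fail * T_REPAIR        # expected downtime per cycle
    return downtime / (T + downtime)                # 1 - availability

result = differential_evolution(unavailability, bounds=[(10.0, 2000.0)], seed=3)
print(f"Optimal CBM interval: {result.x[0]:.0f} h, availability: {1 - result.fun:.4f}")
```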
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear, so obtaining an accurate solution to such a complex problem is very difficult. This paper presents an interval programming model with a two-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
Aulenbach, Brent T.
2013-01-01
A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine the optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals resulted in observed AMLE precision being significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration-discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing the calibration period length.
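The sketch below shows the generic regression-based load estimate in its simplest form: a log-log concentration-discharge rating curve fitted to sampled days, with a smearing bias correction, then summed over all days. AMLE's censored-data handling and the composite method's residual correction are not reproduced, and the data are synthetic.

```python
# Minimal sketch of a regression-based constituent load estimate using a
# log-log concentration-discharge rating curve and Duan's smearing estimator
# as one common retransformation bias correction. Synthetic data only.
import numpy as np

rng = np.random.default_rng(5)
days = 3650
discharge = np.exp(rng.normal(3.0, 0.8, days))                    # synthetic daily Q
true_conc = 5.0 * discharge ** 0.4 * np.exp(rng.normal(0, 0.3, days))
true_load = np.sum(true_conc * discharge)

sampled = np.arange(0, days, 15)                                  # 15-day sampling interval
slope, intercept = np.polyfit(np.log(discharge[sampled]), np.log(true_conc[sampled]), 1)
residuals = np.log(true_conc[sampled]) - (intercept + slope * np.log(discharge[sampled]))
smearing = np.mean(np.exp(residuals))                             # Duan's smearing estimator

est_conc = smearing * np.exp(intercept + slope * np.log(discharge))
est_load = np.sum(est_conc * discharge)
print(f"Estimated/true load ratio: {est_load / true_load:.3f}")
```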
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated by performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were derived for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9 $, was obtained, along with the corresponding land use allocations in the three planning periods. The resulting soil erosion was found to be reduced and controlled at a tolerable level over the watershed. Thus, the results confirm that the developed model is a useful tool for implementing land use management: not only does it allow local decision makers to optimize land use allocation, but it can also help answer how to accomplish land use changes.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
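As a small illustration of the model-order question discussed above, the sketch below fits ARMA models of several orders to a simulated series and selects the order by minimum AIC. It uses statsmodels on synthetic data; the physiological recordings from the study are not reproduced, and AIC is only one of several possible order-selection criteria.

```python
# Sketch of ARMA order selection by minimizing AIC on a simulated series.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)
ar = np.array([1.0, -0.6, 0.25])   # AR polynomial (statsmodels sign convention)
ma = np.array([1.0, 0.4])          # MA polynomial
y = arma_generate_sample(ar, ma, nsample=500)

best = None
for p in range(4):
    for q in range(4):
        res = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)

print(f"Best order by AIC: ARMA({best[1]},{best[2]}), AIC = {best[0]:.1f}")
```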
Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L
2017-11-01
When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results while increasing the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined whether a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
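The sketch below mirrors the validation logic on simulated records: per-animal behaviour proportions from a continuous record are regressed against 5-min scan estimates, and the three criteria are checked. Bout durations and lamb numbers are illustrative assumptions, not the study's data.

```python
# Sketch of validating a 5-min scan interval against continuous records with
# linear regression and the three criteria above. Simulated lying records only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_lambs, seconds = 18, 14 * 3600
continuous_prop, scan_prop = [], []
for _ in range(n_lambs):
    lying = np.zeros(seconds, dtype=bool)
    t = 0
    while t < seconds:
        bout = int(rng.exponential(2400))          # lying bout (~40 min, assumed)
        gap = int(rng.exponential(1800))           # active gap (~30 min, assumed)
        lying[t:t + bout] = True
        t += bout + gap
    continuous_prop.append(lying.mean())
    scan_prop.append(lying[299::300].mean())       # instantaneous scans every 5 min

res = stats.linregress(continuous_prop, scan_prop)
df = n_lambs - 2
p_slope = 2 * stats.t.sf(abs((res.slope - 1.0) / res.stderr), df)
p_intercept = 2 * stats.t.sf(abs(res.intercept / res.intercept_stderr), df)
print(f"R^2 = {res.rvalue**2:.3f}, slope p(=1) = {p_slope:.2f}, intercept p(=0) = {p_intercept:.2f}")
print("Meets all criteria:", res.rvalue**2 >= 0.90 and p_slope > 0.05 and p_intercept > 0.05)
```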
NASA Astrophysics Data System (ADS)
Affendi, I. H. H.; Sarah, M. S. P.; Alrokayan, Salman A. H.; Khan, Haseeb A.; Rusop, M.
2018-05-01
The sol-gel spin coating method is used to produce nanostructured TiO2 thin films. The surface topology and morphology were observed using atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM). The electrical properties were investigated using two-probe current-voltage (I-V) measurements to study the electrical resistivity, and hence the conductivity, of the thin films. The solution concentration was varied from 14.0 to 0.01 wt% in 0.02 wt% steps, with a 0.01 wt% step between the last concentrations of 0.02 and 0.01 wt%, to find the concentration with the highest conductivity; the sample with the optimized concentration was then carried forward to the thickness study, based on layer-by-layer deposition from 1 to 6 layers. The results show that the lower the TiO2 concentration, the more uniform the surface and the higher the conductivity. The 0.01 wt% sample had a conductivity of 1.77E-10 S/m and was used in the thickness study, in which the 3-layer deposition was chosen because its conductivity was the highest, at 3.9098E9 S/m.
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker while the attacker's valuation of targets, being the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To our best knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with budget of uncertainty and price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
Levecke, Bruno; Kaplan, Ray M; Thamsborg, Stig M; Torgerson, Paul R; Vercruysse, Jozef; Dobson, Robert J
2018-04-15
Although various studies have provided novel insights into how to best design, analyze and interpret a fecal egg count reduction test (FECRT), it is still not straightforward to provide guidance that allows improving both the standardization and the analytical performance of the FECRT across a variety of both animal and nematode species. For example, it has been suggested to recommend a minimum number of eggs to be counted under the microscope (not eggs per gram of feces), but we lack the evidence to recommend any number of eggs that would allow a reliable assessment of drug efficacy. Other aspects that need further research are the methodology of calculating uncertainty intervals (UIs; confidence intervals in case of frequentist methods and credible intervals in case of Bayesian methods) and the criteria of classifying drug efficacy into 'normal', 'suspected' and 'reduced'. The aim of this study is to provide complementary insights into the current knowledge, and to ultimately provide guidance in the development of new standardized guidelines for the FECRT. First, data were generated using a simulation in which the 'true' drug efficacy (TDE) was evaluated by the FECRT under varying scenarios of sample size, analytic sensitivity of the diagnostic technique, and level of both intensity and aggregation of egg excretion. Second, the obtained data were analyzed with the aim (i) to verify which classification criteria allow for reliable detection of reduced drug efficacy, (ii) to identify the UI methodology that yields the most reliable assessment of drug efficacy (coverage of TDE) and detection of reduced drug efficacy, and (iii) to determine the required sample size and number of eggs counted under the microscope that optimizes the detection of reduced efficacy. Our results confirm that the currently recommended criteria for classifying drug efficacy are the most appropriate. Additionally, the UI methodologies we tested varied in coverage and ability to detect reduced drug efficacy, thus a combination of UI methodologies is recommended to assess the uncertainty across all scenarios of drug efficacy estimates. Finally, based on our model estimates we were able to determine the required number of eggs to count for each sample size, enabling investigators to optimize the probability of correctly classifying a theoretical TDE while minimizing both financial and technical resources. Copyright © 2018 Elsevier B.V. All rights reserved.
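As a small illustration of the quantities being standardized above, the sketch below computes a fecal egg count reduction estimate with a bootstrap percentile uncertainty interval and applies commonly cited classification thresholds (95%/90%, used for some host and drug combinations). Egg counts are simulated from an aggregated negative binomial distribution; the study's simulation design and the other UI methods are not reproduced.

```python
# Sketch of a FECR estimate with a bootstrap percentile uncertainty interval
# and an example efficacy classification. Simulated egg counts only.
import numpy as np

rng = np.random.default_rng(11)
n_animals = 20
# Aggregated (overdispersed) counts; p = k / (k + mean) for numpy's NB
pre = rng.negative_binomial(n=0.7, p=0.7 / (0.7 + 300), size=n_animals)
post = rng.negative_binomial(n=0.7, p=0.7 / (0.7 + 15), size=n_animals)

fecr = 100.0 * (1.0 - post.mean() / pre.mean())

boot = []
for _ in range(5000):
    boot_pre = pre[rng.integers(0, n_animals, n_animals)].mean()
    boot_post = post[rng.integers(0, n_animals, n_animals)].mean()
    boot.append(100.0 * (1.0 - boot_post / boot_pre))
lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"FECR = {fecr:.1f}%  (95% UI {lower:.1f} to {upper:.1f}%)")

# Example classification criteria (thresholds vary by host/drug combination)
if fecr >= 95 and lower >= 90:
    print("efficacy: normal")
elif fecr < 95 and lower < 90:
    print("efficacy: reduced")
else:
    print("efficacy: suspected")
```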
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
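The sketch below reproduces the basic idea of such a simulation: a random event record is generated, and the estimates from momentary time sampling, partial-interval recording, and whole-interval recording are compared against the true proportion of time the event occurred. Bout and interval durations are illustrative, not the parameter grid of the study.

```python
# Simulation sketch comparing three interval sampling methods against the true
# event duration proportion. Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
obs_seconds = 3600                      # 1-h observation period
interval = 30                           # 30-s recording intervals

# Random event bouts at 1-s resolution
event = np.zeros(obs_seconds, dtype=bool)
t = 0
while t < obs_seconds:
    t += int(rng.exponential(60))       # gap between bouts
    bout = int(rng.exponential(20))     # event bout duration
    event[t:t + bout] = True
    t += bout

true_prop = event.mean()
blocks = event[: obs_seconds // interval * interval].reshape(-1, interval)
mts = blocks[:, -1].mean()              # momentary time sampling (end of each interval)
pir = blocks.any(axis=1).mean()         # partial-interval recording
wir = blocks.all(axis=1).mean()         # whole-interval recording

print(f"true={true_prop:.3f}  MTS={mts:.3f}  "
      f"PIR={pir:.3f} (tends to overestimate)  WIR={wir:.3f} (tends to underestimate)")
```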
Kang, Seongmin; Cha, Jae Hyung; Hong, Yoon-Jung; Lee, Daekyeom; Kim, Ki-Hyun; Jeon, Eui-Chan
2018-01-01
This study estimates the optimum sampling cycle for determining the biomass fraction using a statistical method. More than ten samples were collected from each of the three municipal solid waste (MSW) facilities between June 2013 and March 2015, and the biomass fraction was analyzed. The analysis data were grouped into monthly, quarterly, semi-annual, and annual intervals, and the optimum sampling cycle for determining the biomass fraction was estimated. The biomass fraction data did not show a normal distribution; therefore, the non-parametric Kruskal-Wallis test was applied to compare the average values for each sample group. The Kruskal-Wallis test results showed that the average monthly, quarterly, semi-annual, and annual values for all three MSW incineration facilities were equal. Therefore, the biomass fraction at the MSW incineration facilities should be calculated on a yearly cycle, which is the longest of the temporal cycles tested. Copyright © 2017 Elsevier Ltd. All rights reserved.
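The sketch below illustrates the statistical comparison only: repeated biomass fraction measurements are grouped by calendar quarter and compared with the non-parametric Kruskal-Wallis test. The measurements are simulated placeholders, not the facilities' data.

```python
# Sketch of a Kruskal-Wallis comparison of biomass fraction measurements
# grouped by calendar quarter (simulated data).
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(8)
monthly = np.clip(rng.normal(55, 6, 24), 0, 100)   # 24 monthly measurements (%)

months = np.arange(24) % 12
groups = [monthly[(months // 3) == q] for q in range(4)]   # calendar-quarter groups
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
if p > 0.05:
    print("Quarterly averages do not differ; a longer (e.g., annual) cycle may suffice.")
else:
    print("Quarterly averages differ; a shorter sampling cycle is needed.")
```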
NASA Astrophysics Data System (ADS)
Sha, X.; Xu, K.; Bentley, S. J.; Robichaux, P.
2016-02-01
Although many studies of sediment diversions have been conducted on the Mississippi Delta, relatively little attention has been paid to understanding sediment retention and basic cohesive sedimentation processes in receiving basins. Our research evaluates long-term (up to six months) sedimentation processes through various laboratory experiments, especially cohesive sediment settling, consolidation and resuspension and their impacts on sediment retention. Bulk sediment samples were collected from West Bay, near Head of Passes of the Mississippi Delta, and from the Big Mar basin, which receives water and sediment from the Caernarvon Diversion in the upper Breton Sound region of Louisiana, USA. A 230-cm-tall settling column with nine sampling ports at 15 cm intervals was used to measure consolidation for four initial sediment concentrations (10-120 kg m-3) at two salinities (1 ppt and 5 ppt). Samples of sediment slurry were taken from every port at different time intervals up to 15 days or longer (higher concentrations need longer to consolidate) to record concentrations gravimetrically. A 200 cm long tube was connected to a 50 cm long core chamber to accumulate at least a 10 cm thick sediment column for erosion tests. A dual-core Gust Erosion Microcosm System was employed to measure time-series (0.5, 1, 2, 3, 4, 5, 6 months) erodibility at seven shear stress regimes (0.01-0.60 Pa). Our preliminary results show a significant decrease in erodibility with time and at the high concentration (120 g/L). Salinity affected sediment behavior in the consolidation experiments. Our study reveals that more enclosed receiving basins, intermittent openings of diversions, or reduced shear stress due to man-made structures can all potentially reduce cohesive sediment erosion in coastal Louisiana. Further results will be analyzed to determine the model constants, and consolidation rates and corresponding erosional changes will be determined to optimize sediment retention for coastal protection.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point prediction lacks reliability since the uncertainty of the predictions is not quantified, which limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is the network's parallel computing architecture with many degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain an optimal set of weights and biases, and in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1; most of the observed flows lie within the constructed interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that, when the ensemble mean is taken as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point-forecast ANNs. (Fig. 1: Prediction interval for a selected hydrograph.)
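The sketch below illustrates the interval-quality measures referenced above: given an ensemble of flow forecasts, it computes the prediction interval, the percentage of observations it contains, and its average width. The ensemble and observations are synthetic; the GA-trained ANN ensemble is not reproduced.

```python
# Sketch of evaluating an ensemble prediction interval: coverage of observed
# flows and average interval width. Synthetic ensemble and observations.
import numpy as np

rng = np.random.default_rng(9)
n_members, n_times = 50, 200
observed = 100 + 30 * np.sin(np.linspace(0, 6, n_times)) + rng.normal(0, 5, n_times)
ensemble = observed + rng.normal(0, 8, size=(n_members, n_times))   # perturbed forecasts

lower = np.percentile(ensemble, 2.5, axis=0)
upper = np.percentile(ensemble, 97.5, axis=0)

coverage = np.mean((observed >= lower) & (observed <= upper)) * 100
avg_width = np.mean(upper - lower)
print(f"Coverage of observations: {coverage:.1f}%  average interval width: {avg_width:.1f} m^3/s")
```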
Monitoring of trace elements in breast milk: sampling and measurement procedures.
Spĕvácková, V; Rychlík, S; Cejchanová, M; Spĕvácek, V
2005-06-01
The aims of this study were to test analytical procedures for the determination of Cd, Cu, Mn, Pb, Se and Zn in breast milk and to establish optimum sampling conditions for monitoring purposes. Two population groups were analysed: (1) seven women from Prague whose breast milk was sampled on days 1, 2, 3, 4, 10, 20 and 30 after delivery; (2) 200 women from four (two industrial and two rural) regions whose breast milk was sampled at defined intervals. All samples were mineralised in a microwave oven in a mixture of HNO3 + H2O2 and analysed by atomic absorption spectrometry. Conditions for the measurement of the elements under study (i.e. electrothermal atomisation for Cd, Mn and Pb, the flame technique for Cu and Zn, and the hydride generation technique for Se) were optimized. Using the optimized parameters, the analysis was performed and the following conclusions were drawn: the concentrations of zinc and manganese decreased very sharply over the first days, that of copper slightly increased within the first two days and then slightly decreased, and that of selenium did not change significantly. Partial "stabilisation" was achieved after the second decade. No correlation among the elements was found. A significant difference between whole and skim milk was only found for selenium (26% rel. higher in whole milk). The majority of cadmium and lead concentrations were below the detection limit of the method (0.3 microg x l(-1) and 8.2 microg x l(-1), respectively, as calculated for the original sample). For biological monitoring, maintaining consistent sampling conditions, and especially the time of sampling, is crucial.
Validation of a method for the quantitation of ghrelin and unacylated ghrelin by HPLC.
Staes, Edith; Rozet, Eric; Ucakar, Bernard; Hubert, Philippe; Préat, Véronique
2010-02-05
An HPLC/UV method was first optimized for the separation and quantitation of human acylated and unacylated (or des-acyl) ghrelin from aqueous solutions. This method was validated by an original approach using accuracy profiles based on tolerance intervals for the total error measurement. The concentration range that achieved adequate accuracy extended from 1.85 to 59.30 microM and 1.93 to 61.60 microM for acylated and unacylated ghrelin, respectively. Then, the optimal temperature, pH and buffer for sample storage were determined. Unacylated ghrelin was found to be stable in all conditions tested. At 37 degrees C, acylated ghrelin was stable at pH 4 but unstable at pH 7.4; the main degradation product was unacylated ghrelin. Finally, this validated HPLC/UV method was used to evaluate the binding of acylated and unacylated ghrelin to liposomes.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
The right time to learn: mechanisms and optimization of spaced learning
Smolen, Paul; Zhang, Yili; Byrne, John H.
2016-01-01
For many types of learning, spaced training, which involves repeated long inter-trial intervals, leads to more robust memory formation than does massed training, which involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently have data begun to delineate the underlying cellular and molecular mechanisms of spaced training, and we review these theories and data here. Computational models of the implicated signalling cascades have predicted that spaced training with irregular inter-trial intervals can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. PMID:26806627
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristics (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
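As a hedged illustration of the simplest ("interval"/box) case discussed above: for a single linear constraint with nonnegative variables, the robust counterpart just inflates each coefficient by the half-width of its uncertainty interval. The data, the set size psi and the objective are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Nominal constraint a^T x <= b with coefficient perturbations |delta_j| <= psi * a_hat_j.
# For x >= 0, the box-uncertainty robust counterpart is (a + psi * a_hat)^T x <= b.
a     = np.array([2.0, 3.0])    # nominal coefficients (illustrative)
a_hat = np.array([0.5, 0.4])    # perturbation amplitudes (illustrative)
b     = 10.0
psi   = 1.0                     # size of the box uncertainty set

c = np.array([-5.0, -4.0])      # maximize 5*x1 + 4*x2  ->  minimize c^T x
res = linprog(c, A_ub=[a + psi * a_hat], b_ub=[b],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # robust solution and objective value
```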
van Oostrum, Jeroen M; Van Houdenhoven, Mark; Vrielink, Manon M J; Klein, Jan; Hans, Erwin W; Klimek, Markus; Wullink, Gerhard; Steyerberg, Ewout W; Kazemier, Geert
2008-11-01
Hospitals that perform emergency surgery during the night (e.g., from 11:00 pm to 7:30 am) face decisions on optimal operating room (OR) staffing. Emergency patients need to be operated on within a predefined safety window to decrease morbidity and improve their chances of full recovery. We developed a process to determine the optimal OR team composition during the night, such that staffing costs are minimized, while providing adequate resources to start surgery within the safety interval. A discrete event simulation in combination with modeling of safety intervals was applied. Emergency surgery was allowed to be postponed safely. The model was tested using data from the main OR of Erasmus University Medical Center (Erasmus MC). Two outcome measures were calculated: violation of safety intervals and frequency with which OR and anesthesia nurses were called in from home. We used the following input data from Erasmus MC to estimate distributions of all relevant parameters in our model: arrival times of emergency patients, durations of surgical cases, length of stay in the postanesthesia care unit, and transportation times. In addition, surgeons and OR staff of Erasmus MC specified safety intervals. Reducing in-house team members from 9 to 5 increased the fraction of patients treated too late by 2.5% as compared to the baseline scenario. Substantially more OR and anesthesia nurses were called in from home when needed. The use of safety intervals benefits OR management during nights. Modeling of safety intervals substantially influences the number of emergency patients treated on time. Our case study showed that by modeling safety intervals and applying computer simulation, an OR can reduce its staff on call without jeopardizing patient safety.
Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro
2015-12-01
An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixture in the 10 000-4000 cm(-1) region at 4 cm(-1) intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus the spectral window position obtained using the PLSR models revealed that the 8680–8364 cm(-1) region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680–8364 cm(-1) region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm(-1) and that due to glucose at 8364 cm(-1) was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated in the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective in baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
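A brief sketch of the SNV pretreatment restricted to one candidate 80-channel window, the basic operation repeated across windows in the search described above; the function names and synthetic spectra are assumptions for illustration.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) by its
    own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mu) / sd

def windowed_snv(spectra, start, width=80):
    """SNV computed only on an 80-channel slice, as in the moving-window search."""
    return snv(np.asarray(spectra)[:, start:start + width])

# Illustrative data: 20 spectra x 1501 channels with a sloping baseline
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.01, size=(20, 1501)) + np.linspace(0.0, 1.0, 1501)
corrected = windowed_snv(raw, start=330)   # one candidate window position
print(corrected.shape)                     # (20, 80)
```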
Optimal control of lift/drag ratios on a rotating cylinder
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Burns, John A.
1992-01-01
We present the numerical solution to a problem of maximizing the lift to drag ratio by rotating a circular cylinder in a two-dimensional viscous incompressible flow. This problem is viewed as a test case for the newly developing theoretical and computational methods for control of fluid dynamic systems. We show that the time averaged lift to drag ratio for a fixed finite-time interval achieves its maximum value at an optimal rotation rate that depends on the time interval.
NASA Astrophysics Data System (ADS)
junfeng, Li; zhengying, Wei
2017-11-01
Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, including laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results showed that the relative density changed with VED and that the optimized process interval is 55-60 J/mm3. Furthermore, when laser power, scan speed and hatch distance were compared using the Taguchi method, it was found that the scan speed had the greatest effect on the relative density. Comparison of the cross-sectional microstructures of specimens built at different scan speeds showed similar characteristics: all consisted of needle-like martensite distributed in the β matrix, but the microstructure became finer with increasing scan speed, while lower scan speeds led to coarsening of the microstructure.
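The abstract does not state the exact VED formula; a commonly used definition divides laser power by the product of scan speed, hatch distance and layer thickness, as in this sketch. The parameter values are invented, chosen only so the result lands inside the reported 55-60 J/mm3 window.

```python
def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Common SLM volume energy density, VED = P / (v * h * t), in J/mm^3.
    Layer thickness is included here as is customary, although the abstract
    only lists power, scan speed and hatch distance."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative parameter set giving roughly 58 J/mm^3
print(volume_energy_density(power_w=200.0, scan_speed_mm_s=1150.0,
                            hatch_mm=0.10, layer_mm=0.03))
```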
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
In this research, the Particle Swarm Optimization (PSO) algorithm is used to optimize the feature weights of the Voting Feature Interval 5 (VFI5) algorithm, so that a model combining PSO with VFI5 can be obtained. Optimizing the feature weights for diabetes or dyspepsia data is considered important because it is closely related to people's lives, so any inaccuracy in determining the most dominant feature weights in the data could lead to death. With the PSO algorithm, accuracy in fold 1 increased from 92.31% to 96.15%, a gain of 3.8%; in fold 2, the VFI5 accuracy of 92.52% was also obtained with PSO, so accuracy was unchanged; and in fold 3, accuracy increased from 85.19% to 96.29%, a gain of about 11%. Across the three trials, total accuracy increased by 14%. In general, the PSO algorithm succeeded in increasing the accuracy in several folds; it can therefore be concluded that PSO is well suited to optimizing the VFI5 classification algorithm.
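To make the optimization step concrete, here is a minimal, generic particle swarm sketch for adjusting feature weights in [0, 1] against an arbitrary fitness function (for instance, cross-validated VFI5 accuracy); the hyperparameters, helper names and toy fitness are assumptions, not the authors' implementation.

```python
import numpy as np

def pso_optimize(fitness, n_features, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `fitness` over weight vectors in [0, 1]^n_features."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, n_features))            # positions (weights)
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, float(pbest_f.max())

# Toy fitness: reward weights close to a hidden "ideal" weighting
ideal = np.array([0.9, 0.1, 0.5, 0.7])
best_w, best_f = pso_optimize(lambda p: -np.sum((p - ideal) ** 2), n_features=4)
print(best_w.round(2), round(best_f, 4))
```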
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Factors influencing preclinical in vivo evaluation of mumps vaccine strain immunogenicity
Halassy, B; Kurtović, T; Brgles, M; Lang Balija, M; Forčić, D
2015-01-01
Immunogenicity testing in animals is a necessary preclinical assay for demonstrating vaccine efficacy, the results of which are often the basis for the decision whether to proceed with or withdraw the further development of a novel vaccine candidate. However, in vivo assays are rarely, if at all, optimized and validated. Here we clearly demonstrate the importance of in vivo assay (mumps virus immunogenicity testing in guinea pigs) optimization for obtaining reliable results and the suitability of a fractional factorial design of experiments (DoE) for such a purpose. Using a DoE with resolution IV (2IV(4-1)), we revealed that the parameters significantly increasing assay sensitivity were the interval between animal immunizations, followed by the body weight of the experimental animals. The quantity (0 versus 2%) of the stabilizer (fetal bovine serum, FBS) in the sample was shown to be a non-influencing parameter in the DoE setup. However, a separate experiment investigating only the FBS influence, performed with the other parameters set optimally, showed that FBS also influences the results of the immunogenicity assay. These findings indicated that (a) factors with a strong influence on the measured outcome can hide the effects of parameters with modest/low influence and (b) the matrix of mumps virus samples to be compared for immunogenicity must be identical for reliable virus immunogenicity comparison. Finally, the three mumps vaccine strains widely used for decades in licensed vaccines were for the first time compared in an animal model, and the results obtained were in line with their reported immunogenicity in the human population, supporting the predictive power of the optimized in vivo assay. PMID:26376015
Pore water sampling in acid sulfate soils: a new peeper method.
Johnston, Scott G; Burton, Edward D; Keene, Annabelle F; Bush, Richard T; Sullivan, Leigh A; Isaacson, Lloyd
2009-01-01
This study describes the design, deployment, and application of a modified equilibration dialysis device (peeper) optimized for sampling pore waters in acid sulfate soils (ASS). The modified design overcomes the limitations of traditional-style peepers when sampling firm ASS materials over relatively large depth intervals. The new peeper device uses removable, individual cells of 25 mL volume housed in a 1.5 m long rigid, high-density polyethylene rod. The rigid housing structure allows the device to be inserted directly into relatively firm soils without requiring a supporting frame. The use of removable cells eliminates the need for a large glove-box after peeper retrieval, thus simplifying physical handling. Removable cells are easily maintained in an inert atmosphere during sample processing and the 25-mL sample volume is sufficient for undertaking multiple analyses. A field evaluation of equilibration times indicates that 32 to 38 d of deployment was necessary. Overall, the modified method is simple and effective and well suited to the acquisition and processing of redox-sensitive pore water profiles >1 m deep in acid sulfate soils or other firm wetland soils.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators in nationally representative household surveys remains unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
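For orientation, a textbook approximation of the same idea: the simple-random-sampling size for a prevalence estimate inflated by the design effect implied by the intra-class correlation. This is not the Bayesian model used in the paper, and the numbers are illustrative.

```python
from math import ceil
from scipy.stats import norm

def cluster_survey_sample_size(p, d, icc, m, conf=0.95):
    """Approximate number of children needed to estimate prevalence `p` with
    absolute precision `d` under cluster sampling with average cluster take
    `m`, using deff = 1 + (m - 1) * ICC."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_srs = z**2 * p * (1 - p) / d**2       # simple random sampling size
    deff = 1 + (m - 1) * icc                # design effect
    return ceil(n_srs * deff)

# Illustrative inputs: 8% prevalence, +/-1% precision, ICC 0.05, 20 per cluster
print(cluster_survey_sample_size(p=0.08, d=0.01, icc=0.05, m=20))
```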
Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W
2016-05-01
In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs to the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers to adjust production patterns of regional industry and local policies considering interactions of water quality requirements, economic benefit, and industry structure.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with other previously elaborated heuristic algorithms based on evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.
Dobryakov, A L; Kovalenko, S A; Weigel, A; Pérez-Lustres, J L; Lange, J; Müller, A; Ernsting, N P
2010-11-01
A setup for pump/supercontinuum-probe spectroscopy is described which (i) is optimized to cancel fluctuations of the probe light by single-shot referencing, and (ii) extends the probe range into the near-UV (1000-270 nm). Reflective optics allow a 50 μm spot size in the sample and upon entry into two separate spectrographs. The correlation γ(same) between sample and reference readings of the probe light level at every pixel exceeds 0.99, compared to γ(consec)<0.92 reported for consecutive referencing. Statistical analysis provides the confidence interval of the induced optical density, ΔOD. For demonstration we first examine a dye (Hoechst 33258) bound in the minor groove of double-stranded DNA. A weak 1.1 ps spectral oscillation in the fluorescence region, assigned to DNA breathing, is shown to be significant. A second example concerns the weak vibrational structure around t=0 which reflects stimulated Raman processes. With 1% fluctuations of probe power, baseline noise for a transient absorption spectrum becomes 25 μOD rms in 1 s at 1 kHz, allowing resonance Raman spectra of flavine adenine dinucleotide in the S(0) and S(1) states to be recorded.
Risch, Martin; Nydegger, Urs; Risch, Lorenz
2017-01-01
In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to establish age-related reference intervals for the laboratory parameters investigated. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics for diagnosing different conditions, identify the prevalence of occult disease in subjectively healthy individuals, and identify prognostic factors for the investigated outcomes, including mortality. To obtain better grounds to justify clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors, because individuals without any medical condition and any medication are rare in older adulthood. Reference intervals obtained from such individuals cannot be considered representative for seniors in a status of age-specific normal health. In addition to the established methods for determining reference intervals, this longitudinal study utilizes a unique approach, in that survival and long-term well-being are taken as indicators of health in seniors. This approach is expected to provide robust and representative reference intervals that are obtained from an adequate reference population and not a collective of highly selected individuals. The present study was registered under International Standard Randomized Controlled Trial Number registry: ISRCTN53778569.
Grey fuzzy optimization model for water quality management of a river system
NASA Astrophysics Data System (ADS)
Karmakar, Subhankar; Mujumdar, P. P.
2006-07-01
A grey fuzzy optimization model is developed for water quality management of river system to address uncertainty involved in fixing the membership functions for different goals of Pollution Control Agency (PCA) and dischargers. The present model, Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of PCA and dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels are modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, a closed and bounded interval with known lower and upper bounds but unknown distribution information. The model provides flexibility for PCA and dischargers to specify their aspirations independently, as the membership parameters for different membership functions, specified for different imprecise goals are interval grey numbers in place of a deterministic real number. In the final solution optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme considering technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with case study of the Tunga-Bhadra river system in India.
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
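As a small worked example of the kind of calculation discussed, the per-group sample size for a two-sample t-test can be obtained from standard software; the effect size, alpha and power below are illustrative, and the statsmodels call is one of several equivalent tools rather than the paper's specific programs.

```python
from statsmodels.stats.power import TTestIndPower

# Cohen's d = (difference in means) / (common standard deviation)
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(round(n_per_group, 1))   # roughly 26 animals per group
```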
Jović, Ozren
2016-12-15
A novel method for quantitative prediction and variable selection on spectroscopic data, called Durbin-Watson partial least-squares regression (dwPLS), is proposed in this paper. The idea is to inspect serial correlation in infrared data, which is known to consist of highly correlated neighbouring variables. The method selects only those variables whose intervals have a lower Durbin-Watson statistic (dw) than a certain optimal cutoff. For each interval, dw is calculated on a vector of regression coefficients. Adulteration of cold-pressed linseed oil (L), a well-known nutrient beneficial to health, is studied in this work by mixing it with cheaper oils: rapeseed oil (R), sesame oil (Se) and sunflower oil (Su). The samples for each botanical origin of oil vary with respect to producer, content and geographic origin. The results obtained indicate that MIR-ATR combined with dwPLS could be applied to the quantitative determination of edible-oil adulteration. Copyright © 2016 Elsevier Ltd. All rights reserved.
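A short sketch of the Durbin-Watson statistic computed per interval on a coefficient vector, the quantity used for variable selection above; the interval width, helper names and synthetic coefficients are assumptions.

```python
import numpy as np

def durbin_watson(v):
    """Durbin-Watson statistic of a sequence; values well below 2 indicate
    positive serial correlation between neighbouring elements."""
    v = np.asarray(v, dtype=float)
    return np.sum(np.diff(v) ** 2) / np.sum(v ** 2)

def interval_dw(coefs, width=50):
    """dw per contiguous interval of the regression-coefficient vector;
    intervals with the lowest dw (below a chosen cutoff) would be retained."""
    return [durbin_watson(coefs[i:i + width])
            for i in range(0, len(coefs) - width + 1, width)]

rng = np.random.default_rng(0)
smooth = np.convolve(rng.normal(size=400), np.ones(25) / 25, mode='same')
noisy = rng.normal(size=400)
print(round(durbin_watson(smooth), 2), round(durbin_watson(noisy), 2))
```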
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
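For orientation only, a crude large-count sketch of the net-count definition above with a normal-approximation interval; the paper's contribution is the exact Neyman construction from the Poisson distributions, which this simple approximation does not reproduce.

```python
import numpy as np
from scipy.stats import norm

def net_count_approx_ci(gross, blank, irr, conf=0.95):
    """Net count OC = gross - IRR * blank with a normal-approximation interval.
    Variances of independent Poisson counts add: Var(OC) = gross + IRR^2 * blank."""
    oc = gross - irr * blank
    half = norm.ppf(1 - (1 - conf) / 2) * np.sqrt(gross + irr**2 * blank)
    return oc, (oc - half, oc + half)

print(net_count_approx_ci(gross=58, blank=12, irr=3))   # illustrative counts
```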
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
Feeding Intervals in Premature Infants ≤1750 g: An Integrative Review.
Binchy, Áine; Moore, Zena; Patton, Declan
2018-06-01
The timely establishment of enteral feeds and a reduction in the number of feeding interruptions are key to achieving optimal nutrition in premature infants. Nutritional guidelines vary widely regarding feeding regimens and there is no widely accepted consensus on the optimal feeding interval. To critically examine the evidence to determine whether there is a relationship between feeding intervals and feeding outcomes in premature infants. A systematic review of the literature in the following databases: PubMed, CINAHL, Embase and the Cochrane Library. The search strategy used the terms infant premature, low birth weight, enteral feeding, feed tolerance and feed intervals. Search results yielded 10 studies involving 1269 infants (birth weight ≤1750 g). No significant differences in feed intolerance, growth, or incidence of necrotizing enterocolitis were observed. Evidence suggests that infants fed at 2-hourly intervals reached full feeds faster than those fed at 3-hourly intervals, had fewer days on parenteral nutrition, and fewer days in which feedings were withheld. Decreases in the volume of gastric residuals and in feeding interruptions were observed in the infants fed at 3-hourly intervals compared with those who were fed continuously. Reducing the feed interval from 3-hourly to 2-hourly increases nurse workload, yet may improve feeding outcomes by reducing the time to achieve full enteral feeding. Studies varied greatly in the definition and management of feeding intolerance and in how outcomes were measured, analyzed, and reported. The term "intermittent" is used widely but can refer to a 2- or 3-hourly interval.
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [-0.78, 0.00] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
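A one-line computation of the secant Poisson's ratio defined above, applied to illustrative strain data (the factor of 0.5 is invented, chosen only to mimic a constant ratio of about -0.5 over the strain interval).

```python
import numpy as np

def secant_poissons_ratio(eps_long, eps_lat):
    """Negative ratio of lateral to longitudinal engineering strain,
    evaluated at finite applied strains."""
    return -np.asarray(eps_lat) / np.asarray(eps_long)

# Auxetic behaviour: the lattice expands laterally when stretched
eps_long = np.linspace(0.01, 0.20, 5)
eps_lat = 0.5 * eps_long
print(secant_poissons_ratio(eps_long, eps_lat))   # constant -0.5
```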
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into the two categories of simulation and optimization models, whose calibration has been addressed in the literature, where great efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is actually developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of parameters which are non-dominated regarding both uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, the answering of which is our research motivation: Given that in SUFI-2 the final selection is based on the two measures or objectives, and knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm was proposed in which the uncertainty measures were considered as two objectives to find non-dominated interval estimations of parameters by means of coupling Monte Carlo simulation and Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results, it was revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation from the Pareto frontier.
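A compact sketch of the two uncertainty measures named above, computed from an ensemble-derived 95PPU band; the percentile construction, variable names and synthetic data are illustrative assumptions.

```python
import numpy as np

def p_and_r_factor(obs, lower, upper):
    """p-factor: fraction of observations enclosed by the 95PPU (ideally near 1).
    r-factor: mean band width divided by the standard deviation of the
    observations (ideally small)."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    p_factor = float(np.mean((obs >= lower) & (obs <= upper)))
    r_factor = float(np.mean(upper - lower) / np.std(obs, ddof=1))
    return p_factor, r_factor

rng = np.random.default_rng(2)
simulations = rng.normal(50.0, 8.0, size=(200, 120))       # ensemble runs
lower, upper = np.percentile(simulations, [2.5, 97.5], axis=0)
observed = rng.normal(50.0, 8.0, size=120)
print(p_and_r_factor(observed, lower, upper))
```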
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
Optimizing some 3-stage W-methods for the time integration of PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing some additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
John, Emily E; Nekouei, Omid; McClure, J T; Cameron, Marguerite; Keefe, Greg; Stryhn, Henrik
2018-06-01
Bulk tank milk (BTM) samples are used to determine the infection status and estimate dairy herd prevalence for bovine leukaemia virus (BLV) using an antibody ELISA assay. BLV ELISA variability between samples from the same herd or from different herds has not been investigated over long time periods. The main objective of this study was to determine the within-herd and between-herd variability of a BTM BLV ELISA assay over 1-month, 3-month, and 3-year sampling intervals. All of the Canadian Maritime region dairy herds (n = 523) that were active in 2013 and 2016 were included (83.9% and 86.9% of total herds in 2013 and 2016, respectively). BLV antibody levels were measured in three BTM samples collected at 1-month intervals in early 2013 as well as two BTM samples collected over a 3-month interval in early 2016. Random-effects models, with fixed effects for sample replicate and province and random effects for herd, were used to estimate the variability between BTM samples from the same herd and between herds for 1-month, 3-month, and 3-year sampling intervals. The majority of variability of BTM BLV ELISA results was seen between herds (1-month, 6.792 ± 0.533; 3-month, 7.806 ± 0.652; 3-year, 6.222 ± 0.528). Unexplained variance between samples from the same herd, on square-root scale, was greatest for the 3-year (0.976 ± 0.104), followed by the 1-month (0.611 ± 0.035) then the 3-month (0.557 ± 0.071) intervals. Variability of BTM antibody levels within the same herd was present but was much smaller than the variability between herds, and was greatest for the 3-year sampling interval. The 3-month sampling interval resulted in the least variability and is appropriate to use for estimating the baseline level of within-herd prevalence for BLV control programs. Knowledge of the baseline variability and within-herd prevalence can help to determine effectiveness of control programs when BTM sampling is repeated at longer intervals. Copyright © 2018 Elsevier B.V. All rights reserved.
Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration
Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.
2009-01-01
This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m, in 30 m increments, for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
Balasubramonian, Rajeev [Sandy, UT]; Dwarkadas, Sandhya [Rochester, NY]; Albonesi, David [Ithaca, NY]
2009-02-10
In a processor having multiple clusters which operate in parallel, the number of clusters in use can be varied dynamically. At the start of each program phase, each configuration option is run for an interval to determine the optimal configuration, which is used until the next phase change is detected. The optimum instruction interval is determined by starting with a minimum interval and doubling it until a low stability factor is reached.
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
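A minimal sketch of the underlying calculation: the smallest n for which a z-based interval for a mean (known sigma) has total width no greater than a target; a t-based interval, as in the article's title, needs a slightly larger n found by iteration. Values are illustrative.

```python
from math import ceil
from scipy.stats import norm

def n_for_ci_width(sigma, width, conf=0.95):
    """n >= (2 * z * sigma / width)^2 for a two-sided z interval of total
    width `width` around a sample mean."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((2 * z * sigma / width) ** 2)

print(n_for_ci_width(sigma=10.0, width=5.0))   # 62 observations
```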
NASA Astrophysics Data System (ADS)
Siswanto, A.; Kurniati, N.
2018-04-01
An oil and gas company has 2,268 oil and gas wells. Well Barrier Elements (WBE) are installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the Christmas Tree Valves, which consist of four valves, i.e. Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). The current WBE Preventive Maintenance (PM) program follows the schedule suggested in the manual. Corrective Maintenance (CM) is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper analyzes the failure behavior and reliability of the components based on historical data. The optimal PM interval is determined in order to minimize the total maintenance cost per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. On average across all components, implementing the suggested intervals reduces cost by 52%, improves reliability by 4% and increases availability by 5%.
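A hedged sketch of one standard way to pick such an interval: the age-replacement model, which minimizes expected maintenance cost per unit time under an assumed failure distribution. The Weibull parameters and cost ratio below are invented; the paper's fitted values and exact model are not given in the abstract.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def optimal_pm_interval(beta, eta, c_pm, c_cm, t_max):
    """Age replacement: minimize C(T) = [c_pm*R(T) + c_cm*F(T)] / E[cycle length]
    for Weibull-distributed time to failure (shape beta, scale eta in days)."""
    R = lambda t: np.exp(-(t / eta) ** beta)          # reliability function
    def cost_rate(T):
        expected_cycle, _ = quad(R, 0.0, T)           # expected length of a cycle
        return (c_pm * R(T) + c_cm * (1.0 - R(T))) / expected_cycle
    res = minimize_scalar(cost_rate, bounds=(1.0, t_max), method='bounded')
    return res.x, res.fun

T_opt, c_opt = optimal_pm_interval(beta=2.5, eta=1500.0,
                                   c_pm=1.0, c_cm=10.0, t_max=3000.0)
print(round(T_opt), round(c_opt, 5))   # optimal PM interval in days, cost rate
```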
Yao, X; Anderson, D L; Ross, S A; Lang, D G; Desai, B Z; Cooper, D C; Wheelan, P; McIntyre, M S; Bergquist, M L; MacKenzie, K I; Becherer, J D; Hashim, M A
2008-01-01
Background and purpose: Drug-induced prolongation of the QT interval can lead to torsade de pointes, a life-threatening ventricular arrhythmia. Finding, among the plethora of available options, assays that reliably predict this serious adverse effect in humans remains a challenging issue for the discovery and development of drugs. The purpose of the present study was to develop and verify a reliable and relatively simple approach for assessing, during preclinical development, the propensity of drugs to prolong the QT interval in humans. Experimental approach: Sixteen marketed drugs from various pharmacological classes with a known incidence (or lack thereof) of QT prolongation in humans were examined in a hERG (human ether-a-go-go-related gene) patch-clamp assay and an anaesthetized guinea-pig assay for QT prolongation using specific protocols. Drug concentrations in perfusates from hERG assays and plasma samples from guinea-pigs were determined using liquid chromatography-mass spectrometry. Key results: Various pharmacological agents that inhibit hERG currents prolong the QT interval in anaesthetized guinea-pigs in a manner similar to that seen in humans and at comparable drug exposures. Several compounds not associated with QT prolongation in humans failed to prolong the QT interval in this model. Conclusions and implications: Analysis of hERG inhibitory potency in conjunction with drug exposures and QT interval measurements in anaesthetized guinea-pigs can reliably predict, during preclinical drug development, the risk of human QT prolongation. A strategy is proposed for mitigating the risk of QT prolongation of new chemical entities during early lead optimization. PMID:18587422
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program for computing angular-momentum biases for the reaction wheels used to point a spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, where it prevents operation of reaction wheels at unsafely high speeds while minimizing time spent in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in the bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is a hard constraint and the cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers and thereby reduce the computational load, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
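As a rough illustration of the penalized simplex-search formulation described above, the sketch below optimizes four wheel biases against a toy momentum profile. The wheel geometry, momentum history, speed limits, and penalty weights are all assumptions standing in for the Cassini-specific model; only the overall structure (hard limit as a heavy penalty, soft cost below a low-speed threshold, Nelder-Mead search) follows the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Assumed spacecraft angular-momentum history over the profile, 200 time steps.
H_body = rng.normal(0.0, 500.0, size=(200, 3))
# Assumed 3x4 wheel-axis geometry (three orthogonal wheels plus one skewed wheel).
D = np.array([[1.0, 0.0, 0.0, 0.577],
              [0.0, 1.0, 0.0, 0.577],
              [0.0, 0.0, 1.0, 0.577]])
W_MAX, W_LOW = 2000.0, 300.0      # assumed hard speed limit and low-speed threshold

def cost(bias):
    # Wheel momentum profile: initial biases plus the momentum the wheels must absorb.
    wheels = bias + H_body @ np.linalg.pinv(D).T
    over = np.maximum(np.abs(wheels) - W_MAX, 0.0)   # hard limit as a heavy penalty
    low = np.maximum(W_LOW - np.abs(wheels), 0.0)    # soft cost for dwelling at low speed
    return 1e6 * np.sum(over ** 2) + np.sum(low ** 2)

res = minimize(cost, x0=np.full(4, 800.0), method="Nelder-Mead",
               options={"xatol": 1e-2, "fatol": 1e-2, "maxiter": 5000})
print("optimized biases:", np.round(res.x, 1), " cost:", round(res.fun, 1))
```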
Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21
ERIC Educational Resources Information Center
Oranje, Andreas
2006-01-01
Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
Prieto-Blanco, M C; Moliner-Martínez, Y; López-Mahía, P; Campíns-Falcó, P
2012-07-27
A quick, miniaturized and on-line method has been developed for the determination in water of the predominant homologue of benzalkonium chloride, dodecyl dimethyl benzyl ammonium chloride or lauralkonium chloride (C(12)-BAK). The method is based on the formation of an ion-pair in both in-tube solid-phase microextraction (IT-SPME) and capillary liquid chromatography. The IT-SPME optimization required the study of the length and nature of the capillary stationary phase and of the processed sample volume. Because of the surfactant character of the analyte, both the extracting and the replacing solvents played a decisive role in the optimized IT-SPME procedure. Conditioning the capillary with the mobile phase containing the counter ion (acetate), adding an organic additive (tetrabutylammonium chloride) to the sample, and using a water/methanol mixture as the replacing solvent (processed just before the valve is switched to the inject position) gave good retention-time precision and a narrow peak for C(12)-BAK. A reversed-phase capillary column based on TiO(2) and a mobile phase containing ammonium acetate at pH 5.0, to control the interactions of the cationic surfactant with the titania surface, were proposed. The optimized procedure provided adequate linearity, accuracy and precision over the concentration interval of 1.5-300 μg L(-1). The limit of detection (LOD) was 0.5 μg L(-1) using diode array detection (DAD). The applicability of the proposed IT-SPME-capillary LC method has been assessed in several water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Akgöz, Ayça; Akata, Deniz; Hazırolan, Tuncay; Karçaaltıncaba, Muşturay
2014-01-01
PURPOSE We aimed to evaluate the visibility of coronary arteries and bypass-grafts in patients who underwent dual source computed tomography (DSCT) angiography without heart rate (HR) control and to determine optimal intervals for image reconstruction. MATERIALS AND METHODS A total of 285 consecutive cases who underwent coronary (n=255) and bypass-graft (n=30) DSCT angiography at our institution were identified retrospectively. Patients with atrial fibrillation were excluded. Ten datasets in 10% increments were reconstructed in all patients. On each dataset, the visibility of coronary arteries was evaluated using the 15-segment American Heart Association classification by two radiologists in consensus. RESULTS Mean HR was 76±16.3 bpm (range, 46–127 bpm). All coronary segments could be visualized in 277 patients (97.19%). On a segment basis, 4265 of 4275 (99.77%) coronary artery segments were visible. All segments of 56 bypass-grafts in 30 patients were visible (100%). Total mean segment visibility scores of all coronary arteries were highest at the 70%, 40%, and 30% intervals for all HRs. The optimal reconstruction intervals to visualize the segments of all three coronary arteries, in descending order, were the 70%, 60%, 80%, and 30% intervals in patients with a mean HR <70 bpm; the 40%, 70%, and 30% intervals in patients with a mean HR of 70–100 bpm; and the 40%, 50%, and 30% intervals in patients with a mean HR >100 bpm. CONCLUSION Without beta-blocker administration, DSCT coronary angiography offers excellent visibility of vascular segments using both end-systolic and mid-late diastolic reconstructions at HRs up to 100 bpm, and only end-systolic reconstructions at HRs over 100 bpm. PMID:24834490
Teglia, Carla M; Gil García, María D; Galera, María Martínez; Goicoechea, Héctor C
2014-08-01
When determining endogenous compounds in biological samples, the lack of blank or analyte-free matrix samples requires the use of alternative strategies for calibration and quantitation. This article deals with the development, optimization and validation of a high performance liquid chromatography method for the determination of retinoic acid in plasma, obtaining at the same time information about its isomers and taking into account the basal concentration of these endobiotica. An experimental design was used for the optimization of three variables (mobile phase composition, flow rate and column temperature) through a central composite design. Four responses were selected for optimization purposes: area under the peaks, number of peaks, analysis time and resolution between the first principal peak and the following one. The optimum conditions resulted in a mobile phase consisting of methanol 83.4% (v/v), acetonitrile 0.6% (v/v) and acid aqueous solution 16.0% (v/v); a flow rate of 0.68 mL min(-1); and a column temperature of 37.10 °C. Detection was performed at 350 nm by a diode array detector. The method was validated following a holistic approach that included not only the classical parameters related to method performance but also the robustness and the expected proportion of acceptable results lying inside predefined acceptability intervals, i.e., the uncertainty of measurements. The method validation results indicated high selectivity and good precision characteristics, studied at four concentration levels, with RSD less than 5.0% for retinoic acid (less than 7.5% at the LOQ concentration level) in intra- and inter-assay precision studies. Linearity was demonstrated over a range from 0.00489 to 15.109 ng mL(-1) of retinoic acid, and the recovery, which was studied at four different fortification levels in human plasma samples, varied from 99.5% to 106.5% for retinoic acid. The applicability of the method was demonstrated by determining retinoic acid and obtaining information about its isomers in human and frog plasma samples from different origins. Copyright © 2014 Elsevier B.V. All rights reserved.
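The experimental-design step described above (a three-factor central composite design with a fitted response surface) can be prototyped as follows. This is a generic sketch: the axial distance, the number of center points, and the synthetic response stand in for the authors' actual design and measurements.

```python
import itertools
import numpy as np

# Coded levels for a 3-factor central composite design (assumed rotatable, alpha = 1.682).
alpha = 1.682
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
center = np.zeros((3, 3))                       # assumed three center points
design = np.vstack([factorial, axial, center])  # 17 runs in coded units

def quadratic_terms(X):
    """Design matrix with intercept, linear, interaction and squared terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

# y would be a measured response (e.g., peak area); random numbers stand in here.
y = np.random.default_rng(2).normal(size=len(design))
coef, *_ = np.linalg.lstsq(quadratic_terms(design), y, rcond=None)
print("fitted response-surface coefficients:", np.round(coef, 3))
```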
Lin, Ping-I; Martin, Eden R; Browning-Large, Carrie A; Schmechel, Donald E; Welsh-Bohmer, Kathleen A; Doraiswamy, P Murali; Gilbert, John R; Haines, Jonathan L; Pericak-Vance, Margaret A
2006-07-01
Previous linkage studies have suggested that chromosome 12 may harbor susceptibility genes for late-onset Alzheimer disease (LOAD). No risk genes on chromosome 12 have been conclusively identified yet. We have reported that the linkage evidence for LOAD in a 12q region was significantly increased in autopsy-confirmed families particularly for those showing no linkage to alpha-T catenin gene, a LOAD candidate gene on chromosome 10 [LOD score increased from 0.1 in the autopsy-confirmed subset to 4.19 in the unlinked subset (optimal subset); p<0.0001 for the increase in LOD score], indicating a one-LOD support interval spanning 6 Mb. To further investigate this finding and to identify potential candidate LOAD risk genes for follow-up analysis, we analyzed 99 single nucleotide polymorphisms in this region, for the overall sample, the autopsy-confirmed subset, and the optimal subset, respectively, for comparison. We saw no significant association (p<0.01) in the overall sample. In the autopsy-confirmed subset, the best finding was obtained in the activation transcription factor 7 (ATF7) gene (single-locus association, p=0.002; haplotype association global, p=0.007). In the optimal subset, the best finding was obtained in the hypothetical protein FLJ20436 (FLJ20436) gene (single-locus association, p=0.0026). These results suggest that subset and covariate analyses may be one approach to help identify novel susceptibility genes on chromosome 12q for LOAD.
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
ERIC Educational Resources Information Center
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
Mist Interval and Hormone Concentration Influence Rooting of Florida and Piedmont Azalea
USDA-ARS?s Scientific Manuscript database
Native azalea (Rhododendron spp.) vegetative propagation information is limited. The objective of this experiment is to determine optimal levels of K-IBA and mist intervals for propagation of Florida azalea (Rhododendron austrinum) and Piedmont azalea (Rhododendron canescens). Florida azalea roote...
On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs
NASA Astrophysics Data System (ADS)
Jiang, Minghui
We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^{O(k)}, 2^{O(k log k)}} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems two and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
Adeli, Khosrow; Higgins, Victoria; Seccombe, David; Collier, Christine P; Balion, Cynthia M; Cembrowski, George; Venner, Allison A; Shaw, Julie
2017-11-01
Reference intervals are widely used decision-making tools in laboratory medicine, serving as health-associated standards to interpret laboratory test results. Numerous studies have shown wide variation in reference intervals, even between laboratories using assays from the same manufacturer. Lack of consistency in either sample measurement or reference intervals across laboratories challenges the expectation of standardized patient care regardless of testing location. Here, we present data from a national survey conducted by the Canadian Society of Clinical Chemists (CSCC) Reference Interval Harmonization (hRI) Working Group that examines variation in laboratory reference sample measurements, as well as pediatric and adult reference intervals currently used in clinical practice across Canada. Data on reference intervals currently used by 37 laboratories were collected through a national survey to examine the variation in reference intervals for seven common laboratory tests. Additionally, 40 clinical laboratories participated in a baseline assessment by measuring six analytes in a reference sample. Of the seven analytes examined, alanine aminotransferase (ALT), alkaline phosphatase (ALP), and creatinine reference intervals were most variable. As expected, reference interval variation was more substantial in the pediatric population and varied between laboratories using the same instrumentation. Reference sample results differed between laboratories, particularly for ALT and free thyroxine (FT4). Reference interval variation was greater than test result variation for the majority of analytes. It is evident that there is a critical lack of harmonization in laboratory reference intervals, particularly for the pediatric population. Furthermore, the observed variation in reference intervals across instruments cannot be explained by the bias between the results obtained on instruments by different manufacturers. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was presented in terms of both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
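One of the modeling parameters discussed above, the number of latent variables, is commonly tuned by cross-validation. The sketch below shows that step on synthetic spectra with scikit-learn's PLS implementation; the data and fold count are assumptions, and the paper's variable-selection algorithms (VIP, iPLS, BiPLS, SiPLS) are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 700))            # stand-in NIR spectra: 80 samples x 700 wavelengths
y = X[:, 100:110].mean(axis=1) + rng.normal(0.0, 0.05, 80)   # synthetic reference values

for n_lv in range(1, 11):                 # latent variables as one tunable parameter
    y_cv = np.asarray(
        cross_val_predict(PLSRegression(n_components=n_lv), X, y, cv=10)
    ).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"{n_lv:2d} LV  RMSECV = {rmsecv:.4f}")
```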
NASA Astrophysics Data System (ADS)
Lin, Yu-Fen; Chen, Yong-Song
2017-02-01
When a proton exchange membrane fuel cell (PEMFC) is operated with a dead-ended anode, impurities gradually accumulate within the anode, resulting in a performance drop. An anode purge is thereby ultimately required to remove impurities within the anode. A purge strategy comprises the purge interval (valve closed) and the purge duration (valve open). A short purge interval causes frequent and unnecessary activation of the valve, whereas a long purge interval leads to excessive impurity accumulation. A short purge duration causes an incomplete performance recovery, whereas a long purge duration results in low hydrogen utilization. In this study, a series of experimental trials was conducted to simultaneously measure the hydrogen supply rate and power generation of a PEMFC at a frequency of 50 Hz for various operating current density levels and purge durations. The effect of purge duration on the cell's energy efficiency was subsequently analyzed and discussed. The results showed that the optimal purge duration for the PEMFC was approximately 0.2 s. Based on the results of this study, a methodical process for determining optimal purge durations was proposed for widespread application. Purging approximately one-fourth of the anode gas achieves optimal energy efficiency for a PEMFC with a dead-ended anode.
Hemolymph amino acid analysis of individual Drosophila larvae.
Piyankarage, Sujeewa C; Augustin, Hrvoje; Grosjean, Yael; Featherstone, David E; Shippy, Scott A
2008-02-15
One of the most widely used transgenic animal models in biology is Drosophila melanogaster, the fruit fly. Obtaining chemical information from this exceedingly small organism usually requires studying populations in order to attain sample volumes suitable for standard analysis methods. This paper describes a direct sampling technique capable of obtaining 50-300 nL of hemolymph from individual Drosophila larvae. Hemolymph sampling performed under mineral oil and in air at 30 s intervals up to 120 s after piercing larvae revealed that the effect of evaporation on amino acid concentrations is insignificant when the sample is collected within 60 s. Qualitative and quantitative amino acid analyses of the obtained hemolymph were carried out in two optimized buffer conditions by capillary electrophoresis with laser-induced fluorescence detection after derivatizing with fluorescamine. Thirteen amino acids were identified from individual hemolymph samples of both wild-type (WT) control and genderblind (gb) mutant larvae. The levels of glutamine, glutamate, and taurine in the gb hemolymph were significantly lower, at 35%, 38%, and 57% of WT levels, respectively. The developed technique, which samples only the hemolymph fluid, is efficient and yields accurate organism-level chemical information while minimizing the errors associated with sample contamination, estimation, and evaporation that affect traditional hemolymph-sampling techniques.
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emission costs, and maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes can be connected by several types of arc that differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
NASA Astrophysics Data System (ADS)
Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon
2016-06-01
One of the supply chain risks for hi-tech products stems from rapid technological innovation, which causes a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy needs to consider the impact of such risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one yields a suboptimal solution with a fixed replenishment interval and a simpler computational process; the other yields the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.
NASA Astrophysics Data System (ADS)
Chen, Sile; Wang, Shuai; Wang, Yibo; Guo, Baohong; Li, Guoqiang; Chang, Zhengshi; Zhang, Guan-Jun
2017-08-01
To enhance the surface electric withstanding strength of insulating materials, epoxy resin (EP) samples are treated by an atmospheric pressure plasma jet (APPJ) for different time intervals from 0 to 300 s. Helium (He) and tetrafluoromethane (CF4) mixtures are used as the working gases, with the concentration of CF4 ranging from 0% to 5%; when CF4 is ∼3%, the APPJ exhibits an optimal steady state. The flashover withstanding characteristics of the modified EP in vacuum are greatly improved under appropriate APPJ treatment conditions. The surface properties of the EP samples are evaluated by surface roughness, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS) and water contact angle. It is considered that both physical and chemical effects lead to the enhancement of flashover strength. The physical effect is reflected in the increase of surface roughness, while the chemical effect is reflected in the grafting of fluorine groups.
NASA Astrophysics Data System (ADS)
Grace Pavithra, K.; Senthil Kumar, P.; Carolin Christopher, Femina; Saravanan, A.
2017-11-01
In this research, wastewater samples were collected from a leather tanning industry at different time intervals. Parameters such as pH, electrical conductivity, temperature, turbidity, chromium and chemical oxygen demand (COD) of the samples were analyzed. A three-phase, three-dimensional fluidized-type electrode reactor (FTER) was newly designed for the effective removal of toxic pollutants from wastewater. The influencing parameters were optimized for maximum removal of toxic pollutants from the wastewater. The optimum conditions for the present system were a contact time of 30 min, an applied voltage of 3 V and 15 g of particle electrodes. The particle electrode was characterized using FT-IR analysis. Langmuir-Hinshelwood and pseudo-second-order kinetic models fitted the experimental data well. The results showed that the FTER can be successfully employed for the treatment of industrial wastewater.
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian and Monte Carlo methods, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetric in probability. To apply the confidence interval methodology, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by a voting analysis based on eleven criteria: the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore how well confidence intervals derived from the training and evaluation sets generalise. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A.
Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
Brett, Benjamin L; Smyk, Nathan; Solomon, Gary; Baughman, Brandon C; Schatz, Philip
2016-08-18
The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive testing battery is a widely used tool for the assessment and management of sports-related concussion. Research on the stability of ImPACT in high school athletes at 1- and 2-year intervals has been inconsistent, requiring further investigation. We documented 1-, 2-, and 3-year test-retest reliability of repeated ImPACT baseline assessments in a sample of high school athletes, using multiple statistical methods for examining stability. A total of 1,510 high school athletes completed baseline cognitive testing using the online ImPACT test battery at three time periods of approximately 1- (N = 250), 2- (N = 1146), and 3-year (N = 114) intervals. No participant sustained a concussion between assessments. Intraclass correlation coefficients (ICCs) for composite scores ranged from 0.36 to 0.90 and showed little change as intervals between assessments increased. Reliable change indices and regression-based measures (RBMs) examining test-retest stability demonstrated a lack of significant change in composite scores across the various time intervals, with very few cases (0%-6%) falling outside of 95% confidence intervals. The results suggest ImPACT composite scores remain considerably stable across 1-, 2-, and 3-year test-retest intervals in high school athletes, when considering both ICCs and RBMs. Annually ascertaining baseline scores continues to be optimal for ensuring accurate and individualized management of injury for concussed athletes. For instances in which more recent baselines are not available (1-2 years), clinicians should seek to utilize more conservative range estimates in determining the presence of clinically meaningful change in cognitive performance. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
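One of the stability measures mentioned above, the reliable change index (RCI), can be computed from a test-retest reliability coefficient as in the sketch below. The scores, the ICC of 0.80, and the 1.96 cut-off are illustrative assumptions, not values from the study.

```python
import numpy as np

def reliable_change(x1, x2, icc):
    """Classical RCI: score difference scaled by the standard error of the difference."""
    sd1 = np.std(x1, ddof=1)
    sem = sd1 * np.sqrt(1.0 - icc)               # standard error of measurement
    se_diff = np.sqrt(2.0) * sem
    return (x2 - x1) / se_diff

rng = np.random.default_rng(6)
baseline = rng.normal(85.0, 10.0, 300)           # hypothetical composite scores, time 1
retest = baseline + rng.normal(0.0, 4.0, 300)    # time 2, no true change simulated
rci = reliable_change(baseline, retest, icc=0.80)
outside = np.mean(np.abs(rci) > 1.96)            # proportion outside the 95% band
print(f"{outside:.1%} of athletes flagged as reliably changed")
```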
40 CFR 1065.245 - Sample flow meter for batch sampling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... rates or total flow sampled into a batch sampling system over a test interval. You may use the... rates or total raw exhaust flow over a test interval. (b) Component requirements. We recommend that you... averaging Pitot tube, or a hot-wire anemometer. Note that your overall system for measuring sample flow must...
Jagannathan, S; Chaansha, S; Rajesh, K; Santhiya, T; Charles, C; Venkataramana, K N
2009-09-15
Vero cells are utilized for the production of rabies vaccine. This study deals with optimizing the quantity of media required for rabies vaccine production on a smooth roller surface. Monolayers in the various experimental bottles are infected with the rabies virus (Pasteur vaccine strain) in order to determine the optimal quantity of media for producing the rabies viral harvest during Vero cell-derived rabies vaccine production. The trials range from 200 to 400 mL of media (PTARV-1, PTARV-2, PTARV-3, PTARV-4 and PTARV-5). Samples are taken at appropriate time intervals for In Process Quality Control (IPQC) tests. The collected viral harvests are further processed into rabies vaccine at pilot level and, in addition, scaled up to industrial level. Based on the evaluation, PTARV-2 (250 mL) shows highly encouraging results for Vero cell-derived rabies vaccine production.
Continuous-time adaptive critics.
Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony
2007-05-01
A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step size provides adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply the necessary adjustments of critic parameters induced by actor updates, keeping the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until substantial error builds up in the Bellman optimality or temporal difference equation, at which point a traditional critic training needs to be performed before another interval of concurrent actor-critic training may resume.
De Vries, Wouter R.; Hoogeveen, Adwin R.; Zonderland, Maria L.; Thijssen, Eric J. M.; Schep, Goof
2007-01-01
Oxygen (O2) kinetics reflect the ability to adapt to or recover from exercise that is indicative of daily life. In patients with chronic heart failure (CHF), parameters of O2 kinetics have been shown to be useful for clinical purposes such as grading of functional impairment and assessment of prognosis. This study compared the goodness of fit and reproducibility of previously described methods to assess O2 kinetics in these patients. Nineteen CHF patients, New York Heart Association class II–III, performed two constant-load tests on a cycle ergometer at 50% of the maximum workload. Time constants of O2 onset and recovery kinetics (τ) were calculated by mono-exponential modeling with four different sampling intervals (5 and 10 s, 5 and 8 breaths). The goodness of fit was expressed as the coefficient of determination (R2). Onset kinetics were also evaluated by the mean response time (MRT). Considering O2 onset kinetics, τ showed a significant inverse correlation with peak V̇O2 (R = −0.88, using 10 s sampling intervals). The limits of agreement of both τ and MRT, however, were not clinically acceptable. O2 recovery kinetics yielded better reproducibility and goodness of fit. Using the most optimal sampling interval (5 breaths), a change of at least 13 s in τ is needed to exceed normal test-to-test variations. In conclusion, O2 recovery kinetics are more reproducible for clinical purposes than O2 onset kinetics in moderately impaired patients with CHF. It should be recognized that this observation cannot be assumed to be generalizable to more severely impaired CHF patients. PMID:17277937
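The mono-exponential modeling step described above amounts to a nonlinear least-squares fit of a single time constant. A minimal sketch with synthetic recovery data follows; the parameter values and the 5 s sampling interval are assumptions used only to show the fitting procedure, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, vo2_end, amplitude, tau):
    """Mono-exponential VO2 recovery: end-exercise value decaying toward baseline."""
    return vo2_end - amplitude * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(7)
t = np.arange(0, 300, 5)                         # 5 s sampling interval (one of the four tested)
true = recovery(t, 1.2, 0.7, 55.0)               # hypothetical L/min values
noisy = true + rng.normal(0.0, 0.03, t.size)

popt, pcov = curve_fit(recovery, t, noisy, p0=[1.0, 0.5, 40.0])
r2 = 1 - np.sum((noisy - recovery(t, *popt)) ** 2) / np.sum((noisy - noisy.mean()) ** 2)
print(f"tau = {popt[2]:.1f} s, R^2 = {r2:.3f}")
```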
Predicting long-term survival after coronary artery bypass graft surgery.
Karim, Md N; Reid, Christopher M; Huq, Molla; Brilleman, Samuel L; Cochrane, Andrew; Tran, Lavinia; Billah, Baki
2018-02-01
To develop a model for predicting long-term survival following coronary artery bypass graft surgery. This study included 46 573 patients from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) registry, who underwent isolated coronary artery bypass graft surgery between 2001 and 2014. Data were randomly split into development (23 282) and validation (23 291) samples. Cox regression models were fitted separately, using the important preoperative variables, for 4 'time intervals' (31-90 days, 91-365 days, 1-3 years and >3 years), with optimal predictors selected using the bootstrap bagging technique. Model performance was assessed both in the validation data and in the combined data (development and validation samples). Coefficients of all 4 final models were estimated on the combined data adjusting for hospital-level clustering. The Kaplan-Meier mortality rates estimated in the sample were 1.7% at 90 days, 2.8% at 1 year, 4.4% at 2 years and 6.1% at 3 years. Age, peripheral vascular disease, respiratory disease, reduced ejection fraction, renal dysfunction, arrhythmia, diabetes, hypercholesterolaemia, cerebrovascular disease, hypertension, congestive heart failure, steroid use and smoking were included in all 4 models. However, their magnitude of effect varied across the time intervals. Harrell's C-statistic was 0.83, 0.78, 0.75 and 0.74 for the 31-90-day, 91-365-day, 1-3-year and >3-year models, respectively. The models showed excellent discrimination and calibration in the validation data. Models were developed for predicting long-term survival at 4 time intervals after isolated coronary artery bypass graft surgery. These models can be used in conjunction with the existing 30-day mortality prediction model. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Modeling uncertainty in producing natural gas from tight sands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chermak, J.M.; Dahl, C.A.; Patrick, R.H
1995-12-31
Since accurate geologic, petroleum engineering, and economic information are essential ingredients in making profitable production decisions for natural gas, we combine these ingredients in a dynamic framework to model natural gas reservoir production decisions. We begin with the certainty case before proceeding to consider how uncertainty might be incorporated in the decision process. Our production model uses dynamic optimal control to combine economic information with geological constraints to develop optimal production decisions. To incorporate uncertainty into the model, we develop probability distributions on geologic properties for the population of tight gas sand wells and perform a Monte Carlo study to select a sample of wells. Geological production factors, completion factors, and financial information are combined into the hybrid economic-petroleum reservoir engineering model to determine the optimal production profile, initial gas stock, and net present value (NPV) for an individual well. To model the probability of the production abandonment decision, the NPV data is converted to a binary dependent variable. A logit model is used to model this decision as a function of the above geological and economic data to give probability relationships. Additional ways to incorporate uncertainty into the decision process include confidence intervals and utility theory.
Kamiura, Moto; Sano, Kohei
2017-10-01
The principle of optimism in the face of uncertainty is known as a heuristic in sequential decision-making problems. The Overtaking method, based on this principle, is an effective algorithm for solving multi-armed bandit problems. In the previous study it was defined by a set of heuristic patterns of formulation. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method is associated with upper bounds of confidence intervals of expected rewards on statistics. The unification of the formulation enhances the universality of the Overtaking method. Consequently, we obtain a new Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that the principle of optimism in the face of uncertainty should be regarded as the statistics-based consequence of the law of large numbers for the sample mean of rewards and estimation of upper bounds of expected rewards, rather than as a heuristic, in the context of multi-armed bandit problems. Copyright © 2017 Elsevier B.V. All rights reserved.
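The abstract's Overtaking value functions are not reproduced here, but the UCB baseline it compares against is easy to sketch. Below is a standard UCB1 loop run on arms with exponentially distributed rewards; the arm means, horizon, and exploration constant are assumptions chosen only for illustration.

```python
import numpy as np

def ucb1(means, horizon=20_000, rng=np.random.default_rng(8)):
    """Standard UCB1 on exponential-reward arms; returns expected cumulative regret."""
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                           # play each arm once first
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))             # optimism: pick the highest upper bound
        reward = rng.exponential(means[arm])
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]
    return regret

print("cumulative regret:", round(ucb1([1.0, 0.8, 0.5]), 1))
```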
NASA Astrophysics Data System (ADS)
Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter
2018-05-01
A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. These approaches try to minimize the machining time by reducing the total tool path length while respecting the criterion of maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is largest. The free-form surface is often considered as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories with optimal feed directions that generate the largest machined widths. Another limitation of previous approaches to effectively reducing machining time is an inadequate choice of tool. Researchers generally use a spherical tool over the entire surface; however, the gains proposed by the different methods developed with these tools lead to relatively small time savings. This study therefore proposes a new method, using toroidal milling tools, for generating tool paths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows the entire surface to be milled with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find the optimal regions and feed directions in each region. The new method is tested on a free-form surface, and a comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also performed. The results show the efficiency of the method.
Ehrenfest model with large jumps in finance
NASA Astrophysics Data System (ADS)
Takahashi, Hisanao
2004-02-01
Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. In order to explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced to explain the empirical density function of price changes for both short and long sampling intervals.
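The aggregation effect described above (heavy tails at short sampling intervals, Gaussian behaviour at long ones) can be illustrated numerically. The sketch below uses a heavy-tailed but finite-variance Student-t stand-in rather than a true stable law (a strictly stable distribution with α < 2 would not aggregate to a Gaussian in the iid case), so it only mimics the empirical pattern the abstract refers to; the degrees of freedom and sample size are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Heavy-tailed "short interval" returns (finite variance, excess kurtosis ~ 6 for df=5).
tick_returns = stats.t.rvs(df=5, size=2_000_000, random_state=rng)

for agg in (1, 10, 100, 1000):                   # longer sampling interval = sum of more ticks
    r = tick_returns[: (len(tick_returns) // agg) * agg].reshape(-1, agg).sum(axis=1)
    print(f"interval x{agg:>5}: excess kurtosis = {stats.kurtosis(r):6.2f}")
```

The excess kurtosis shrinks roughly in proportion to the aggregation factor, which is the aggregational-Gaussianity pattern the Ehrenfest model with large jumps is meant to reproduce.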
McEwan, T.E.
1995-11-28
A high speed sampler comprises a meandered sample transmission line for transmitting an input signal, a straight strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates along the transmission lines. The sampling gates comprise a four terminal diode bridge having a first strobe resistor connected from a first terminal of the bridge to the positive strobe line, a second strobe resistor coupled from the third terminal of the bridge to the negative strobe line, a tap connected to the second terminal of the bridge and to the sample transmission line, and a sample holding capacitor connected to the fourth terminal of the bridge. The resistance of the first and second strobe resistors is much higher than the signal transmission line impedance in the preferred system. This results in a sampling gate which applies a very small load on the sample transmission line and on the strobe generator. The sample holding capacitor is implemented using a smaller capacitor and a larger capacitor isolated from the smaller capacitor by resistance. The high speed sampler of the present invention is also characterized by other optimizations, including transmission line tap compensation, stepped impedance strobe line, a multi-layer physical layout, and unique strobe generator design. A plurality of banks of such samplers are controlled for concatenated or interleaved sample intervals to achieve long sample lengths or short sample spacing. 17 figs.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, in which rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate dependent, with a shorter memory lag than the QT interval.
First, Matthew R; Robbins-Wamsley, Stephanie H; Riley, Scott C; Moser, Cameron S; Smith, George E; Tamburri, Mario N; Drake, Lisa A
2013-05-07
Vertical migrations of living organisms and settling of particle-attached organisms lead to uneven distributions of biota at different depths in the water column. In ballast tanks, heterogeneity could lead to different population estimates depending on the portion of the discharge sampled. For example, concentrations of organisms exceeding a discharge standard may not be detected if sampling occurs during periods of the discharge when concentrations are low. To determine the degree of stratification, water from ballast tanks was sampled at two experimental facilities as the tanks were drained after water was held for 1 or 5 days. Living organisms ≥50 μm were counted in discrete segments of the drain (e.g., the first 20 min of the drain operation, the second 20 min interval, etc.), thus representing different strata in the tank. In 1 and 5 day trials at both facilities, concentrations of organisms varied among drain segments, and the patterns of stratification varied among replicate trials. From numerical simulations, the optimal sampling strategy for stratified tanks is to collect multiple time-integrated samples spaced relatively evenly throughout the discharge event.
Prinos, Scott T.; Valderrama, Robert
2015-01-01
At five of the monitoring-well cluster locations, a long-screened well was also installed for monitoring and comparison purposes. These long-screened wells are 160 to 200 ft deep, and have open intervals ranging from 145 to 185 ft in length. Water samples were collected at depth intervals of about 5 to 10 ft, using 3-ft-long straddle packers to isolate each sampling interval. The results of monitoring conducted using these long-screened interval wells were generally too variable to identify any changes that might be associated with the seepage barrier. Samples from one of these long-screened interval wells failed to detect the saltwater interface evident in samples and TSEMIL datasets from a collocated well cluster. This failure may have been caused by downward flow of freshwater from above the saltwater interface in the well bore.
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
NASA Astrophysics Data System (ADS)
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. The analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating the average volume per log. [Figure caption: Comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n. Each trial was sampled with n = 100, 10,000 times, and averaged. All trials were then averaged to obtain an estimate for each sample interval. Dashed lines represent values from the one minute dataset.]
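The bootstrap comparison described in the figure caption (resampling each interval's data to a common n and averaging) can be sketched as follows; the counts and the Poisson stand-in for wood observations are assumptions, not the Slave River data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-interval wood counts (pieces observed per sampled minute).
datasets = {1: rng.poisson(4.0, 600), 5: rng.poisson(4.0, 120), 15: rng.poisson(4.0, 40)}

def bootstrap_mean(x, n=100, trials=10_000, rng=rng):
    """Average of `trials` bootstrap means, each drawn with the same sample size n."""
    idx = rng.integers(0, len(x), size=(trials, n))
    return x[idx].mean(axis=1).mean()

for interval, x in datasets.items():
    print(f"{interval:>2}-min interval: bootstrap estimate {bootstrap_mean(x):.2f}")
```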
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, it could lead to a 95% confidence interval whose actual coverage was as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
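Two of the robust alternatives mentioned above, the rank-based inverse normal (RIN) transformation and a percentile bootstrap, are easy to sketch. The article supplies R code; the Python below is an independent illustration on synthetic skewed data, not the authors' implementation, and the Blom-type offset and bootstrap settings are assumptions.

```python
import numpy as np
from scipy import stats

def rin(x):
    """Rank-based inverse normal transformation."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

def percentile_bootstrap_ci(x, y, level=0.95, n_boot=10_000, rng=np.random.default_rng(10)):
    """Percentile bootstrap CI for the Pearson correlation."""
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        rs.append(stats.pearsonr(x[idx], y[idx])[0])
    return np.quantile(rs, [(1 - level) / 2, 1 - (1 - level) / 2])

rng = np.random.default_rng(11)
x = rng.lognormal(size=200)                      # skewed, nonnormal data
y = x + rng.lognormal(size=200)

print("Pearson r:", round(stats.pearsonr(x, y)[0], 3))
print("RIN-transformed r:", round(stats.pearsonr(rin(x), rin(y))[0], 3))
print("bootstrap 95% CI:", np.round(percentile_bootstrap_ci(x, y), 3))
```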
Juang, Chia-Feng; Hsu, Chia-Hung
2009-12-01
This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q -values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies.
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether a newly developed diagnostic method is better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test or an imperfect standard test, the differences between estimated sensitivities/specificities are calculated from sample data. However, to generalize these values to the population, they should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for the difference between two dependent sensitivity/specificity values. Materials and Methods. Confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As the clinical application, data from the diagnostic study by Dickel et al. (2010) are used as an example. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are presented in a table. Conclusion. When choosing among confidence interval methods, researchers should consider whether the comparison involves a single proportion or a difference between dependent binary proportions, the correlation between the two dependent proportions, and the sample sizes.
Comparison of measurement methods for benzene and toluene
NASA Astrophysics Data System (ADS)
Wideqvist, U.; Vesely, V.; Johansson, C.; Potter, A.; Brorström-Lundén, E.; Sjöberg, K.; Jonsson, T.
Diffusive sampling and active (pumped) sampling (tubes filled with Tenax TA or Carbopack B) were compared with an automatic BTX instrument (Chrompack, GC/FID) for measurements of benzene and toluene. The measurements were made during differing pollution levels and different weather conditions at a roof-top site and in a densely trafficked street canyon in Stockholm, Sweden. The BTX instrument was used as the reference method for comparison with the other methods. Considering all data, the Perkin-Elmer diffusive samplers, containing Tenax TA and assuming a constant uptake rate of 0.406 cm3 min-1, showed about 30% higher benzene values compared to the BTX instrument. This discrepancy may be explained by a dose-dependent uptake rate with higher uptake rates at lower dose as suggested by laboratory experiments presented in the literature. After correction by applying the relationship between uptake rate and dose as suggested by Roche et al. (Atmos. Environ. 33 (1999) 1905), the two methods agreed almost perfectly. For toluene there was much better agreement between the two methods. No sign of a dose-dependent uptake could be seen. The mean concentrations and 95% confidence intervals of all toluene measurements (67 values) were (10.80±1.6) μg m-3 for diffusive sampling and (11.3±1.6) μg m-3 for the BTX instrument, respectively. The overall ratio between the concentrations obtained using diffusive sampling and the BTX instrument was 0.91±0.07 (95% confidence interval). Tenax TA was found to be equal to Carbopack B for measuring benzene and toluene in this concentration range, although it has been proposed not to be optimal for benzene. There was also good agreement between the active samplers and the BTX instrument.
Future VIIRS enhancements for the integrated polar-orbiting environmental satellite system
NASA Astrophysics Data System (ADS)
Puschell, Jeffery J.; Silny, John; Cook, Lacy; Kim, Eugene
2010-08-01
The Visible/Infrared Imager Radiometer Suite (VIIRS) is the next-generation imaging spectroradiometer for the future operational polar-orbiting environmental satellite system. A successful Flight Unit 1 has been delivered and integrated onto the NPP spacecraft. The flexible VIIRS architecture can be adapted and enhanced to respond to a wide range of requirements and to incorporate new technology as it becomes available. This paper reports on recent design studies to evaluate building a MW-VLWIR dispersive hyperspectral module with active cooling into the existing VIIRS architecture. Performance of a two-grating VIIRS hyperspectral module was studied across a broad trade space defined primarily by spatial sampling, spectral range, spectral sampling interval, along-track field of view and integration time. The hyperspectral module studied here provides contiguous coverage across 3.9 - 15.5 μm with a spectral sampling interval of 10 nm or better, thereby extending VIIRS spectral range to the shortwave side of the 15.5 μm CO2 band and encompassing the 6.7 μm H2O band. Spatial sampling occurs at VIIRS I-band (~0.4 km at nadir) spatial resolution with aggregation to M-band (~0.8 km) and larger pixel sizes to improve sensitivity. Radiometric sensitivity (NEdT) at a spatial resolution of ~4 km is ~0.1 K or better for a 250 K scene across a wavelength range of 4.5 μm to 15.5 μm. The large number of high spectral and spatial resolution FOVs in this instrument improves chances for retrievals of information on the physical state and composition of the atmosphere all the way to the surface in cloudy regions relative to current systems. Spectral aggregation of spatial resolution measurements to MODIS and VIIRS multispectral bands would continue legacy measurements with better sensitivity in nearly all bands. Additional work is needed to optimize spatial sampling, spectral range and spectral sampling approaches for the hyperspectral module and to further refine this powerful imager concept.
Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo
2012-03-01
Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol (PEG) dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22; 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors to determine satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Fixed-interval matching-to-sample: intermatching time and intermatching error runs
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
Optimization of the Reconstruction Interval in Neurovascular 4D-CTA Imaging
Hoogenboom, T.C.H.; van Beurden, R.M.J.; van Teylingen, B.; Schenk, B.; Willems, P.W.A.
2012-01-01
Time-resolved whole brain CT angiography (4D-CTA) is a novel imaging technology providing information regarding blood flow. One of the factors that influence the diagnostic value of this examination is the temporal resolution, which is affected by the gantry rotation speed during acquisition and the reconstruction interval during post-processing. Post-processing determines the time spacing between two reconstructed volumes and, unlike rotation speed, does not affect radiation burden. The data sets of six patients who underwent a cranial 4D-CTA were used for this study. Raw data was acquired using a 320-slice scanner with a rotation speed of 2 Hz. The arterial to venous passage of an intravenous contrast bolus was captured during a 15 s continuous scan. The raw data was reconstructed using four different reconstruction intervals: 0.2, 0.3, 0.5 and 1.0 s. The results were rated by two observers using a standardized score sheet. The appearance of each lesion was rated correctly in all readings. Scoring for quality of temporal resolution revealed a stepwise improvement from the 1.0 s interval to the 0.3 s interval, while no discernable improvement was noted between the 0.3 s and 0.2 s interval. An increase in temporal resolution may improve the diagnostic quality of cranial 4D-CTA. Using a rotation speed of 0.5 s, the optimal reconstruction interval appears to be 0.3 s, beyond which changes can no longer be discerned. PMID:23217631
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and is successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence rates when the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
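A generic Monte Carlo sketch of this sampling effect is given below, assuming dates drawn uniformly at random within the interval and the duration estimated from the span between the oldest and youngest dates. It is only an illustration; the exact percentages depend on how the interval estimate is defined, so it need not reproduce the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_recovered_fraction(n, trials=100_000):
    """Average fraction of a unit-length interval spanned by the extremes of n uniform dates."""
    dates = rng.random((trials, n))
    return float((dates.max(axis=1) - dates.min(axis=1)).mean())

for n in (4, 5, 10, 20):
    f = mean_recovered_fraction(n)
    print(f"n = {n:2d}: sample span covers ~{100 * f:.0f}% of the true interval "
          f"(correction factor ~{1 / f:.2f})")
```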
A Bioimpedance Analysis Platform for Amputee Residual Limb Assessment.
Sanders, Joan E; Moehring, Mark A; Rothlisberger, Travis M; Phillips, Reid H; Hartley, Tyler; Dietrich, Colin R; Redd, Christian B; Gardner, David W; Cagle, John C
2016-08-01
The objective of this research was to develop a bioimpedance platform for monitoring fluid volume in residual limbs of people with trans-tibial limb loss using prostheses. A customized multifrequency current stimulus profile was sent to thin flat electrodes positioned on the thigh and distal residual limb. The applied current signal and sensed voltage signals from four pairs of electrodes located on the anterior and posterior surfaces were demodulated into resistive and reactive components. An established electrical model (Cole) and segmental limb geometry model were used to convert results to extracellular and intracellular fluid volumes. Bench tests and testing on amputee participants were conducted to optimize the stimulus profile and electrode design and layout. The proximal current injection electrode needed to be at least 25 cm from the proximal voltage sensing electrode. A thin layer of hydrogel needed to be present during testing to ensure good electrical coupling. Using a burst duration of 2.0 ms, intermission interval of 100 μs, and sampling delay of 10 μs at each of 24 frequencies except 5 kHz, which required a 200-μs sampling delay, the system achieved a sampling rate of 19.7 Hz. The designed bioimpedance platform allowed system settings and electrode layouts and positions to be optimized for amputee limb fluid volume measurement. The system will be useful toward identifying and ranking prosthetic design features and participant characteristics that impact residual limb fluid volume.
Hatami, Mehdi; Farhadi, Khalil; Abdollahpour, Assem
2011-11-01
A simple, rapid, and efficient method, dispersive liquid-liquid microextraction (DLLME) coupled with high-performance liquid chromatography-fluorescence detection, has been developed for the determination of guaifenesin (GUA) enantiomers in human urine samples after an oral dose administration of its syrup formulation. Urine samples were collected during the time intervals 0-2, 2-4, and 4-6 h, and the concentrations and ratio of the two enantiomers were determined. The ratio of R-(-) to S-(+) enantiomer concentrations in urine showed an increase with time, with R/S ratios of 0.66 at 2 h and 2.23 at 6 h. For the microextraction process, a mixture of extraction solvent (dichloromethane, 100 μL) and dispersive solvent (THF, 1 mL) was rapidly injected into a 5.0 mL diluted urine sample for the formation of a cloudy solution and extraction of the enantiomers into the fine droplets of CH2Cl2. After optimization of the HPLC enantioselective conditions, some important parameters, such as the kind and volume of extraction and dispersive solvents, extraction time, temperature, pH, and salt effect, were optimized for the dispersive liquid-liquid microextraction process. Under the optimum extraction conditions, the method yields a linear calibration curve in the concentration range from 10 to 2000 ng/mL for the target analytes. The LOD was 3.00 ng/mL for both enantiomers. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bias Assessment of General Chemistry Analytes using Commutable Samples.
Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter
2014-11-01
Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt to determine not only country-specific reference intervals but also to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP evaluator software program, which is based on the CLSI/IFCC C28-A guideline, and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined. Gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
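The 95% central range mentioned above can be computed nonparametrically in a few lines. The sketch below is a simplified illustration with hypothetical donor values; it is not a substitute for the EP Evaluator workflow, which adds outlier handling, gender partitioning and other checks.

```python
import numpy as np

def reference_interval(values, central=0.95):
    """Nonparametric reference interval: the central 95% range of the reference sample."""
    values = np.asarray(values, dtype=float)
    lower = np.percentile(values, 100 * (1 - central) / 2)   # 2.5th percentile
    upper = np.percentile(values, 100 * (1 + central) / 2)   # 97.5th percentile
    return lower, upper

# Hypothetical analyte results from healthy blood donors (arbitrary units)
rng = np.random.default_rng(7)
donors = rng.normal(loc=140.0, scale=3.0, size=400)
print(reference_interval(donors))   # roughly (134, 146) for this simulated sample
```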
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for viscoelastic material parameter identification of brain tissue is presented based on the interval analysis method. First, intervals are used to quantify the friction coefficient in the boundary condition. The inverse problem of material parameter identification under an uncertain friction coefficient is then transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be easily acquired with no need for a variety of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng
2017-01-01
Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, some scenarios are generated to simulate possible situations by dividing the uncertainty domain into different intervals; the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under different scenarios. To enhance optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining the diversity distribution of the Pareto front. In consideration of the large number of generated scenarios, a reduction mechanism is carried out to decrease the number of scenarios using covariance relationships, which can greatly decrease the computational complexity. Moreover, a constraint-handling technique is also utilized to deal with the system load balance while considering transmission losses among thermal units and wind farms, so that all the constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservatism of interval optimization, which can provide a more valuable optimal scheme for real-world applications. PMID:28961262
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
Optimism and Cause-Specific Mortality: A Prospective Cohort Study
Kim, Eric S.; Hagan, Kaitlin A.; Grodstein, Francine; DeMeo, Dawn L.; De Vivo, Immaculata; Kubzansky, Laura D.
2017-01-01
Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses’ Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazard models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. PMID:27927621
Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.
Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar
2014-01-01
We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. Interval Newton's method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeros of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-01
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in comparison with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, it aims to predict the overall sensory scores assigned by the trained sensory panel. A back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model was achieved with Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost was an appropriate method for the prediction of sensory quality in Chinese rice wine.
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimation of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sample size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
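To make the role of sequencing depth concrete, the sketch below (a hedged illustration, not the study's pipeline) rarefies two hypothetical OTU count tables to several depths and reports a resampling interval for their Bray-Curtis dissimilarity; shallower depths give wider intervals because rare taxa drop in and out of the subsamples.

```python
import numpy as np

rng = np.random.default_rng(3)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two count vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

def rarefy(counts, depth):
    """Randomly subsample reads to a fixed sequencing depth (without replacement)."""
    reads = np.repeat(np.arange(counts.size), counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

# Hypothetical OTU tables for two communities, 5000 reads each over 200 taxa
comm1 = rng.multinomial(5000, rng.dirichlet(np.full(200, 0.3)))
comm2 = rng.multinomial(5000, rng.dirichlet(np.full(200, 0.3)))

for depth in (500, 2000, 4000):
    reps = [bray_curtis(rarefy(comm1, depth), rarefy(comm2, depth)) for _ in range(200)]
    lo, hi = np.percentile(reps, [2.5, 97.5])
    print(f"depth {depth:>4}: dissimilarity ~ {np.mean(reps):.3f}, "
          f"95% resampling interval ({lo:.3f}, {hi:.3f})")
```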
Optimization of antitumor treatment conditions for transcutaneous CO2 application: An in vivo study.
Ueha, Takeshi; Kawamoto, Teruya; Onishi, Yasuo; Harada, Risa; Minoda, Masaya; Toda, Mitsunori; Hara, Hitomi; Fukase, Naomasa; Kurosaka, Masahiro; Kuroda, Ryosuke; Akisue, Toshihiro; Sakai, Yoshitada
2017-06-01
Carbon dioxide (CO2) therapy can be applied to treat a variety of disorders. We previously found that transcutaneous application of CO2 with a hydrogel decreased the tumor volume of several types of tumors and induced apoptosis via the mitochondrial pathway. However, only one condition of treatment intensity has been tested. For widespread application in clinical antitumor therapy, the conditions must be optimized. In the present study, we investigated the relationship between the duration, frequency, and treatment interval of transcutaneous CO2 application and antitumor effects in murine xenograft models. Murine xenograft models of three types of human tumors (breast cancer, osteosarcoma, and malignant fibrous histiocytoma/undifferentiated pleomorphic sarcoma) were used to assess the antitumor effects of transcutaneous CO2 application of varying durations, frequencies, and treatment intervals. In all human tumor xenografts, apoptosis was significantly induced by CO2 treatment for ≥10 min, and a significant decrease in tumor volume was observed with CO2 treatments of >5 min. The effect on tumor volume was not dependent on the frequency of CO2 application, i.e., twice or five times per week. However, treatment using 3- and 4-day intervals was more effective at decreasing tumor volume than treatment using 2- and 5-day intervals. The optimal conditions of transcutaneous CO2 application to obtain the best antitumor effect in various tumors were as follows: greater than 10 min per application, twice per week, with 3- and 4-day intervals, and application to the site of the tumor. The results suggest that this novel transcutaneous CO2 application might be useful to treat primary tumors, while mitigating some side effects, and therefore could be safe for clinical trials.
Confidence intervals from single observations in forest research
Harry T. Valentine; George M. Furnival; Timothy G. Gregoire
1991-01-01
A procedure for constructing confidence intervals and testing hypotheses from a single trial or observation is reviewed. The procedure requires a prior, fixed estimate or guess of the outcome of an experiment or sampling. Two examples of applications are described: a confidence interval is constructed for the expected outcome of a systematic sampling of a forested tract...
Multiport well design for sampling of ground water at closely spaced vertical intervals
Delin, G.N.; Landon, M.K.
1996-01-01
Detailed vertical sampling is useful in aquifers where vertical mixing is limited and steep vertical gradients in chemical concentrations are expected. Samples can be collected at closely spaced vertical intervals from nested wells with short screened intervals. However, this approach may not be appropriate in all situations. An easy-to-construct and easy-to-install multiport sampling well to collect ground-water samples from closely spaced vertical intervals was developed and tested. The multiport sampling well was designed to sample ground water from surficial sand-and-gravel aquifers. The device consists of multiple stainless-steel tubes within a polyvinyl chloride (PVC) protective casing. The tubes protrude through the wall of the PVC casing at the desired sampling depths. A peristaltic pump is used to collect ground-water samples from the sampling ports. The difference in hydraulic head between any two sampling ports can be measured with a vacuum pump and a modified manometer. The usefulness and versatility of this multiport well design were demonstrated at an agricultural research site near Princeton, Minnesota, where sampling ports were installed to a maximum depth of about 12 m below land surface. Tracer experiments were conducted using potassium bromide to document the degree to which short-circuiting occurred between sampling ports. Samples were successfully collected for analysis of major cations and anions, nutrients, selected herbicides, isotopes, dissolved gases, and chlorofluorocarbon concentrations.
What Is the Shape of Developmental Change?
Adolph, Karen E.; Robinson, Scott R.; Young, Jesse W.; Gill-Alvarez, Felix
2009-01-01
Developmental trajectories provide the empirical foundation for theories about change processes during development. However, the ability to distinguish among alternative trajectories depends on how frequently observations are sampled. This study used real behavioral data, with real patterns of variability, to examine the effects of sampling at different intervals on characterization of the underlying trajectory. Data were derived from a set of 32 infant motor skills indexed daily during the first 18 months. Larger sampling intervals (2-31 days) were simulated by systematically removing observations from the daily data and interpolating over the gaps. Infrequent sampling caused decreasing sensitivity to fluctuations in the daily data: Variable trajectories erroneously appeared as step-functions and estimates of onset ages were increasingly off target. Sensitivity to variation decreased as an inverse power function of sampling interval, resulting in severe degradation of the trajectory with intervals longer than 7 days. These findings suggest that sampling rates typically used by developmental researchers may be inadequate to accurately depict patterns of variability and the shape of developmental change. Inadequate sampling regimes therefore may seriously compromise theories of development. PMID:18729590
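The effect of sampling interval on an observed trajectory can be mimicked with a small simulation. Below is a hedged sketch using hypothetical data (not the infant dataset): a variable daily skill record is thinned to weekly, biweekly, and monthly observations with linear interpolation over the gaps, and the apparent number of onset/offset transitions is counted; coarser sampling makes the trajectory look like a smooth step function.

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(180)
# Hypothetical daily record: 1 = skill expressed that day; probability rises with age
p = 1.0 / (1.0 + np.exp(-(days - 90) / 10.0))
daily = rng.binomial(1, p)

def thinned_trajectory(record, interval_days):
    """Keep every `interval_days`-th observation and linearly interpolate over the gaps."""
    kept = np.arange(0, record.size, interval_days)
    return np.interp(np.arange(record.size), kept, record[kept])

for step in (1, 7, 14, 31):
    traj = thinned_trajectory(daily, step)
    transitions = int(np.abs(np.diff((traj > 0.5).astype(int))).sum())
    print(f"sampled every {step:>2} days: {transitions} apparent onset/offset transitions")
```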
We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...
Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.
2003-01-01
Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
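The convolution-and-resampling step described above can be sketched as follows; this is a simplified, hypothetical illustration rather than the Tetracorder analysis. A synthetic narrow absorption feature is convolved to a Gaussian band pass, resampled at a coarser interval, and noise is added, showing how the apparent band depth degrades as sampling and band pass broaden.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1 nm resolution reflectance spectrum with one narrow absorption feature
wl = np.arange(2000.0, 2500.0, 1.0)                      # wavelength, nm
spectrum = 1.0 - 0.3 * np.exp(-0.5 * ((wl - 2200.0) / 8.0) ** 2)

def convolve_and_resample(wl, spec, sampling_nm, fwhm_nm):
    """Convolve to a Gaussian band pass (given FWHM) and resample at the given interval."""
    sigma = fwhm_nm / 2.3548
    centers = np.arange(wl[0], wl[-1], sampling_nm)
    weights = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / sigma) ** 2)
    return centers, (weights * spec).sum(axis=1) / weights.sum(axis=1)

for sampling, fwhm in [(2, 4), (10, 20), (20, 40)]:
    centers, coarse = convolve_and_resample(wl, spectrum, sampling, fwhm)
    noisy = coarse + rng.normal(0.0, 1.0 / 200.0, size=coarse.size)   # signal-to-noise ~ 200
    print(f"sampling {sampling:>2} nm, band pass {fwhm:>2} nm FWHM: "
          f"apparent band depth ~ {1.0 - noisy.min():.2f}")
```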
NASA Astrophysics Data System (ADS)
Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang
2018-03-01
As a powerful process analytical technology (PAT) tool, near infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process to guarantee the quality of products. Five lab-scale batches were employed to construct quantitative models to determine five chemical ingredients and a physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS) were performed for optimizing the models. The results indicated that NIR spectroscopy was an effective tool for successfully monitoring the concentration process of Shenzhiling oral liquid.
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal according to the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
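The sample-size point can be illustrated with a simple hypergeometric simulation. This is a hedged sketch only: it treats gene copies as exchangeable and ignores genotype structure, unlike the exact method of the paper, but it shows why sample sizes above roughly 30 individuals are needed to pin an allele frequency down to within 0.05 with high probability.

```python
import numpy as np

rng = np.random.default_rng(11)

def prob_within(pop_size, allele_freq, sample_size, tol=0.05, trials=100_000):
    """P(|sample allele frequency - population frequency| <= tol) when 2*sample_size gene
    copies are drawn without replacement from a diploid population of pop_size individuals."""
    total_copies = 2 * pop_size
    allele_copies = int(round(allele_freq * total_copies))
    draws = rng.hypergeometric(allele_copies, total_copies - allele_copies,
                               2 * sample_size, size=trials)
    freqs = draws / (2 * sample_size)
    return float(np.mean(np.abs(freqs - allele_freq) <= tol))

for n in (10, 20, 30, 50):
    print(f"n = {n:>2} individuals: P(within 0.05 of true frequency) ~ "
          f"{prob_within(500, 0.3, n):.2f}")
```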
Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke
2017-06-01
It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Prenatal Antecedents of Newborn Neurological Maturation
DiPietro, Janet A.; Kivlighan, Katie T.; Costigan, Kathleen A.; Rubin, Suzanne E.; Shiffler, Dorothy E.; Henderson, Janice L.; Pillion, Joseph P.
2009-01-01
Fetal neurobehavioral development was modeled longitudinally using data collected at weekly intervals from 24- to -38 weeks gestation in a sample of 112 healthy pregnancies. Predictive associations between 3 measures of fetal neurobehavioral functioning and their developmental trajectories to neurological maturation in the 1st weeks after birth were examined. Prenatal measures included fetal heart rate variability, fetal movement, and coupling between fetal motor activity and heart rate patterning; neonatal outcomes include a standard neurologic examination (n = 97) and brainstem auditory evoked potential (BAEP; n = 47). Optimality in newborn motor activity and reflexes was predicted by fetal motor activity; fetal heart rate variability and somatic-cardiac coupling predicted BAEP parameters. Maternal pregnancy-specific psychological stress was associated with accelerated neurologic maturation. PMID:20331657
Xu, Daolin; Lu, Fangfang
2006-12-01
We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines the implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidates of functional basis and determines the optimal model through orthogonal characteristics on data. The technique with the Adams integration algorithm makes the reconstruction available to data sampled with large time intervals. Numerical experiment on Lorenz and Rossler systems shows that the proposed strategy is effective in global vector field reconstruction from noisy time series.
NASA Astrophysics Data System (ADS)
Meng, Su; Chen, Jie; Sun, Jian
2017-10-01
This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Optimal number of stimulation contacts for coordinated reset neuromodulation
Lysyansky, Borys; Popovych, Oleksandr V.; Tass, Peter A.
2013-01-01
In this computational study we investigate coordinated reset (CR) neuromodulation designed for an effective control of synchronization by multi-site stimulation of neuronal target populations. This method was suggested to effectively counteract pathological neuronal synchrony characteristic for several neurological disorders. We study how many stimulation sites are required for optimal CR-induced desynchronization. We found that a moderate increase of the number of stimulation sites may significantly prolong the post-stimulation desynchronized transient after the stimulation is completely switched off. This can, in turn, reduce the amount of the administered stimulation current for the intermittent ON–OFF CR stimulation protocol, where time intervals with stimulation ON are recurrently followed by time intervals with stimulation OFF. In addition, we found that the optimal number of stimulation sites essentially depends on how strongly the administered current decays within the neuronal tissue with increasing distance from the stimulation site. In particular, for a broad spatial stimulation profile, i.e., for a weak spatial decay rate of the stimulation current, CR stimulation can optimally be delivered via a small number of stimulation sites. Our findings may contribute to an optimization of therapeutic applications of CR neuromodulation. PMID:23885239
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is of interest in its own right. Our main result is a central limit theorem which enables the construction of confidence intervals on the mean rewards both under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, making it possible to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
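A small simulation sketch of the phenomenon discussed above, assuming a simple two-stage design that stops at the interim look when the interim mean exceeds a boundary; the effect size, boundary, and stage sizes are illustrative. Marginally over stopping decisions the sample average is close to the true mean, while averages computed conditionally on the realized sample size appear biased.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.3, 1.0                     # true effect and noise (illustrative)
n1, n2, boundary = 50, 50, 0.4
reps = 20000

means, sizes = [], []
for _ in range(reps):
    stage1 = rng.normal(mu, sigma, n1)
    if stage1.mean() > boundary:         # stop early at the interim analysis
        data = stage1
    else:                                # continue to the maximum sample size
        data = np.concatenate([stage1, rng.normal(mu, sigma, n2)])
    means.append(data.mean())
    sizes.append(len(data))

means, sizes = np.array(means), np.array(sizes)
print("marginal mean of sample average:", means.mean().round(3))            # close to mu
print("conditional on stopping early:  ", means[sizes == n1].mean().round(3))
print("conditional on continuing:      ", means[sizes == n1 + n2].mean().round(3))
```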
Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
The effect of sampling rate on observed statistics in a correlated random walk
Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.
2013-01-01
Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
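A minimal simulation sketch of the sampling-interval effect described above, assuming a fixed-speed correlated random walk with Gaussian turning angles and instantaneous reorientation; the speed, turning spread, and sampling intervals are illustrative choices, not the paper's parameterization.

```python
import numpy as np

def crw(n_steps, speed=1.0, turn_sd=0.3, dt=1.0, seed=0):
    """Simulate a fixed-speed correlated random walk; returns positions at every time step."""
    rng = np.random.default_rng(seed)
    headings = np.cumsum(rng.normal(0.0, turn_sd, n_steps))
    steps = speed * dt * np.column_stack([np.cos(headings), np.sin(headings)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def apparent_stats(path, k, dt=1.0):
    """Apparent speed and angle change when the path is observed every k steps."""
    obs = path[::k]
    disp = np.diff(obs, axis=0)
    speed = np.linalg.norm(disp, axis=1) / (k * dt)
    ang = np.arctan2(disp[:, 1], disp[:, 0])
    turn = np.angle(np.exp(1j * np.diff(ang)))      # wrap angle changes to (-pi, pi]
    return speed, turn

path = crw(100_000)
for k in (1, 5, 20, 100):
    speed, turn = apparent_stats(path, k)
    print(f"sampling interval {k:>3}: mean apparent speed {speed.mean():.3f}, "
          f"angle-change SD {turn.std():.3f}")
```

As the sampling interval grows, the apparent speed drops (the straight-line displacement underestimates the path length) and the apparent angle-change distribution widens.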
Minimizing Glovebox Glove Breaches, Part III: Deriving Service Lifetimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cournoyer, M.E.; Wilson, K.V.; Maestas, M.M.
At the Los Alamos Plutonium Facility, various isotopes of plutonium along with other actinides are handled in a glovebox environment. Weapons-grade plutonium consists mainly of Pu-239. Pu-238 is another isotope used for heat sources. Pu-238 is more aggressive toward gloves because of its higher alpha-emitting activity (approximately 300 times more active than Pu-239), which modifies the change-out intervals for gloves. Optimization of the change-out intervals for gloves is fundamental, since the Nuclear Materials Technology (NMT) Division generates approximately 4 m³/yr of TRU waste from the disposal of glovebox gloves. To reduce the number of glovebox glove failures, the NMT Division proactively investigates processes and procedures that minimize glove failures. Aging studies have been conducted that correlate changes in mechanical (physical) properties with degradation chemistry. The present work derives glovebox glove change-out intervals based on mechanical data from thermally aged Hypalon® and Butasol® glove samples. Information from this study represents an important baseline in gauging the acceptable standards for polymeric gloves used in a laboratory glovebox environment and will be used later to account for the possible presence of dose-rate or synergistic effects in a 'combined environment'. In addition, excursions of contaminants into the operator's breathing zone and excess exposure to the radiological sources associated with unplanned breaches in the glovebox are reduced. (authors)
Zhang, Qing; Fung, Jeffrey Wing-Hong; Chan, Yat-Sun; Chan, Hamish Chi-Kin; Lin, Hong; Chan, Skiva; Yu, Cheuk-Man
2008-02-29
Cardiac resynchronization therapy (CRT) is an effective therapy for heart failure patients with electromechanical delay. Optimization of the atrioventricular interval (AVI) is a cardinal component of the benefits. However, it is unknown whether the AVI needs to be re-optimized during long-term follow-up. Thirty-one patients (66+/-11 years, 20 males) with sinus rhythm who received CRT underwent serial optimization of AVI at day 1, 3 months and during long-term follow-up by pulsed Doppler echocardiography (PDE). At long-term follow-up, the optimal AVI and cardiac output (CO) estimated by non-invasive impedance cardiography (ICG) were compared with those by PDE. The follow-up was 16+/-11 months. There was no significant difference in the mean optimal AVI when compared between any 2 time points among day 1 (99+/-30 ms), 3-month (97+/-28 ms) and long-term follow-up (94+/-28 ms). However, in individual patients, the optimal AVI remained unchanged in only 14 patients (44%), and was shortened in 12 (38%) and lengthened in 6 patients (18%). During long-term follow-up, although the mean optimal AVIs obtained by PDE and ICG (94+/-28 vs. 92+/-29 ms) were not different, a discrepancy was found in 14 patients (45%). For the same AVI, the CO measured by ICG was systematically higher than that by PDE (3.5+/-0.8 vs. 2.7+/-0.6 L/min, p<0.001). Optimization of AVI after CRT appears necessary during follow-up as it was readjusted in 55% of patients. Although AVI optimization by ICG was feasible, further studies are needed to confirm its role in optimizing AVI after CRT.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
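As a toy illustration of the SAA idea referenced in this fragment (not the budget-allocation analysis itself), the expectation in a stochastic program can be replaced by a sample average and the resulting deterministic problem optimized; the quadratic objective below is purely illustrative and has a closed-form sample-average minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic program: minimize E[(x - xi)^2] with xi ~ N(2, 1); the true optimum is x* = E[xi] = 2.
def saa_solution(n_samples):
    xi = rng.normal(2.0, 1.0, n_samples)
    # The sample-average objective (1/n) * sum (x - xi_i)^2 is minimized at the sample mean.
    return xi.mean()

for n in (10, 100, 10_000):
    print(f"SAA solution with n={n:>6}: {saa_solution(n):.4f}  (true optimum 2.0)")
```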
Wu, Yunqi; Hussain, Munir; Fassihi, Reza
2005-06-15
A simple spectrophotometric method for determining glucosamine release from a sustained release (SR) hydrophilic matrix tablet, based on reaction with ninhydrin, was developed, optimized and validated. The purple color (Ruhemann's purple) resulting from the reaction was stabilized and measured at 570 nm. The method optimization was essential as many procedural parameters influenced the accuracy of determination, including the ninhydrin concentration, reaction time, pH, reaction temperature, purple color stability period, and glucosamine/ninhydrin ratio. Glucosamine tablets (600 mg) with different hydrophilic polymers were formulated and manufactured on a rotary press. Dissolution studies were conducted (USP 26) using deionized water at 37+/-0.2 degrees C with paddle rotation of 50 rpm, and samples were removed manually at appropriate time intervals. Under the optimized reaction conditions, which appeared to be critical, glucosamine was quantitatively analyzed and the calibration curve in the range of 0.202-2.020 mg (r=0.9999) was constructed. The recovery rate of the developed method was 97.8-101.7% (n=6). Reproducible dissolution profiles were achieved from the dissolution studies performed on different glucosamine tablets. The developed method is easy to use, accurate and highly cost-effective for routine studies relative to HPLC and other techniques.
McBride, W. Scott; Wacker, Michael A.
2015-01-01
A test well was drilled by the City of Tallahassee to assess the suitability of the site for the installation of a new well for public water supply. The test well is in Leon County in north-central Florida. The U.S. Geological Survey delineated high-permeability zones in the Upper Floridan aquifer, using borehole-geophysical data collected from the open interval of the test well. A composite water sample was collected from the open interval during high-flow conditions, and three discrete water samples were collected from specified depth intervals within the test well during low-flow conditions. Water-quality, source tracer, and age-dating results indicate that the open interval of the test well produces water of consistently high quality throughout its length. The cavernous nature of the open interval makes it likely that the highly permeable zones are interconnected in the aquifer by secondary porosity features.
Shao, Jing; Fan, Liu-Yin; Cao, Cheng-Xi; Huang, Xian-Qing; Xu, Yu-Quan
2012-07-01
Interval free-flow zone electrophoresis (FFZE) has been used to suppress the sample band broadening that greatly hinders the development of free-flow electrophoresis (FFE). However, there has still been no quantitative study on the resolution increase achieved by interval FFZE. Herein, we compare bandwidths in the interval and continuous FFZE modes. A commercial dye containing methyl green and crystal violet was chosen to visualize the bandwidth. The comparative experiments were conducted under the same sample loading of the model dye (viz. 3.49, 1.75, 1.17, and 0.88 mg/h), the same running time (viz. 5, 10, 15, and 20 min), and the same flux ratio between sample and background buffer (= 10.64 × 10⁻³). Under the given conditions, the experiments demonstrated that (i) the band broadening in continuous mode was evidently caused by hydrodynamic factors, and (ii) the interval mode could clearly eliminate the hydrodynamic broadening present in continuous mode, greatly increasing the resolution of dye separation. Finally, interval FFZE was successfully used for the complete separation of two model antibiotics (herein pyoluteorin and phenazine-1-carboxylic acid coexisting in the fermentation broth of a new strain, Pseudomonas aeruginosa M18), demonstrating the feasibility of the interval FFZE mode for the separation of biomolecules. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Alvero, Alicia M.; Struss, Kristen; Rappaport, Eva
2008-01-01
Partial-interval (PIR), whole-interval (WIR), and momentary time sampling (MTS) estimates were compared against continuous measures of safety performance for three postural behaviors: feet, back, and shoulder position. Twenty-five samples of safety performance across five undergraduate students were scored using a second-by-second continuous…
Bioactivity studies on TiO₂-bearing Na₂O-CaO-SiO₂-B₂O₃ glasses.
Jagan Mohini, G; Sahaya Baskaran, G; Ravi Kumar, V; Piasecki, M; Veeraiah, N
2015-12-01
Soda lime silica borate glasses mixed with different concentrations of TiO2 were synthesized by the melt-quenching technique. As part of a study on the bioactivity of these glasses, the samples were immersed in simulated body fluid (SBF) solution for prolonged times (~21 days), during which weight loss and pH measurements were carried out at specific time intervals. XRD and SEM analyses of the post-immersed samples confirm the formation of a crystalline hydroxyapatite (HA) layer on the surface of the samples. To assess the role of TiO2 in the formation of the HA layer and the degradability of the samples, spectroscopic studies, viz. optical absorption and IR spectra of pre- and post-immersed samples, were carried out. The analysis of the degradability results together with the spectroscopic studies as a function of TiO2 concentration indicated that about 6.0 mol% TiO2 is the optimal concentration for achieving better bioactivity of these glasses. The presence of the maximal concentration of octahedral titanium ions in this glass, which facilitates the formation of the HA layer, is found to be the reason for this higher bioactivity. Copyright © 2015 Elsevier B.V. All rights reserved.
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
Analysis of single ion channel data incorporating time-interval omission and sampling
The, Yu-Kai; Timmer, Jens
2005-01-01
Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements and therefore the standard hidden Markov model is not adequate anymore. The notion of time-interval omission has been introduced where brief events are not detected. The developed, exact solutions to this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse versus rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred
2015-04-01
The Big Data era has also begun in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014, Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Also here, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change-related risks.
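As a small, concrete example of the confidence-interval construction step in the 'method subspace' sketched above, here is a percentile bootstrap interval for a mean; the skewed synthetic sample and all settings are illustrative, and this is not the author's optimal-estimation search itself.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.0, scale=1.5, size=60)     # illustrative, skewed sample

def bootstrap_ci(x, stat=np.mean, n_boot=5000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for a statistic of the sample x."""
    boot = np.array([stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(sample)
print(f"mean = {sample.mean():.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```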
Opatz, Chad C.; Dinicola, Richard S.
2018-05-21
Operable Unit 2, Area 8, at Naval Base Kitsap, Keyport is the site of a former chrome-plating facility that released metals (primarily chromium and cadmium), chlorinated volatile organic compounds, and petroleum compounds into the local environment. To ensure long-term protectiveness, as stipulated in the Fourth Five-Year Review for the site, Naval Facilities Engineering Command Northwest collaborated with the U.S. Environmental Protection Agency, the Washington State Department of Ecology, and the Suquamish Tribe to collect data to monitor the contamination left in place and to ensure the site does not pose a risk to human health or the environment. To support these efforts, refined information was needed on the interaction of fresh groundwater with seawater in response to the up to 13-ft tidal fluctuations at this nearshore site adjacent to Port Orchard Bay. The information was analyzed to meet the primary objective of this investigation, which was to determine the optimal time during the semi-diurnal and the neap-spring tidal cycles to sample groundwater for freshwater contaminants in Area 8 monitoring wells. Groundwater levels and specific conductance in five monitoring wells, along with marine water levels (tidal levels) in Port Orchard Bay, were monitored every 15 minutes over a 3-week period to determine how nearshore groundwater responds to tidal forcing. Time-series data were collected from October 24, 2017, to November 16, 2017, a period that included neap and spring tides. Vertical profiles of specific conductance were also measured once in the screened interval of each well prior to instrument deployment to determine if a freshwater/saltwater interface was present in the well at that particular time. The vertical profiles of specific conductance were measured only one time, during an ebbing tide, at approximately the top, middle, and bottom of the saturated thickness within the screened interval of each well. The landward-most well, MW8-8, was completely freshwater, while one of the most seaward wells, MW8-9, was completely saline. A distinct saltwater interface was measured in the three other shallow wells (MW8-11, MW8-12, and MW8-14), with the topmost groundwater fresh, underlain by higher-conductivity water. Lag times between minimum spring-tide level and minimum groundwater levels in the wells ranged from about 2 to 4.5 hours in the less-than-20-ft-deep wells screened across the water table, and the lag was about 7 hours for the single 48-ft-deep well screened below the water table. Those lag times were surprisingly long considering the wells are all located within 200 ft of the shoreline and the local geology is largely coarse-grained glacial outwash deposits. Various manmade subsurface features, such as slurry walls and backfilled excavations, likely influence and complicate the connectivity between seawater and groundwater. The specific-conductance time-series data showed clear evidence of substantial saltwater intrusion into the screened intervals of most shallow wells. Unexpectedly, the intrusion was associated with the neap part of the tidal cycle around November 13–16, when relatively low barometric pressure and high southerly winds led to the highest high and low tides measured during the monitoring period. The data consistently indicated that the groundwater had the lowest specific conductance (was least mixed with seawater) during the prior neap tides around October 30, the same period when the shallow groundwater levels were lowest.
Although the specific conductance response is somewhat different between wells, the data do suggest that it is the heights of the actual high-high and low-low tides, regardless of whether they occur during the neap or the spring part of the cycle, that allow seawater intrusion into the nearshore aquifer at Area 8. With all the data taken into consideration, the optimal time for sampling the shallow monitoring wells at Area 8 would be centered on a 2–5-hour period following the predicted low-low tide during the neap tide, with due consideration of local atmospheric pressure and wind conditions that have the potential to generate tides substantially higher than those predicted from lunar-solar tidal forces. The optimal time for sampling the deeper monitoring wells at Area 8 would be during the 6–8-hour period following a predicted low-low tide, also during the neap part of the tidal cycle. The specific time window to sample each well following a low tide can be found in table 5. Those periods are when groundwater in the wells is most fresh and least diluted by seawater intrusion. In addition to timing, consideration should be given to collecting undisturbed samples from the top of the screened interval (or the top of the water table if it is below the top of the interval) to best characterize contaminant concentrations in freshwater. A downhole conductivity probe could be used to identify the saltwater interface, above which would be the ideal depth for sampling.
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we find that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
Owen, Lauren; Scholey, Andrew B; Finnegan, Yvonne; Hu, Henglong; Sünram-Lea, Sandra I
2012-04-01
Previous research has identified a number of factors that appear to moderate the behavioural response to glucose administration. These include physiological state, dose, types of cognitive tasks used and level of cognitive demand. Another potential moderating factor is the length of the fasting interval prior to a glucose load. Therefore, we aimed to examine the effect of glucose dose and fasting interval on mood and cognitive function. The current study utilised a double-blind, placebo-controlled, balanced, six-period crossover design to examine potential interactions between the length of the fasting interval (2 versus 12 hours) and the optimal dose for cognition enhancement. Results demonstrated that the higher dose (60 g) increased working memory performance following an overnight fast, whereas the lower dose (25 g) enhanced working memory performance following a 2-h fast. The data suggest that the optimal glucose dosage may differ under different conditions of depleted blood glucose resources. In addition, glucoregulation was observed to be a moderating factor. However, further research is needed to develop a model of the moderating and mediating factors under which glucose facilitation is best achieved.
A generic hydrological model for a green roof drainage layer.
Vesuviano, Gianni; Stovin, Virginia
2013-01-01
A rainfall simulator of length 5 m and width 1 m was used to supply constant-intensity and largely spatially uniform water inflow events to 100 different configurations of commercially available green roof drainage layer and protection mat. The runoff from each inflow event was collected and sampled at one-second intervals. Time-series runoff responses were subsequently produced for each of the tested configurations, using the average response of three repeat tests. Runoff models, based on storage routing (dS/dt = I - Q) and a power-law relationship between storage and runoff (Q = kS^n), and incorporating a delay parameter, were created. The parameters k, n and delay were optimized to best fit each of the runoff responses individually. The range and pattern of optimized parameter values were analysed with respect to roof and event configuration. An analysis was performed to determine the sensitivity of the shape of the runoff profile to changes in parameter values. There appears to be potential to consolidate values of n by roof slope and drainage component material.
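A minimal numerical sketch of the routing model described above, integrating dS/dt = I - Q with Q = kS^n by explicit Euler and applying a pure delay to the outflow; the inflow pulse, k, n, and delay values are illustrative placeholders rather than fitted parameters from the study.

```python
import numpy as np

def route(inflow, dt=1.0, k=0.05, n=1.5, delay_steps=10):
    """Storage-routing runoff model: dS/dt = I - Q, Q = k * S**n, outflow delayed by delay_steps."""
    S, Q = 0.0, np.zeros(len(inflow))
    for t, I in enumerate(inflow):
        Q[t] = k * S ** n
        S = max(S + dt * (I - Q[t]), 0.0)
    return np.concatenate([np.zeros(delay_steps), Q])[: len(inflow)]   # shift outflow by the delay

# Constant-intensity inflow event of 30 time steps followed by a recession period.
inflow = np.concatenate([np.full(30, 0.5), np.zeros(90)])
runoff = route(inflow)
print("peak runoff:", runoff.max().round(3), "at step", int(runoff.argmax()))
```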
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process in which a parameter space is searched to locate the set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will carry some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals to the reconstructed parameters and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
Intertrial interval duration and learning in autistic children.
Koegel, R L; Dunlap, G; Dyer, K
1980-01-01
This study investigated the influence of intertrial interval duration on the performance of autistic children during teaching situations. The children were taught under the same conditions existing in their regular programs, except that the length of time between trials was systematically manipulated. With both multiple baseline and repeated reversal designs, two lengths of intertrial interval were employed: short intervals, with the SD for any given trial presented approximately one second following the reinforcer for the previous trial, versus long intervals, with the SD presented four or more seconds following the reinforcer for the previous trial. The results showed that: (1) the short intertrial intervals always produced higher levels of correct responding than the long intervals; and (2) there were improving trends in performance and rapid acquisition with the short intertrial intervals, in contrast to minimal or no change with the long intervals. The results are discussed in terms of utilizing information about child and task characteristics when selecting optimal intervals. The data suggest that manipulations made between trials have a large influence on autistic children's learning. PMID:7364701
NASA Astrophysics Data System (ADS)
Fu, Z. H.; Zhao, H. J.; Wang, H.; Lu, W. T.; Wang, J.; Guo, H. C.
2017-11-01
Economic restructuring, water resources management, population planning and environmental protection are subject to the inner uncertainties of a compound system whose objectives are competing alternatives. Optimization models and water quality models are usually used to solve problems in a single aspect. To overcome the uncertainty and coupling in regional planning and management, an interval fuzzy program combined with a water quality model has been developed in this study to obtain the absolutely "optimal" solution. The model is a hybrid methodology of interval parameter programming (IPP), fuzzy programming (FP), and a general one-dimensional water quality model. The method extends the traditional interval parameter fuzzy programming method by integrating a water quality model into the optimization framework. Meanwhile, water resources carrying capacity, an abstract concept, has been transformed into a specific and calculable index. Besides, unlike many past studies of water resource management, population has been considered as a significant factor. The results suggest that the methodology is applicable for reflecting the complexities of regional planning and management systems within the planning period. Government policy makers could establish an effective industrial structure, water resources utilization patterns and population planning, and better understand the tradeoffs among economic, water resources, population and environmental objectives.
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-05
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. A back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model was achieved with Rp=0.9180 and RMSEP=2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for the prediction of sensory quality in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
Ochi, Kento; Kamiura, Moto
2015-09-01
A multi-armed bandit problem is a search problem in which a learning agent must select the optimal arm among multiple slot machines generating random rewards. The UCB algorithm is one of the most popular methods for solving multi-armed bandit problems. It achieves logarithmic regret performance by coordinating the balance between exploration and exploitation. Since the introduction of UCB algorithms, researchers have known empirically that optimistic value functions exhibit good performance in multi-armed bandit problems. The terms optimistic or optimism might suggest that the value function is sufficiently larger than the sample mean of rewards. The original definition of the UCB algorithm is focused on the optimization of regret, and it is not directly based on the optimism of a value function. We need to consider why optimism yields good performance in multi-armed bandit problems. In the present article, we propose a new method, called the Overtaking method, for solving multi-armed bandit problems. The value function of the proposed method is defined as an upper bound of a confidence interval with respect to an estimator of the expected value of reward: the value function asymptotically approaches the expected value of reward from the upper bound. If the value function is larger than the expected value under this asymptote, then the learning agent is almost sure to be able to obtain the optimal arm. This structure is called the sand-sifter mechanism, in which the value functions of suboptimal arms do not regrow. It means that the learning agent can play only the current best arm in each time step. Consequently, the proposed method achieves a high accuracy rate and low regret, and some of its value functions can outperform UCB algorithms. This study suggests the advantage of optimism of agents in uncertain environments using one of the simplest frameworks. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
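For context, a minimal sketch of the standard UCB1 index policy that the abstract positions the Overtaking method against (this is not the Overtaking method itself); the Gaussian rewards and arm means are illustrative assumptions.

```python
import numpy as np

def ucb1(means, horizon=10_000, seed=0):
    """UCB1: play each arm once, then pick the arm maximizing mean + sqrt(2 ln t / n)."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums, rewards = np.zeros(k), np.zeros(k), []
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                    # initial round-robin over arms
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        r = rng.normal(means[arm], 1.0)                    # Gaussian reward for simplicity
        counts[arm] += 1
        sums[arm] += r
        rewards.append(r)
    regret = horizon * max(means) - sum(rewards)
    return regret, counts

regret, counts = ucb1([0.1, 0.5, 0.55])
print("empirical regret:", round(regret, 1), "pulls per arm:", counts.astype(int))
```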
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce 10s of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.
1987-12-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, Daniel P.; Bielecki, Anthony; Zax, David B.; Zilm, Kurt W.; Pines, Alexander
1987-01-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques.
Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.
Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L
2005-12-01
To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV) and the results compared with a non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 × 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. The induced OH sample variance mean (LSI) was 43× the natural OH sample variance mean. The same relationship for the SSI was 12×. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance means compared with controls and natural OH.
Thoresen, Stein I; Arnemo, Jon M; Liberg, Olof
2009-06-01
Scandinavian free-ranging wolves (Canis lupus) are endangered, such that laboratory data to assess their health status is increasingly important. Although wolves have been studied for decades, most biological information comes from captive animals. The objective of the present study was to establish reference intervals for 30 clinical chemical and 8 hematologic analytes in Scandinavian free-ranging wolves. All wolves were tracked and chemically immobilized from a helicopter before examination and blood sampling in the winter of 7 consecutive years (1998-2004). Seventy-nine blood samples were collected from 57 gray wolves, including 24 juveniles (24 samples), 17 adult females (25 samples), and 16 adult males (30 samples). Whole blood and serum samples were stored at refrigeration temperature for 1-3 days before hematologic analyses and for 1-5 days before serum biochemical analyses. Reference intervals were calculated as 95% confidence intervals except for juveniles where the minimum and maximum values were used. Significant differences were observed between adult and juvenile wolves for RBC parameters, alkaline phosphatase and amylase activities, and total protein, albumin, gamma-globulins, cholesterol, creatinine, calcium, chloride, magnesium, phosphate, and sodium concentrations. Compared with published reference values for captive wolves, reference intervals for free-ranging wolves reflected exercise activity associated with capture (higher creatine kinase activity, higher glucose concentration), and differences in nutritional status (higher urea concentration).
Birth Spacing of Pregnant Women in Nepal: A Community-Based Study.
Karkee, Rajendra; Lee, Andy H
2016-01-01
Optimal birth spacing has health advantages for both mother and child. In developing countries, shorter birth intervals are common and associated with social, cultural, and economic factors, as well as a lack of family planning. This study investigated the first birth interval after marriage and the preceding interbirth interval in Nepal. A community-based prospective cohort study was conducted in the Kaski district of Nepal. Information on birth spacing, demographic, and obstetric characteristics was obtained from 701 pregnant women using a structured questionnaire. Logistic regression analyses were performed to ascertain factors associated with short birth spacing. About 39% of primiparous women gave birth to their first child within 1 year of marriage, and 23% of multiparous women had short preceding interbirth intervals (<24 months). The average birth spacing among the multiparous group was 44.9 (SD 21.8) months. Overall, short birth spacing appeared to be inversely associated with advancing maternal age. For the multiparous group, Janajati and lower caste women, and those whose newborn was female, were more likely to have short birth spacing. The preceding interbirth interval was relatively long in the Kaski district of Nepal and tended to be associated with maternal age, caste, and sex of the newborn infant. Optimal birth spacing programs should target Janajati and lower caste women, along with promotion of gender equality in society.
Tsukerman, B M; Finkel'shteĭn, I E
1987-07-01
A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conduction disorders. The distribution of absolute R-R duration values and the relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of genuinely sinus origin remain in the sample. The algorithm has been developed into a programme for the Electronica NC-80 microcomputer. It operates reliably even in cases of complex combined rhythm and conduction disorders.
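A rough sketch in the spirit of the two-step exclusion described above, assuming a simple robust rule (distance from the median in units of the median absolute deviation) rather than the authors' exact criteria, which are not reproduced here; the example R-R series is illustrative.

```python
import numpy as np

def filter_rr(rr_ms, n_mad=3.5, passes=2):
    """Iteratively drop R-R intervals far from the median of the remaining sample."""
    kept = np.asarray(rr_ms, dtype=float)
    for _ in range(passes):
        med = np.median(kept)
        mad = np.median(np.abs(kept - med)) or 1.0      # guard against a zero MAD
        kept = kept[np.abs(kept - med) <= n_mad * mad]
    return kept

rr = [812, 805, 820, 1630, 798, 410, 815, 808, 822, 799]   # ms; two anomalous beats included
print(filter_rr(rr))
```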
Lai, Keke; Kelley, Ken
2011-06-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers. © 2011 American Psychological Association
Patient experience and quality of urologic cancer surgery in US hospitals.
Shirk, Joseph D; Tan, Hung-Jui; Hu, Jim C; Saigal, Christopher S; Litwin, Mark S
2016-08-15
Care interactions as perceived by patients and families are increasingly viewed as both an indicator and lever for high-value care. To promote patient-centeredness and motivate quality improvement, payers have begun tying reimbursement with related measures of patient experience. Accordingly, the authors sought to determine whether such data correlate with outcomes among patients undergoing surgery for genitourinary cancer. The authors used the Nationwide Inpatient Sample and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) data from 2009 through 2011. They identified hospital admissions for cancer-directed prostatectomy, nephrectomy, and cystectomy, and measured mortality, hospitalization length, discharge disposition, and complications. Mixed effects models were used to compare the likelihood of selected outcomes between the top and bottom tercile hospitals adjusting for patient and hospital characteristics. Among a sample of 46,988 encounters, the authors found small differences in patient age, race, income, comorbidity, cancer type, receipt of minimally invasive surgery, and procedure acuity according to HCAHPS tercile (P<.001). Hospital characteristics also varied with respect to ownership, teaching status, size, and location (P<.001). Compared with patients treated in low-performing hospitals, patients treated in high-performing hospitals less often faced prolonged hospitalization (odds ratio, 0.77; 95% confidence interval, 0.64-0.92) or nursing-sensitive complications (odds ratio, 0.85; 95% confidence interval, 0.72-0.99). No difference was found with regard to inpatient mortality, other complications, and discharge disposition (P>.05). Using Nationwide Inpatient Sample and HCAHPS data, the authors found a limited association between patient experience and surgical outcomes. For urologic cancer surgery, patient experience may be optimally viewed as an independent quality domain rather than a mechanism with which to improve surgical outcomes. Cancer 2016;122:2571-8. © 2016 American Cancer Society.
2011-01-01
Background There is substantial variation in reported reference intervals for canine plasma creatinine among veterinary laboratories, thereby influencing the clinical assessment of analytical results. The aims of the study were to determine the inter- and intra-laboratory variation in plasma creatinine among 10 veterinary laboratories, and to compare results from each laboratory with the upper limit of its reference interval. Methods Samples were collected from 10 healthy dogs, 10 dogs with expected intermediate plasma creatinine concentrations, and 10 dogs with azotemia. Overlap was observed for the first two groups. The 30 samples were divided into 3 batches and shipped in random order by postal delivery for plasma creatinine determination. Statistical testing was performed in accordance with ISO standard methodology. Results Inter- and intra-laboratory variation was clinically acceptable as plasma creatinine values for most samples were usually of the same magnitude. A few extreme outliers caused three laboratories to fail statistical testing for consistency. Laboratory sample means above or below the overall sample mean did not unequivocally reflect high or low reference intervals in that laboratory. Conclusions In spite of close analytical results, further standardization among laboratories is warranted. The discrepant reference intervals seem to largely reflect different populations used in establishing the reference intervals, rather than analytical variation due to different laboratory methods. PMID:21477356
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
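A brief sketch of the small-sample estimators compared above, applied to an illustrative skewed sample of 27 synthetic values: nonparametric percentiles, mean +/- 2 SD on native values, and mean +/- 2 SD after Box-Cox transformation back-transformed to the original scale. The synthetic data merely stand in for real creatinine values and the numbers carry no clinical meaning.

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(3)
creat = rng.lognormal(mean=4.4, sigma=0.25, size=27)    # illustrative skewed "creatinine" sample

# Nonparametric 2.5th / 97.5th percentiles (unstable for n = 27, as the abstract notes).
print("percentiles     :", np.percentile(creat, [2.5, 97.5]).round(1))

# Mean +/- 2 SD on native values.
print("native mean±2SD :", np.array([creat.mean() - 2 * creat.std(ddof=1),
                                      creat.mean() + 2 * creat.std(ddof=1)]).round(1))

# Mean +/- 2 SD after Box-Cox transformation, back-transformed to the original scale.
z, lam = stats.boxcox(creat)
lims = np.array([z.mean() - 2 * z.std(ddof=1), z.mean() + 2 * z.std(ddof=1)])
print("Box-Cox mean±2SD:", special.inv_boxcox(lims, lam).round(1))
```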
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved MRF. Copyright © 2017 Elsevier Inc. All rights reserved.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is given for deriving the weighted least squares estimators of the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are also provided.
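A compact numpy sketch of the same ingredients: weighted least squares estimation, a confidence interval for an expected value, and a prediction interval for the mean of a future sample. It uses synthetic data with known, unequal error variances and large-sample normal quantiles, which are assumptions; the report's exact formulas may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 10.0, n)
sigma = 0.5 + 0.2 * x                             # known, unequal error standard deviations
y = 2.0 + 1.5 * x + rng.normal(0.0, sigma)

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / sigma**2)                       # weights = inverse error variances

# Weighted least squares: beta = (X' W X)^{-1} X' W y, with covariance (X' W X)^{-1}.
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)
cov_beta = np.linalg.inv(XtWX)

z = stats.norm.ppf(0.975)
x0 = np.array([1.0, 5.0])                         # predictor vector of interest (intercept, x = 5)
fit = x0 @ beta
se_fit = np.sqrt(x0 @ cov_beta @ x0)
ci = (fit - z * se_fit, fit + z * se_fit)         # confidence interval for the expected value

m, sigma_new = 5, 1.0                             # future sample of m observations with error SD sigma_new
se_pred = np.sqrt(x0 @ cov_beta @ x0 + sigma_new**2 / m)
pi = (fit - z * se_pred, fit + z * se_pred)       # prediction interval for the future sample mean
print("beta:", beta, "CI:", ci, "PI:", pi)
```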
An approach to solve group-decision-making problems with ordinal interval numbers.
Fan, Zhi-Ping; Liu, Yang
2010-10-01
The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
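Under the stated assumption that every ranking ordinal is uniformly and independently distributed over its interval, the expected possibility degree that one alternative is ranked at least as well as another can be computed by direct enumeration. The sketch below uses one common tie convention (weight 1/2) and treats smaller ordinals as better; the paper's exact definition of the possibility degree may differ.

```python
from itertools import product

def possibility_degree(a, b):
    """P(alternative with ordinal interval a is ranked at least as well as one with interval b),
    where a = (a_lo, a_hi) and b = (b_lo, b_hi) are inclusive integer ranking positions,
    each ordinal uniform on its interval and independent; ties count 1/2."""
    a_vals = range(a[0], a[1] + 1)
    b_vals = range(b[0], b[1] + 1)
    score = sum(1.0 if ai < bi else 0.5 if ai == bi else 0.0
                for ai, bi in product(a_vals, b_vals))
    return score / (len(a_vals) * len(b_vals))

# Alternative A ranked somewhere between 1st and 3rd, alternative B between 2nd and 5th.
p_ab = possibility_degree((1, 3), (2, 5))
p_ba = possibility_degree((2, 5), (1, 3))
print(p_ab, p_ba, p_ab + p_ba)   # the two degrees are complementary and sum to 1
```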
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
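In 1D the anisotropic and isotropic TV penalties coincide, so the sequential recipe is easy to check numerically: solve the unconstrained TV-penalized denoising problem, clip onto the uniform interval, and compare with solving the constrained problem directly. The sketch below uses cvxpy as a generic convex solver; that choice, the signal, and the parameter values are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, lam = 200, 2.0
truth = np.repeat([0.0, 3.0, 1.0, 4.0], n // 4)          # piecewise-constant test signal
y = truth + rng.normal(0.0, 0.5, n)
lo, hi = 0.0, 3.5                                        # uniform interval constraints on all unknowns

def tv_denoise(y, lam, lo=None, hi=None):
    """Anisotropic-TV-penalized denoising, optionally with box (interval) constraints."""
    x = cp.Variable(len(y))
    objective = cp.Minimize(0.5 * cp.sum_squares(x - y) + lam * cp.norm1(cp.diff(x)))
    constraints = [] if lo is None else [x >= lo, x <= hi]
    cp.Problem(objective, constraints).solve()
    return x.value

x_unconstrained = tv_denoise(y, lam)                     # step 1: unconstrained solution
x_sequential = np.clip(x_unconstrained, lo, hi)          # step 2: threshold onto the interval
x_direct = tv_denoise(y, lam, lo, hi)                    # constrained problem solved directly

# For uniform interval constraints the two should agree up to solver tolerance.
print("max |sequential - direct| =", np.max(np.abs(x_sequential - x_direct)))
```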
Optimal go/no-go ratios to maximize false alarms.
Young, Michael E; Sutherland, Steven C; McCoy, Anthony W
2018-06-01
Despite the ubiquity of go/no-go tasks in the study of behavioral inhibition, there is a lack of evidence regarding the impact of key design characteristics, including the go/no-go ratio, intertrial interval, and number of types of go stimuli, on the production of different response classes of central interest. In the present study we sought to empirically determine the optimal conditions to maximize the production of a rare outcome of considerable interest to researchers: false alarms. As predicted, the shortest intertrial intervals (450 ms), intermediate go/no-go ratios (2:1 to 4:1), and the use of multiple types of go stimuli produced the greatest numbers of false alarms. These results are placed within the context of behavioral changes during learning.
Exact intervals and tests for median when one sample value possibly an outlier
NASA Technical Reports Server (NTRS)
Keller, G. J.; Walsh, J. E.
1973-01-01
Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them. Either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations, and can also select the wrong one of the two above situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations are satisfactory under either of the two situations.
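The kind of interval the abstract points to, a two-sided distribution-free confidence interval for the median built from a symmetric pair of order statistics that avoids the extreme values, has exact coverage given by the binomial distribution. A short illustration with synthetic data:

```python
import numpy as np
from scipy import stats

def median_ci(x, conf=0.95):
    """Distribution-free CI for the population median from symmetric order statistics.
    With sorted data x[0..n-1], the interval [x[k], x[n-1-k]] covers the median with
    probability P(k+1 <= B <= n-k-1), where B ~ Binomial(n, 1/2)."""
    x = np.sort(np.asarray(x))
    n = len(x)
    for k in range((n - 1) // 2, -1, -1):            # widen the interval until coverage >= conf
        coverage = stats.binom.cdf(n - k - 1, n, 0.5) - stats.binom.cdf(k, n, 0.5)
        if coverage >= conf:
            return x[k], x[n - 1 - k], coverage
    raise ValueError("sample too small for the requested confidence level")

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=25)
lo, hi, cov = median_ci(data)
# For n = 25 this uses the 8th and 18th order statistics, not the smallest and largest.
print(f"95% CI for the median: ({lo:.2f}, {hi:.2f}), exact coverage {cov:.3f}")
```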
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurley, D.F.; Whitehouse, J.M.
A dedicated low-flow groundwater sample collection system was designed for implementation in a post-closure ACL monitoring program at the Yaworski Lagoon NPL site in Canterbury, Connecticut. The system includes dedicated bladder pumps with intake ports located in the screened interval of the monitoring wells. This sampling technique was implemented in the spring of 1993. The system was designed to simultaneously obtain samples directly from the screened interval of nested wells in three distinct water bearing zones. Sample collection is begun upon stabilization of field parameters. Other than line volume, no prior purging of the well is required. It was found that dedicated low-flow sampling from the screened interval provides a method of representative sample collection without the bias of suspended solids introduced by traditional techniques of pumping and bailing. Analytical data indicate that measured chemical constituents are representative of groundwater migrating through the screened interval. Upon implementation of the low-flow monitoring system, analytical results exhibited a decrease in concentrations of some organic compounds and metals. The system has also proven to be a cost effective alternative to pumping and bailing which generate large volumes of purge water requiring containment and disposal.
Huffman, Jeff C.; Beale, Eleanor E.; Celano, Christopher M.; Beach, Scott R.; Belcher, Arianna M.; Moore, Shannon V.; Suarez, Laura; Motiwala, Shweta R.; Gandhi, Parul U.; Gaggin, Hanna; Januzzi, James L.
2015-01-01
Background: Positive psychological constructs, such as optimism, are associated with beneficial health outcomes. However, no study has separately examined the effects of multiple positive psychological constructs on behavioral, biological, and clinical outcomes after an acute coronary syndrome (ACS). Accordingly, we aimed to investigate associations of baseline optimism and gratitude with subsequent physical activity, prognostic biomarkers, and cardiac rehospitalizations in post-ACS patients. Methods and Results: Participants were enrolled during admission for ACS and underwent assessments at baseline (2 weeks post-ACS) and follow-up (6 months later). Associations between baseline positive psychological constructs and subsequent physical activity/biomarkers were analyzed using multivariable linear regression. Associations between baseline positive constructs and 6-month rehospitalizations were assessed via multivariable Cox regression. Overall, 164 participants enrolled and completed the baseline 2-week assessments. Baseline optimism was significantly associated with greater physical activity at 6 months (n=153; β=102.5; 95% confidence interval [13.6-191.5]; p=.024), controlling for baseline activity and sociodemographic, medical, and negative psychological covariates. Baseline optimism was also associated with lower rates of cardiac readmissions at 6 months (N=164), controlling for age, gender, and medical comorbidity (hazard ratio=.92; 95% confidence interval [.86-.98]; p=.006). There were no significant relationships between optimism and biomarkers. Gratitude was minimally associated with post-ACS outcomes. Conclusions: Post-ACS optimism, but not gratitude, was prospectively and independently associated with superior physical activity and fewer cardiac readmissions. Whether interventions that target optimism can successfully increase optimism or improve cardiovascular outcomes in post-ACS patients is not yet known, but can be tested in future studies. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifier: NCT01709669. PMID:26646818
Optimism and Cause-Specific Mortality: A Prospective Cohort Study.
Kim, Eric S; Hagan, Kaitlin A; Grodstein, Francine; DeMeo, Dawn L; De Vivo, Immaculata; Kubzansky, Laura D
2017-01-01
Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses' Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazard models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Huffman, Jeff C; Beale, Eleanor E; Celano, Christopher M; Beach, Scott R; Belcher, Arianna M; Moore, Shannon V; Suarez, Laura; Motiwala, Shweta R; Gandhi, Parul U; Gaggin, Hanna K; Januzzi, James L
2016-01-01
Positive psychological constructs, such as optimism, are associated with beneficial health outcomes. However, no study has separately examined the effects of multiple positive psychological constructs on behavioral, biological, and clinical outcomes after an acute coronary syndrome (ACS). Accordingly, we aimed to investigate associations of baseline optimism and gratitude with subsequent physical activity, prognostic biomarkers, and cardiac rehospitalizations in post-ACS patients. Participants were enrolled during admission for ACS and underwent assessments at baseline (2 weeks post-ACS) and follow-up (6 months later). Associations between baseline positive psychological constructs and subsequent physical activity/biomarkers were analyzed using multivariable linear regression. Associations between baseline positive constructs and 6-month rehospitalizations were assessed via multivariable Cox regression. Overall, 164 participants enrolled and completed the baseline 2-week assessments. Baseline optimism was significantly associated with greater physical activity at 6 months (n=153; β=102.5; 95% confidence interval, 13.6-191.5; P=0.024), controlling for baseline activity and sociodemographic, medical, and negative psychological covariates. Baseline optimism was also associated with lower rates of cardiac readmissions at 6 months (n=164), controlling for age, sex, and medical comorbidity (hazard ratio, 0.92; 95% confidence interval, [0.86-0.98]; P=0.006). There were no significant relationships between optimism and biomarkers. Gratitude was minimally associated with post-ACS outcomes. Post-ACS optimism, but not gratitude, was prospectively and independently associated with superior physical activity and fewer cardiac readmissions. Whether interventions that target optimism can successfully increase optimism or improve cardiovascular outcomes in post-ACS patients is not yet known, but can be tested in future studies. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01709669. © 2015 American Heart Association, Inc.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method with decomposition day and of sampling time with decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
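The core effect is easy to demonstrate numerically: averaging the oversampled input over each symbol interval and clocking out one sample per symbol reduces the noise bandwidth, improving the SNR by roughly the decimation factor. A sketch with a plain boxcar average; the patent describes weighting by stored FIR coefficients, so uniform weights are a simplification.

```python
import numpy as np

rng = np.random.default_rng(7)
n_symbols, n_per_symbol = 2000, 16          # input rate / output rate = 16 samples per symbol
symbols = rng.choice([-1.0, 1.0], n_symbols)
noise_sd = 2.0

# Oversampled received signal: each symbol held for n_per_symbol samples, plus white noise.
x = np.repeat(symbols, n_per_symbol) + rng.normal(0.0, noise_sd, n_symbols * n_per_symbol)

# Pre-averager: average the samples within each symbol interval, output one sample per symbol.
y = x.reshape(n_symbols, n_per_symbol).mean(axis=1)

snr_in_db = 10 * np.log10(1.0 / noise_sd**2)
snr_out_db = 10 * np.log10(1.0 / np.var(y - symbols))
print(f"input SNR  ~ {snr_in_db:5.1f} dB")
print(f"output SNR ~ {snr_out_db:5.1f} dB  (expected gain ~ {10*np.log10(n_per_symbol):.1f} dB)")
```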
Reference values for 27 clinical chemistry tests in 70-year-old males and females.
Carlsson, Lena; Lind, Lars; Larsson, Anders
2010-01-01
Reference values are usually defined based on blood samples from healthy men or nonpregnant women in the age range of 20-50 years. These values are not optimal for elderly patients, as many biological markers change over time and adequate reference values are important for correct clinical decisions. To validate NORIP (Nordic Reference Interval Project) reference values in a 70-year-old population. We studied 27 frequently used laboratory tests. The 2.5th and 97.5th percentiles for these markers were calculated according to the recommendations of the International Federation of Clinical Chemistry on the statistical treatment of reference values. Reference values are reported for plasma alanine aminotransferase, albumin, alkaline phosphatase, pancreas amylase, apolipoprotein A1, apolipoprotein B, aspartate aminotransferase, bilirubin, calcium, chloride, cholesterol, creatinine, creatine kinase, C-reactive protein, glucose, gamma-glutamyltransferase, HDL-cholesterol, iron, lactate dehydrogenase, LDL-cholesterol, magnesium, phosphate, potassium, sodium, transferrin, triglycerides, urate and urea. Reference values calculated from the whole population and a subpopulation without cardiovascular disease showed strong concordance. Several of the reference interval limits were outside the 90% CI of a Scandinavian population (NORIP). 2009 S. Karger AG, Basel.
A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.
Kaplan, David; Chen, Jianshen
2012-07-01
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
Gremeaux, Vincent; Drigny, Joffrey; Nigam, Anil; Juneau, Martin; Guilbeault, Valérie; Latour, Elise; Gayda, Mathieu
2012-11-01
The aim of this study was to study the impact of a combined long-term lifestyle and high-intensity interval training intervention on body composition, cardiometabolic risk, and exercise tolerance in overweight and obese subjects. Sixty-two overweight and obese subjects (53.3 ± 9.7 yrs; mean body mass index, 35.8 ± 5 kg/m(2)) were retrospectively identified at their entry into a 9-mo program consisting of individualized nutritional counselling, optimized high-intensity interval exercise, and resistance training two to three times a week. Anthropometric measurements, cardiometabolic risk factors, and exercise tolerance were measured at baseline and program completion. Adherence rate was 97%, and no adverse events occurred with high-intensity interval exercise training. Exercise training was associated with a weekly energy expenditure of 1582 ± 284 kcal. Clinically and statistically significant improvements were observed for body mass (-5.3 ± 5.2 kg), body mass index (-1.9 ± 1.9 kg/m(2)), waist circumference (-5.8 ± 5.4 cm), and maximal exercise capacity (+1.26 ± 0.84 metabolic equivalents) (P < 0.0001 for all parameters). Total fat mass and trunk fat mass, lipid profile, and triglyceride/high-density lipoprotein ratio were also significantly improved (P < 0.0001). At program completion, the prevalence of metabolic syndrome was reduced by 32.5% (P < 0.05). Independent predictors of being a responder to body mass and waist circumference loss were baseline body mass index and resting metabolic rate; those for body mass index decrease were baseline waist circumference and triglyceride/high-density lipoprotein cholesterol ratio. A long-term lifestyle intervention with optimized high-intensity interval exercise improves body composition, cardiometabolic risk, and exercise tolerance in obese subjects. This intervention seems safe, efficient, and well tolerated and could improve adherence to exercise training in this population.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. Discretization of reservoir storage volume, an important state variable, has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers. In this approach, instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having a much finer discretized space. It is believed that this advantage in the fuzzy SDP model is due to the smooth transitions between storage intervals which benefit from soft boundaries.
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. 
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
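A minimal example of the baseline approach reviewed here, a least-squares PCE fit from Monte Carlo samples in one dimension with Legendre polynomials for a uniform input, gives a concrete sense of the oversampling ratio and the regression step. The model, order, and sample sizes below are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(11)

def model(x):                              # the "expensive" model to be surrogated
    return np.exp(0.7 * x) * np.sin(2.5 * x)

p = 6                                      # PCE order
n = 4 * (p + 1)                            # oversampling ratio of 4
xi = rng.uniform(-1.0, 1.0, n)             # Monte Carlo samples of the uniform input on [-1, 1]

# Least-squares regression on the Legendre basis (orthogonal for a uniform input on [-1, 1]).
Psi = legendre.legvander(xi, p)
coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Validate on an independent test set; the PCE mean is the zeroth coefficient.
xt = rng.uniform(-1.0, 1.0, 2000)
rel_err = (np.linalg.norm(legendre.legvander(xt, p) @ coef - model(xt))
           / np.linalg.norm(model(xt)))
print(f"relative validation error: {rel_err:.2e}, estimated output mean: {coef[0]:.4f}")
```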
Effects of age and recovery duration on peak power output during repeated cycling sprints.
Ratel, S; Bedu, M; Hennegrave, A; Doré, E; Duché, P
2002-08-01
The aim of the present study was to investigate the effects of age and recovery duration on the time course of cycling peak power and blood lactate concentration ([La]) during repeated bouts of short-term high-intensity exercise. Eleven prepubescent boys (9.6 +/- 0.7 yr), nine pubescent boys (15.0 +/- 0.7 yr) and ten men (20.4 +/- 0.8 yr) performed ten consecutive 10 s cycling sprints separated by either 30 s (R30), 1 min (R1), or 5 min (R5) passive recovery intervals against a friction load corresponding to 50 % of their optimal force (50 % Ffopt). Peak power produced at 50 % Ffopt (PP50) was calculated at each sprint including the flywheel inertia of the bicycle. Arterialized capillary blood samples were collected at rest and during the sprint exercises to measure the time course of [La]. In the prepubescent boys, whatever recovery intervals, PP50 remained unchanged during the ten 10 s sprint exercises. In the pubescent boys, PP50 decreased significantly by 18.5 % (p < 0.001) with R30 and by 15.3 % (p < 0.01) with R1 from the first to the tenth sprint but remained unchanged with R5. In the men, PP50 decreased respectively by 28.5 % (p < 0.001) and 11.3 % (p < 0.01) with R30 and R1 and slightly diminished with R5. For each recovery interval, the increase in blood [La] over the ten sprints was significantly lower in the prepubescent boys compared with the pubescent boys and the men. To conclude, the prepubescent boys sustained their PP50 during the ten 10 s sprint exercises with only 30 s recovery intervals. In contrast, the pubescent boys and the men needed 5 min recovery intervals. It was suggested that the faster recovery of PP50 in the prepubescent boys was due to their lower muscle glycolytic activity and their higher muscle oxidative capacity allowing a faster resynthesis in phosphocreatine.
NASA Astrophysics Data System (ADS)
Da Silva, A. C.; Hladil, J.; Chadimová, L.; Slavík, L.; Hilgen, F. J.; Bábek, O.; Dekkers, M. J.
2016-12-01
The Early Devonian geological time scale (base of the Devonian at 418.8 ± 2.9 Myr, Becker et al., 2012) suffers from poor age control, with associated large uncertainties between 2.5 and 4.2 Myr on the stage boundaries. Identifying orbital cycles from sedimentary successions can serve as a very powerful chronometer to test and, where appropriate, improve age models. Here, we focus on the Lochkovian and Pragian, the two lowermost Devonian stages. High-resolution magnetic susceptibility (χin - 5 to 10 cm sampling interval) and gamma ray spectrometry (GRS - 25 to 50 cm sampling interval) records were gathered from two main limestone sections, Požár-CS (118 m, spanning the Lochkov and Praha Formations) and Pod Barrandovem (174 m; Praha Formation), both in the Czech Republic. An additional section (Branžovy, 65 m, Praha Formation) was sampled for GRS (every 50 cm). The χin and GRS records are very similar, so χin variations are driven by variations in the samples' paramagnetic clay mineral content, reflecting changes in detrital input. Therefore, climatic variations are very likely captured in our records. Multiple spectral analysis and statistical techniques such as: Continuous Wavelet Transform, Evolutive Harmonic Analysis, Multi-taper method and Average Spectral Misfit, were used in concert to reach an optimal astronomical interpretation. The Požár-CS section shows distinctly varying sediment accumulation rates. The Lochkovian (essentially equivalent to the Lochkov Formation (Fm.)) is interpreted to include a total of nineteen 405 kyr eccentricity cycles, constraining its duration to 7.7 ± 2.8 Myr. The Praha Fm. includes fourteen 405 kyr eccentricity cycles in the three sampled sections, while the Pragian Stage only includes about four 405 kyr eccentricity cycles, thus exhibiting durations of 5.7 ± 0.6 Myr and 1.7 ± 0.7 Myr respectively. Because the Lochkov Fm. contains an interval with very low sediment accumulation rate and because the Praha Fm. was cross-validated in three different sections, the uncertainty in the duration of the Lochkov Fm. and the Lochkovian is larger than that of the Praha Fm. and Pragian. The new floating time scales for the Lochkovian and Pragian stages have an unprecedented precision, with reduction in the uncertainty by a factor of 1.7 for the Lochkovian and of ∼6 for the Pragian. Furthermore, longer orbital modulation cycles are also identified with periodicities of ∼1000 kyr and 2000-2500 kyr.
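The basic arithmetic of the approach (find the stratigraphic wavelength of the 405-kyr eccentricity cycle in a proxy depth series, count cycles, and multiply by 405 kyr to obtain a duration) can be illustrated with a plain periodogram on a synthetic magnetic-susceptibility record. The accumulation rate, section length, and noise level below are invented for illustration; the study itself relied on wavelet, multi-taper, and average-spectral-misfit analyses.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)

# Synthetic chi record: 120 m of section sampled every 5 cm, with 405-kyr cycles
# deposited at 1.5 cm/kyr, i.e. a stratigraphic wavelength of 405 * 0.015 m.
dz = 0.05                                   # sampling interval in metres
depth = np.arange(0.0, 120.0, dz)
true_wavelength = 405.0 * 0.015             # metres per 405-kyr eccentricity cycle
chi = np.sin(2 * np.pi * depth / true_wavelength) + 0.5 * rng.normal(size=depth.size)

# Periodogram in the depth domain (frequencies in cycles per metre).
freq, power = periodogram(chi, fs=1.0 / dz, detrend="linear")
f_peak = freq[np.argmax(power[1:]) + 1]     # skip the zero-frequency bin
wavelength_est = 1.0 / f_peak

n_cycles = (depth[-1] - depth[0]) / wavelength_est
print(f"estimated cycle wavelength: {wavelength_est:.2f} m")
print(f"~{n_cycles:.1f} eccentricity cycles -> floating duration ~ {n_cycles * 0.405:.2f} Myr")
```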
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to their use in Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to their use in Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
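The recipe in these two entries translates directly from spreadsheet functions to a few lines of Python: resample the reference results with replacement at least 500-1000 times, take the 2.5th and 97.5th percentiles of each resample, and summarize. A sketch with illustrative non-Gaussian data of roughly the sample size discussed (about 40 reference samples):

```python
import numpy as np

rng = np.random.default_rng(2015)
reference = rng.lognormal(mean=1.2, sigma=0.4, size=40)   # illustrative non-Gaussian reference results

n_boot = 1000
lower = np.empty(n_boot)
upper = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(reference, size=reference.size, replace=True)
    lower[b], upper[b] = np.percentile(resample, [2.5, 97.5])

# Bootstrap estimates of the reference limits, with simple percentile intervals for each limit.
print(f"lower reference limit ~ {lower.mean():.2f}  (90% of resamples: "
      f"{np.percentile(lower, 5):.2f}-{np.percentile(lower, 95):.2f})")
print(f"upper reference limit ~ {upper.mean():.2f}  (90% of resamples: "
      f"{np.percentile(upper, 5):.2f}-{np.percentile(upper, 95):.2f})")
```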
Huffman, Raegan L.
2002-01-01
Ground-water samples were collected in April 1999 at Naval Air Station Whidbey Island, Washington, with passive diffusion samplers and a submersible pump to compare concentrations of volatile organic compounds (VOCs) in water samples collected using the two sampling methods. Single diffusion samplers were installed in wells with 10-foot screened intervals, and multiple diffusion samplers were installed in wells with 20- to 40-foot screened intervals. The diffusion samplers were recovered after 20 days and the wells were then sampled using a submersible pump. VOC concentrations in the 10-foot screened wells in water samples collected with diffusion samplers closely matched concentrations in samples collected with the submersible pump. Analysis of VOC concentrations in samples collected from the 20- to 40-foot screened wells with multiple diffusion samplers indicated vertical concentration variation within the screened interval, whereas the analysis of VOC concentrations in samples collected with the submersible pump indicated mixing during pumping. The results obtained using the two sampling methods indicate that the samples collected with the diffusion samplers were comparable with and can be considerably less expensive than samples collected using a submersible pump.
Kishore, Amit; Vail, Andy; Majid, Arshad; Dawson, Jesse; Lees, Kennedy R; Tyrrell, Pippa J; Smith, Craig J
2014-02-01
Atrial fibrillation (AF) confers a high risk of recurrent stroke, although detection methods and definitions of paroxysmal AF during screening vary. We therefore undertook a systematic review and meta-analysis to determine the frequency of newly detected AF using noninvasive or invasive cardiac monitoring after ischemic stroke or transient ischemic attack. Prospective observational studies or randomized controlled trials of patients with ischemic stroke, transient ischemic attack, or both, who underwent any cardiac monitoring for a minimum of 12 hours, were included after electronic searches of multiple databases. The primary outcome was detection of any new AF during the monitoring period. We prespecified subgroup analysis of selected (prescreened or cryptogenic) versus unselected patients and according to duration of monitoring. A total of 32 studies were analyzed. The overall detection rate of any AF was 11.5% (95% confidence interval, 8.9%-14.3%), although the timing, duration, method of monitoring, and reporting of diagnostic criteria used for paroxysmal AF varied. Detection rates were higher in selected (13.4%; 95% confidence interval, 9.0%-18.4%) than in unselected patients (6.2%; 95% confidence interval, 4.4%-8.3%). There was substantial heterogeneity even within specified subgroups. Detection of AF was highly variable, and the review was limited by small sample sizes and marked heterogeneity. Further studies are required to inform patient selection, optimal timing, methods, and duration of monitoring for detection of AF/paroxysmal AF.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Gas, water, and oil production from Wattenberg field in the Denver Basin, Colorado
Nelson, Philip H.; Santus, Stephen L.
2011-01-01
Gas, oil, and water production data were compiled from selected wells in two tight gas reservoirs-the Codell-Niobrara interval, comprised of the Codell Sandstone Member of the Carlile Shale and the Niobrara Formation; and the Dakota J interval, comprised mostly of the Muddy (J) Sandstone of the Dakota Group; both intervals are of Cretaceous age-in the Wattenberg field in the Denver Basin of Colorado. Production from each well is represented by two samples spaced five years apart, the first sample typically taken two years after production commenced, which generally was in the 1990s. For each producing interval, summary diagrams and tables of oil-versus-gas production and water-versus-gas production are shown with fluid-production rates, the change in production over five years, the water-gas and oil-gas ratios, and the fluid type. These diagrams and tables permit well-to-well and field-to-field comparisons. Fields producing water at low rates (water dissolved in gas in the reservoir) can be distinguished from fields producing water at moderate or high rates, and the water-gas ratios are quantified. The Dakota J interval produces gas on a per-well basis at roughly three times the rate of the Codell-Niobrara interval. After five years of production, gas data from the second samples show that both intervals produce gas, on average, at about one-half the rate as the first sample. Oil-gas ratios in the Codell-Niobrara interval are characteristic of a retrograde gas and are considerably higher than oil-gas ratios in the Dakota J interval, which are characteristic of a wet gas. Water production from both intervals is low, and records in many wells are discontinuous, particularly in the Codell-Niobrara interval. Water-gas ratios are broadly variable, with some of the variability possibly due to the difficulty of measuring small production rates. Most wells for which water is reported have water-gas ratios exceeding the amount that could exist dissolved in gas at reservoir pressure and temperature. The Codell-Niobrara interval is reported to be overpressured (that is, pressure greater than hydrostatic) whereas the underlying Dakota J interval is underpressured (less than hydrostatic), demonstrating a lack of hydraulic communication between the two intervals despite their proximity over a broad geographical area. The underpressuring in the Dakota J interval has been attributed by others to outcropping strata east of the basin. We agree with this interpretation and postulate that the gas accumulation also may contribute to hydraulic isolation from outcrops immediately west of the basin.
Global Geopotential Modelling from Satellite-to-Satellite Tracking,
1981-10-01
measured range-rate sampled at regular intervals. The expansion of the potential has been truncated at degree n = 331, because little information on... averaging interval is 4 s, and sampling takes place every 4 s; if residual data are used, with respect to a reference model of specified accuracy, complete...
ERIC Educational Resources Information Center
Suzuki, Yuichi
2017-01-01
This study examined optimal learning schedules for second language (L2) acquisition of a morphological structure. Sixty participants studied the simple and complex morphological rules of a novel miniature language system so as to use them for oral production. They engaged in four training sessions in either shorter spaced (3.3-day interval) or…
Evaluation of listener-based anuran surveys with automated audio recording devices
Shearin, A. F.; Calhoun, A.J.K.; Loftin, C.S.
2012-01-01
Volunteer-based audio surveys are used to document long-term trends in anuran community composition and abundance. Current sampling protocols, however, are not region- or species-specific and may not detect relatively rare or audibly cryptic species. We used automated audio recording devices to record calling anurans during 2006–2009 at wetlands in Maine, USA. We identified species calling, chorus intensity, time of day, and environmental variables when each species was calling and developed logistic and generalized mixed models to determine the time interval and environmental variables that optimize detection of each species during peak calling periods. We detected eight of nine anurans documented in Maine. Individual recordings selected from the sampling period (0.5 h past sunset to 0100 h) described in the North American Amphibian Monitoring Program (NAAMP) detected fewer species than were detected in recordings from 30 min past sunset until sunrise. Time of maximum detection of presence and full chorusing for three species (green frogs, mink frogs, pickerel frogs) occurred after the NAAMP sampling end time (0100 h). The NAAMP protocol’s sampling period may result in omissions and misclassifications of chorus sizes for certain species. These potential errors should be considered when interpreting trends generated from standardized anuran audio surveys.
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
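A simplified stand-in for the proposed procedure: impute each missing sample variance from a gamma distribution fitted to the observed variances (the paper's gamma meta-regression would additionally use study-level covariates), compute a fixed-effect inverse-variance-weighted mean difference for each imputed data set, and combine the results with Rubin's rules. The data, arm sizes, and normal-quantile interval are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Study-level mean differences, total sample sizes, and pooled sample variances (NaN = not reported).
d = np.array([0.30, 0.10, 0.45, 0.25, 0.05, 0.40])
n = np.array([40, 60, 36, 80, 56, 30])
v = np.array([1.1, np.nan, 0.9, 1.4, np.nan, 1.0])
observed = ~np.isnan(v)

# Fit a gamma distribution to the observed variances (no covariates, unlike gamma meta-regression).
shape, _, scale = stats.gamma.fit(v[observed], floc=0)

M = 20                                      # number of imputations
estimates = np.empty(M)
within_var = np.empty(M)
for m in range(M):
    v_imp = v.copy()
    v_imp[~observed] = stats.gamma.rvs(shape, scale=scale, size=(~observed).sum(), random_state=rng)
    se2 = v_imp * (1.0 / (n / 2) + 1.0 / (n / 2))   # variance of each mean difference, two equal arms
    w = 1.0 / se2                                   # inverse-variance weights
    estimates[m] = np.sum(w * d) / np.sum(w)        # fixed-effect pooled estimate
    within_var[m] = 1.0 / np.sum(w)

# Rubin's rules: total variance = mean within-imputation variance + (1 + 1/M) * between-imputation variance.
qbar = estimates.mean()
total_var = within_var.mean() + (1.0 + 1.0 / M) * estimates.var(ddof=1)
half = 1.96 * np.sqrt(total_var)
print(f"pooled mean difference {qbar:.3f}, 95% CI ({qbar - half:.3f}, {qbar + half:.3f})")
```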
Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas
2016-05-15
The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, a lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km(2) of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was more intensive than necessary to estimate SOC status within a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km(2) for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density and avoid wasting resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
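The power-analysis step, deciding how many sites are needed so that the mean SOC stock is estimated within a tolerable relative error, reduces in its simplest form to a sample-size formula driven by the coefficient of variation. The CVs below are invented to illustrate the calculation; they are not the survey's estimates.

```python
import numpy as np
from scipy import stats

def sites_needed(cv, rel_error=0.10, conf=0.95):
    """Sites required so that the confidence-interval half-width for the mean SOC stock
    is rel_error of the mean, given a coefficient of variation cv (normal approximation)."""
    z = stats.norm.ppf(0.5 + conf / 2.0)
    return int(np.ceil((z * cv / rel_error) ** 2))

# Illustrative coefficients of variation by degradation class (assumed values).
for label, cv in [("lightly", 0.15), ("moderately", 0.20), ("heavily", 0.25), ("extremely", 0.30)]:
    print(f"{label:10s} degraded: {sites_needed(cv):3d} sites for a 10% tolerable uncertainty")
```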
Practical synchronization on complex dynamical networks via optimal pinning control
NASA Astrophysics Data System (ADS)
Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu
2015-07-01
We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize global synchronization error and control strength over a given finite time interval, and synchronization error at terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing some numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.
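The 'traditional pinning control' ingredient that the proposed strategy builds on can be illustrated directly: diffusively couple identical chaotic oscillators over a network and apply linear feedback toward a reference trajectory at a few pinned nodes only. The sketch below simulates Lorenz nodes on a star network; it is not the Pontryagin-based optimal controller of the paper, and the coupling and feedback gains are illustrative (the error decays only if they are large enough).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]), x[0] * (rho - x[2]) - x[1], x[0] * x[1] - beta * x[2]])

# Star network of N nodes (node 0 is the hub), Laplacian diffusive coupling.
N, c, k = 6, 10.0, 50.0
A = np.zeros((N, N))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A
pinned = [0, 1]                               # only these nodes receive linear feedback control

def rhs(t, y):
    s = y[:3]                                 # reference trajectory: an isolated Lorenz system
    x = y[3:].reshape(N, 3)                   # network node states
    dx = np.array([lorenz(xi) for xi in x]) - c * (L @ x)
    for i in pinned:
        dx[i] -= k * (x[i] - s)               # linear pinning feedback toward the reference
    return np.concatenate([lorenz(s), dx.ravel()])

rng = np.random.default_rng(4)
y0 = np.concatenate([[1.0, 1.0, 1.0], rng.normal(0.0, 5.0, 3 * N)])
sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=0.01)

err = np.linalg.norm(sol.y[3:].reshape(N, 3, -1) - sol.y[:3], axis=(0, 1))
print(f"global synchronization error: start {err[0]:.2f} -> end {err[-1]:.2e}")
```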
Comparison of digestion methods for determination of trace and minor metals in plant samples.
Lavilla, I; Filgueiras, A V; Bendicho, C
1999-12-01
In this paper, three dissolution methods using pressure digestion vessels (low-, medium-, and high-pressure vessels) for the determination of metals in plant samples are described. The Plackett-Burman saturated factorial design was used to identify the significant factors influencing wet ashing and to select optimized dissolution conditions. The three methods were statistically compared (one-way ANOVA) on the same sample; no significant differences were obtained. In all cases the relative standard deviation values were <3%. The digestion method based on the use of low-pressure vessels and a microwave oven was validated against CRM GBW07605 tea leaves. This method was applied to the determination of Cu, Zn, Mn, Fe, Mg, and Ca in 22 different medicinal, aromatic, and seasoning plants by flame-atomic absorption spectrometry. The concentration intervals of metal in the plants analyzed were the following: Cu, 4 (Allium sativum)-35 (Thea sinensis) microg g(-1); Zn, 7 (Piper nigrum)-90 (Betula alba) microg g(-1); Mn, 9 (Allium sativum)-939 (Caryophylus aromaticus) microg g(-1); Fe, 33 (Allium sativum)-2486 (Anethum graveolens) microg g(-1); Mg, 495 (Allium sativum)-7458 (Ocimum basilicum) microg g(-1); Ca, 386 (Allium sativum)-21500 (Ocimum basilicum) microg g(-1).
RENEB intercomparisons applying the conventional Dicentric Chromosome Assay (DCA).
Oestreicher, Ursula; Samaga, Daniel; Ainsbury, Elizabeth; Antunes, Ana Catarina; Baeyens, Ans; Barrios, Leonardo; Beinke, Christina; Beukes, Philip; Blakely, William F; Cucu, Alexandra; De Amicis, Andrea; Depuydt, Julie; De Sanctis, Stefania; Di Giorgio, Marina; Dobos, Katalin; Dominguez, Inmaculada; Duy, Pham Ngoc; Espinoza, Marco E; Flegal, Farrah N; Figel, Markus; Garcia, Omar; Monteiro Gil, Octávia; Gregoire, Eric; Guerrero-Carbajal, C; Güçlü, İnci; Hadjidekova, Valeria; Hande, Prakash; Kulka, Ulrike; Lemon, Jennifer; Lindholm, Carita; Lista, Florigio; Lumniczky, Katalin; Martinez-Lopez, Wilner; Maznyk, Nataliya; Meschini, Roberta; M'kacher, Radia; Montoro, Alegria; Moquet, Jayne; Moreno, Mercedes; Noditi, Mihaela; Pajic, Jelena; Radl, Analía; Ricoul, Michelle; Romm, Horst; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Slabbert, Jacobus; Sommer, Sylwester; Stuck Oliveira, Monica; Subramanian, Uma; Suto, Yumiko; Que, Tran; Testa, Antonella; Terzoudi, Georgia; Vral, Anne; Wilkins, Ruth; Yanti, LusiYanti; Zafiropoulos, Demetre; Wojcik, Andrzej
2017-01-01
Two quality-controlled inter-laboratory exercises were organized within the EU project 'Realizing the European Network of Biodosimetry (RENEB)' to further optimize the dicentric chromosome assay (DCA) and to identify needs for training and harmonization activities within the RENEB network. The general study design included blood shipment, sample processing, analysis of chromosome aberrations and radiation dose assessment. After manual scoring of dicentric chromosomes in different cell numbers, dose estimations and corresponding 95% confidence intervals were submitted by the participants. The shipment of blood samples to the partners in the European Community (EU) was performed successfully. Outside the EU, unacceptable delays occurred. The results of the dose estimation demonstrate a very successful classification of the blood samples into medically relevant groups. In comparison to the 1st exercise, the 2nd intercomparison showed an improvement in the accuracy of dose estimations, especially for the high dose point. In the case of a large-scale radiological incident, the pooling of resources by networks can enhance the rapid classification of individuals into medically relevant treatment groups based on the DCA. The performance of the RENEB network as a whole has clearly benefited from harmonization processes and specific training activities for the network partners.
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
Super-optimal CO2 reduces seed yield but not vegetative growth in wheat
NASA Technical Reports Server (NTRS)
Grotenhuis, T. P.; Bugbee, B.
1997-01-01
Although terrestrial atmospheric CO2 levels will not reach 1000 micromoles mol-1 (0.1%) for decades, CO2 levels in growth chambers and greenhouses routinely exceed that concentration. CO2 levels in life support systems in space can exceed 10000 micromoles mol-1(1%). Numerous studies have examined CO2 effects up to 1000 micromoles mol-1, but biochemical measurements indicate that the beneficial effects of CO2 can continue beyond this concentration. We studied the effects of near-optimal (approximately 1200 micromoles mol-1) and super-optimal CO2 levels (2400 micromoles mol-1) on yield of two cultivars of hydroponically grown wheat (Triticum aestivum L.) in 12 trials in growth chambers. Increasing CO2 from sub-optimal to near-optimal (350-1200 micromoles mol-1) increased vegetative growth by 25% and seed yield by 15% in both cultivars. Yield increases were primarily the result of an increased number of heads per square meter. Further elevation of CO2 to 2500 micromoles mol-1 reduced seed yield by 22% (P < 0.001) in cv. Veery-10 and by 15% (P < 0.001) in cv. USU-Apogee. Super-optimal CO2 did not decrease the number of heads per square meter, but reduced seeds per head by 10% and mass per seed by 11%. The toxic effect of CO2 was similar over a range of light levels from half to full sunlight. Subsequent trials revealed that super-optimal CO2 during the interval between 2 wk before and after anthesis mimicked the effect of constant super-optimal CO2. Furthermore, near-optimal CO2 during the same interval mimicked the effect of constant near-optimal CO2. Nutrient concentration of leaves and heads was not affected by CO2. These results suggest that super-optimal CO2 inhibits some process that occurs near the time of seed set resulting in decreased seed set, seed mass, and yield.
Stock optimizing: maximizing reinforcers per session on a variable-interval schedule.
Silberberg, A; Bauman, R; Hursh, S
1993-01-01
In Experiment 1, 2 monkeys earned their daily food ration by pressing a key that delivered food according to a variable-interval 3-min schedule. In Phases 1 and 4, sessions ended after 3 hr. In Phases 2 and 3, sessions ended after a fixed number of responses that reduced food intake and body weights from levels during Phases 1 and 4. Monkeys responded at higher rates and emitted more responses per food delivery when the food earned in a session was reduced. In Experiment 2, monkeys earned their daily food ration by depositing tokens into the response panel. Deposits delivered food according to a variable-interval 3-min schedule. When the token supply was unlimited (Phases 1, 3, and 5), sessions ended after 3 hr. In Phases 2 and 4, sessions ended after 150 tokens were deposited, resulting in a decrease in food intake and body weight. Both monkeys responded at lower rates and emitted fewer responses per food delivery when the food earned in a session was reduced. Experiment 1's results are consistent with a strength account, according to which the phases that reduced body weights increased food's value and therefore increased subjects' response rates. The results of Experiment 2 are consistent with an optimizing strategy, because lowering response rates when food is restricted defends body weight on variable-interval schedules. These contrasting results may be attributed to the discriminability of the contingency between response number and the end of a session being greater in Experiment 2 than in Experiment 1. In consequence, subjects lowered their response rates in order to increase the number of reinforcers per session (stock optimizing). PMID:8454960
Epelboym, Irene; Zenati, Mazen S; Hamad, Ahmad; Steve, Jennifer; Lee, Kenneth K; Bahary, Nathan; Hogg, Melissa E; Zeh, Herbert J; Zureikat, Amer H
2017-09-01
Receipt of 6 cycles of adjuvant chemotherapy (AC) is standard of care in pancreatic cancer (PC). Neoadjuvant chemotherapy (NAC) is increasingly utilized; however, the optimal number of cycles needed alone or in combination with AC remains unknown. We sought to determine the optimal number and sequence of perioperative chemotherapy cycles in PC. Single institutional review of all resected PCs from 2008 to 2015. The impact of the cumulative number of chemotherapy cycles received (0, 1-5, and ≥6 cycles) and their sequence (NAC, AC, or NAC + AC) on overall survival was evaluated using Cox proportional hazards modeling, with 6 cycles of AC as reference. A total of 522 patients were analyzed. Based on sample size distribution, four combinations were evaluated: 0 cycles = 12.1%, 1-5 cycles of combined NAC + AC = 29%, 6 cycles of AC = 25%, and ≥6 cycles of combined NAC + AC = 34%, with corresponding survival of 13.1, 18.5, 37, and 36.8 months. On MVA (P < 0.0001), tumor stage [hazard ratio (HR) 1.35], LNR (HR 4.3), and R1 margins (HR 1.77) were associated with increased hazard of death. Compared with 6 cycles of AC, receipt of 0 cycles [HR 3.57, confidence interval (CI) 2.47-5.18] or 1-5 cycles in any combination (HR 2.37, CI 1.73-3.23) was associated with increased hazard of death, whereas receipt of ≥6 cycles in any sequence was associated with optimal and comparable survival (HR 1.07, CI 0.78-1.47). Receipt of 6 or more perioperative cycles of chemotherapy, either as combined neoadjuvant and adjuvant or adjuvant alone, may be associated with optimal and comparable survival in resected PC.
Effect of aeration interval on oxygen consumption and GHG emission during pig manure composting.
Zeng, Jianfei; Yin, Hongjie; Shen, Xiuli; Liu, Ning; Ge, Jinyi; Han, Lujia; Huang, Guangqun
2018-02-01
To verify the optimal aeration interval for oxygen supply and consumption and investigate the effect of aeration interval on GHG emission, reactor-scale composting was conducted with different aeration intervals (0, 10, 30 and 50 min). Although O2 was sufficiently supplied during the aeration period, it could be consumed to <10 vol% only when the aeration interval was 50 min, indicating that an aeration interval of more than 50 min would be inadvisable. Compared to continuous aeration, reductions of the total CH4 and N2O emissions as well as the total GHG emission equivalent by 22.26-61.36%, 8.24-49.80% and 12.36-53.20%, respectively, were achieved through intermittent aeration. Specifically, both the total CH4 and N2O emissions as well as the total GHG emission equivalent were inversely proportional to the duration of the aeration interval (R2 > 0.902), suggesting that lengthening the duration of the aeration interval, to some extent, could effectively reduce GHG emission. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval numbers and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Monthly Fluctuations of Insomnia Symptoms in a Population-Based Sample
Morin, Charles M.; LeBlanc, M.; Ivers, H.; Bélanger, L.; Mérette, Chantal; Savard, Josée; Jarrin, Denise C.
2014-01-01
Study Objectives: To document the monthly changes in sleep/insomnia status over a 12-month period; to determine the optimal time intervals to reliably capture new incident cases and recurrent episodes of insomnia and the likelihood of its persistence over time. Design: Participants were 100 adults (mean age = 49.9 years; 66% women) randomly selected from a larger population-based sample enrolled in a longitudinal study of the natural history of insomnia. They completed 12 monthly telephone interviews assessing insomnia, use of sleep aids, stressful life events, and physical and mental health problems in the previous month. A total of 1,125 interviews of a potential 1,200 were completed. Based on data collected at each assessment, participants were classified into one of three subgroups: good sleepers, insomnia symptoms, and insomnia syndrome. Results: At baseline, 42 participants were classified as good sleepers, 34 met criteria for insomnia symptoms, and 24 for an insomnia syndrome. There were significant fluctuations of insomnia over time, with 66% of the participants changing sleep status at least once over the 12 monthly assessments (51.5% for good sleepers, 59.5% for insomnia syndrome, and 93.4% for insomnia symptoms). Changes of status were more frequent among individuals with insomnia symptoms at baseline (mean = 3.46, SD = 2.36) than among those initially classified as good sleepers (mean = 2.12, SD = 2.70). Among the subgroup with insomnia symptoms at baseline, 88.3% reported improved sleep (i.e., became good sleepers) at least once over the 12 monthly assessments compared to 27.7% whose sleep worsened (i.e., met criteria for an insomnia syndrome) during the same period. Among individuals classified as good sleepers at baseline, risks of developing insomnia symptoms and syndrome over the subsequent months were, respectively, 48.6% and 14.5%. Monthly assessment over an interval of 6 months was found most reliable to estimate incidence rates, while an interval of 3 months proved the most reliable for defining chronic insomnia. Conclusions: Monthly assessment of insomnia and sleep patterns revealed significant variability over the course of a 12-month period. These findings highlight the importance for future epidemiological studies of conducting repeated assessment at shorter than the typical yearly interval in order to reliably capture the natural course of insomnia over time. Citation: Morin CM; LeBlanc M; Ivers H; Bélanger L; Mérette C; Savard J; Jarrin DC. Monthly fluctuations of insomnia symptoms in a population-based sample. SLEEP 2014;37(2):319-326. PMID:24497660
Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang
2017-12-15
An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at a watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which is coupled with an interval mathematical model, a chance-constrained programming model and a fuzzy linear programming model and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders, and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of Wujiang watershed, Guizhou Province, China. The results indicate that under highly suitable land conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in Wujiang watershed will be [5015, 5648] hm2, [7841, 7965] hm2, [1980, 2056] hm2, [914, 1423] hm2, [70, 90] hm2, [50, 70] hm2 and [3.2, 4.3] hm2, and the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation problems under various complicated conditions which include uncertainties, risks, economic objectives and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
Functional near infrared spectroscopy for awake monkey to accelerate neurorehabilitation study
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Higo, Noriyuki; Kato, Junpei; Matsuda, Keiji; Yamada, Toru
2017-02-01
Functional near-infrared spectroscopy (fNIRS) is suitable for measuring brain functions during neurorehabilitation because of its portability and the minimal restriction it places on motion. However, it is not known whether neural reconstruction can be observed through changes in cerebral hemodynamics. In this study, we modified an fNIRS system for measuring the motor function of awake monkeys to study cerebral hemodynamics during neurorehabilitation. A computer simulation was performed to determine the optimal fNIRS source-detector interval for the monkey motor cortex. Accurate digital phantoms were constructed based on anatomical magnetic resonance images. Light propagation based on the diffusion equation was numerically calculated using the finite element method. The source-detector pair was placed on the scalp above the primary motor cortex. Four different interval values (10, 15, 20, 25 mm) were examined. The results showed that the detected intensity decreased and the partial optical path length in gray matter increased with an increase in the source-detector interval. We found that 15 mm is the optimal interval for fNIRS measurement of the monkey motor cortex. A preliminary measurement was performed on a healthy female macaque monkey using fNIRS equipment with custom-made optodes and an optode holder. The optodes were attached above the bilateral primary motor cortices. Under the awake condition, 10 to 20 trials of alternated single-sided hand movements lasting several seconds, with intervals of 10 to 30 s, were performed. Increases and decreases in oxy- and deoxyhemoglobin concentration were observed in a localized area in the hemisphere contralateral to the moved forelimb.
Mwavua, Shillah Mwaniga; Ndungu, Edward Kiogora; Mutai, Kenneth K; Joshi, Mark David
2016-01-05
Peripheral public health facilities remain the most frequented by the majority of the population in Kenya, yet remain sub-optimally equipped and not optimized for non-communicable disease care. We undertook a descriptive, cross-sectional study among ambulatory type 2 diabetes mellitus clients attending Kenyatta National Referral Hospital (KNH) and Thika District Hospital (TDH) in Central Kenya. Systematic random sampling was used. HbA1c was assessed for glycemic control, and the following were assessed as markers of quality of care: direct client costs, clinic appointment interval and frequency of self-monitoring tests, affordability and satisfaction with care. We enrolled 200 clients (Kenyatta National Hospital 120; Thika District Hospital 80); the majority of patients (66.5%) were female, the mean age was 57.8 years, and 58% of the patients had basic primary education. 67.5% had had diabetes for less than 10 years and 40% were on insulin therapy. The proportion with good glycemic control was 17% (95% CI 12.0-22.5) across the two facilities [Kenyatta National Hospital 18.3% (11.5-25.6); Thika District Hospital 15% (CI 7.4-23.7); P = 0.539]. However, in Thika District Hospital clients were more likely to have clinic-driven routine urinalysis and weight measurement, were accorded shorter clinic appointment intervals, incurred half to three-quarters lower direct costs, and reported greater affordability and satisfaction with care. In conclusion, we demonstrate that in Thika District Hospital, glycemic control and diabetes care are suboptimal but comparable to those of Kenyatta National Referral Hospital. Opportunities for improvement of care abound at peripheral health facilities.
Merli, Marco; Galli, Laura; Castagna, Antonella; Salpietro, Stefania; Gianotti, Nicola; Messina, Emanuela; Poli, Andrea; Morsica, Giulia; Bagaglio, Sabrina; Cernuschi, Massimo; Bigoloni, Alba; Uberti-Foppa, Caterina; Lazzarin, Adriano; Hasson, Hamid
2016-04-01
We determined the diagnostic accuracy and optimal cut-offs of three indirect fibrosis biomarkers (APRI, FIB-4, Forns) compared with liver stiffness (LS) for the detection of liver cirrhosis in HIV/HCV-coinfected patients. An observational retrospective study on HIV/HCV-coinfected patients with concomitant LS measurement and APRI, FIB-4 and Forns was performed. The presence of liver cirrhosis was defined as a LS ≥13 kPa. The diagnostic accuracy and optimal cut-off values, compared with LS categorization (<13 vs ≥13 kPa), were determined by receiver operating characteristic (ROC) curves. The study sample included 646 patients. The areas under the ROC curve (95% confidence interval) for the detection of liver cirrhosis were 0.84 (0.81-0.88), 0.87 (0.84-0.91) and 0.87 (0.84-0.90) for APRI, FIB-4 and Forns, respectively. According to the optimal cut-off values for liver cirrhosis (≥0.97 for APRI, ≥2.02 for FIB-4 and ≥7.8 for Forns), 80%, 80% and 82% of subjects were correctly classified by the three indirect fibrosis biomarkers, respectively. Misclassifications were mostly due to false positive cases. The study suggests that indirect fibrosis biomarkers can help clinicians to exclude liver cirrhosis in the management of HIV/HCV-coinfected patients, reducing the frequency of more expensive or invasive assessments.
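The abstract does not reproduce the biomarker formulas, so the sketch below applies the commonly published definitions of APRI and FIB-4 (Forns, which also requires GGT and cholesterol, is omitted) together with the cut-off values quoted above. Units are assumed to be IU/L for transaminases and 10^9/L for platelets.

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index; ast_uln is the lab's upper limit of normal for AST."""
    return (ast / ast_uln) * 100.0 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 index (age in years, platelets in 10^9/L)."""
    return (age * ast) / (platelets * math.sqrt(alt))

def likely_cirrhosis(apri_val, fib4_val):
    """Flag cirrhosis using the cut-offs reported in the abstract."""
    return apri_val >= 0.97 or fib4_val >= 2.02

# Hypothetical patient: AST 80 IU/L (ULN 40), ALT 60 IU/L, platelets 120, age 52
a, f = apri(80, 40, 120), fib4(52, 80, 60, 120)
print(round(a, 2), round(f, 2), likely_cirrhosis(a, f))
```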
Lewan, Michael; Sonnenfeld, Mark D.
2017-01-01
Low-temperature hydrous pyrolysis (LTHP) at 300°C (572°F) for 24 h released retained oils from 12- to 20-mesh-size samples of mature Niobrara marly chalk and marlstone cores. The released oil accumulated on the water surface of the reactor, and is compositionally similar to oil produced from the same well. The quantities of oil released from the marly chalk and marlstone by LTHP are respectively 3.4 and 1.6 times greater than those determined by tight rock analyses (TRA) on aliquots of the same samples. Gas chromatograms indicated this difference is a result of TRA oils losing more volatiles and volatilizing less heavy hydrocarbons during collection than LTHP oils. Characterization of the rocks before and after LTHP by programmable open-system pyrolysis (HAWK) indicates that under LTHP conditions no significant oil is generated and only preexisting retained oil is released. Although LTHP appears to provide better predictions of the quantity and quality of retained oil in a mature source rock, it is not expected to replace TRA, which is more time- and sample-size-efficient. However, LTHP can be applied to composited samples from key intervals or lithologies originally recognized by TRA. Additional studies on the duration, temperature, and sample size used in LTHP may further optimize its utility.
Letcher, B.H.; Horton, G.E.
2008-01-01
We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ≈ 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. © 2008 NRC Canada.
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
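As a rough illustration of the Jeffreys interval preferred above, the sketch below computes the aquifer scale proportion and its credible interval for one well per equal-area cell, assuming scipy is available; the cell counts are invented for the example.

```python
from scipy.stats import beta

def jeffreys_interval(x, n, conf=0.90):
    """Jeffreys credible interval for a binomial proportion.

    x: number of cells with concentrations above the threshold, n: cells sampled.
    Uses the Beta(x + 1/2, n - x + 1/2) posterior implied by the Jeffreys prior.
    """
    alpha = 1.0 - conf
    lower = beta.ppf(alpha / 2, x + 0.5, n - x + 0.5) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5) if x < n else 1.0
    return x / n, (lower, upper)

# Example: 7 of 30 equal-area cells exceed the threshold
proportion, (lo, hi) = jeffreys_interval(7, 30)
print(round(proportion, 3), round(lo, 3), round(hi, 3))
```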
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
Phillips, Christopher; Mac Parthaláin, Neil; Syed, Yasir; Deganello, Davide; Claypole, Timothy; Lewis, Keir
2014-01-01
Exhaled volatile organic compounds (VOCs) are of interest for their potential to diagnose disease non-invasively. However, most breath VOC studies have analyzed single breath samples from an individual and assumed them to be wholly consistent and representative of the person. This provided the motivation for an investigation of the variability of breath profiles when three breath samples are taken over a short time period (two-minute intervals between samples) for 118 stable patients with Chronic Obstructive Pulmonary Disease (COPD) and 63 healthy controls and analyzed by gas chromatography and mass spectrometry (GC/MS). The extent of the variation in VOC levels differed between COPD and healthy subjects, and the patterns of variation differed for isoprene versus the bulk of other VOCs. In addition, machine learning approaches were applied to the breath data to establish whether these samples differed in their ability to discriminate COPD from healthy states and whether aggregation of multiple samples, into single data sets, could offer improved discrimination. The three breath samples gave similar classification accuracy to one another when evaluated separately (66.5% to 68.3% of subjects classified correctly depending on the breath repetition used). Combining multiple breath samples into single data sets gave better discrimination (73.4% of subjects classified correctly). Although accuracy is not sufficient for COPD diagnosis in a clinical setting, enhanced sampling and analysis may improve accuracy further. Variability in samples, and short-term effects of practice or exertion, need to be considered in any breath testing program to improve reliability and optimize discrimination. PMID:24957028
Ibáñez, R.; Félez-Sánchez, M.; Godínez, J. M.; Guardià, C.; Caballero, E.; Juve, R.; Combalia, N.; Bellosillo, B.; Cuevas, D.; Moreno-Crespi, J.; Pons, L.; Autonell, J.; Gutierrez, C.; Ordi, J.; de Sanjosé, S.
2014-01-01
In Catalonia, a screening protocol for cervical cancer, including human papillomavirus (HPV) DNA testing using the Digene Hybrid Capture 2 (HC2) assay, was implemented in 2006. In order to monitor interlaboratory reproducibility, a proficiency testing (PT) survey of the HPV samples was launched in 2008. The aim of this study was to explore the repeatability of the HC2 assay's performance. Participating laboratories provided 20 samples annually, 5 randomly chosen samples from each of the following relative light unit (RLU) intervals: <0.5, 0.5 to 0.99, 1 to 9.99, and ≥10. Kappa statistics were used to determine the agreement levels between the original and the PT readings. The nature and origin of the discrepant results were calculated by bootstrapping. A total of 946 specimens were retested. The kappa values were 0.91 for positive/negative categorical classification and 0.79 for the four RLU intervals studied. Sample retesting yielded systematically lower RLU values than the original test (P < 0.005), independently of the time elapsed between the two determinations (median, 53 days), possibly due to freeze-thaw cycles. The probability for a sample to show clinically discrepant results upon retesting was a function of the RLU value; samples with RLU values in the 0.5 to 5 interval showed 10.80% probability to yield discrepant results (95% confidence interval [CI], 7.86 to 14.33) compared to 0.85% probability for samples outside this interval (95% CI, 0.17 to 1.69). Globally, the HC2 assay shows high interlaboratory concordance. We have identified differential confidence thresholds and suggested the guidelines for interlaboratory PT in the future, as analytical quality assessment of HPV DNA detection remains a central component of the screening program for cervical cancer prevention. PMID:24574284
Formulation and stability of an extemporaneous 0.02% chlorhexidine digluconate ophthalmic solution.
Lin, Shu-Chiao; Huang, Chih-Fen; Shen, Li-Jiuan; Wang, Hsueh-Ju; Lin, Chia-Yu; Wu, Fe-Lin Lin
2015-12-01
Acanthamoeba keratitis is difficult to treat because Acanthamoeba cysts are resistant to the majority of antimicrobial agents. Despite the efficacy of 0.02% chlorhexidine in treating Acanthamoeba keratitis, a lack of data in the literature regarding the formulation's stability limits its clinical use. The objective of this study was to develop an optimal extemporaneous 0.02% chlorhexidine digluconate ophthalmic formulation for patients in need. With available active pharmaceutical ingredients, 0.02% chlorhexidine digluconate sample solutions were prepared by diluting with BSS Plus Solution or acetate buffer. Influences of the buffer, type of container, and temperature under daily-open condition were assessed based on the changes of pH values and chlorhexidine concentrations of the test samples weekly. To determine the beyond-use date, the optimal samples were stored at 2-8°C or room temperature, and analyzed at time 0 and at Week 1, Week 2, Week 3, Week 4, Week 5, Week 8, Week 12, and Week 24. Despite chlorhexidine exhibiting better stability in acetate buffer than in BSS solution, its shelf-life was < 14 days when stored in a light-resistant low-density polyethylene container. The acetate-buffered 0.02% chlorhexidine digluconate solution stored in light-resistant high-density polyethylene eyedroppers did not exhibit significant changes in pH or strength at any time interval. The acetate-buffered 0.02% chlorhexidine digluconate ophthalmic solution stored in light-resistant high-density polyethylene eyedroppers demonstrated excellent stability at 2-25°C for 6 months after being sealed and for 1 month after opening. This finding will enable us to prepare 0.02% chlorhexidine digluconate ophthalmic solutions based on a doctor's prescription. Copyright © 2014. Published by Elsevier B.V.
Effectiveness of breast cancer screening policies in countries with medium-low incidence rates.
Kong, Qingxia; Mondschein, Susana; Pereira, Ana
2018-02-05
Chile has lower breast cancer incidence rates compared to those in developed countries. Our public health system aims to perform 10 biennial screening mammograms in the age group of 50 to 69 years by 2020. Using a dynamic programming model, we have found the optimal ages at which to perform 10 screening mammograms that lead to the lowest lifetime death rate, and we have evaluated a set of fixed inter-screening interval policies. The optimal ages for the 10 mammograms are 43, 47, 51, 54, 57, 61, 65, 68, 72, and 76 years, and the most effective fixed inter-screening interval is every four years after age 40. Both policies reduce the lifetime death rate by 6.4% and 5.7%, respectively, and the cost of saving one life by 17% and 9.3% compared to the 2020 Chilean policy. Our findings show that two-year inter-screening interval policies are less effective in countries with lower breast cancer incidence; thus we recommend screening policies with a wider age range and larger inter-screening intervals for Chile.
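A minimal sketch of the kind of dynamic program described above. The age-specific incidence and benefit functions below are placeholders invented for illustration, not the authors' model; only the idea of choosing 10 screening ages to maximize benefit is taken from the abstract.

```python
import numpy as np

AGES = np.arange(40, 80)   # candidate screening ages
K = 10                     # number of mammograms to schedule

def marginal_deaths_averted(age, gap):
    """Hypothetical benefit of screening at `age` given `gap` years since last screen."""
    incidence = 0.001 * np.exp(0.03 * (age - 40))   # toy age-specific incidence
    return incidence * min(gap, 6) / 6.0            # benefit saturates for long gaps

# value[k][i]: max deaths averted using k screens with the last one at AGES[i]
value = np.full((K + 1, len(AGES)), -np.inf)
value[1] = [marginal_deaths_averted(a, 10) for a in AGES]   # assume a 10-year gap before the first screen
choice = np.zeros((K + 1, len(AGES)), dtype=int)

for k in range(2, K + 1):
    for i, a in enumerate(AGES):
        for j in range(i):   # previous screen at an earlier age
            cand = value[k - 1][j] + marginal_deaths_averted(a, a - AGES[j])
            if cand > value[k][i]:
                value[k][i], choice[k][i] = cand, j

# Backtrack the optimal schedule
i = int(np.argmax(value[K]))
schedule = [int(AGES[i])]
for k in range(K, 1, -1):
    i = choice[k][i]
    schedule.append(int(AGES[i]))
print(sorted(schedule))
```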
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
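A rough sketch of the size-versus-contamination trade-off described above, using placeholder Gaussian travel-time models rather than the statistical models fitted to the Cf-252 data in the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical travel-time models (ns): gammas arrive early and tightly grouped,
# neutrons later and more spread out. Parameters are placeholders for illustration.
gamma_t = norm(loc=3.0, scale=0.5)
neutron_t = norm(loc=30.0, scale=8.0)

def contamination(a, b):
    """Fraction of pulses in [a, b] that are gammas, assuming equal source rates."""
    g = gamma_t.cdf(b) - gamma_t.cdf(a)
    n = neutron_t.cdf(b) - neutron_t.cdf(a)
    return g / (g + n) if (g + n) > 0 else 0.0

def best_neutron_window(max_contamination=1e-3):
    """Widest window (most neutrons kept) whose estimated gamma contamination
    stays below the tolerance -- the interval-size/contamination trade-off."""
    best = None
    for a in np.arange(5.0, 25.0, 0.5):
        for b in np.arange(a + 1.0, 60.0, 0.5):
            if contamination(a, b) <= max_contamination:
                kept = neutron_t.cdf(b) - neutron_t.cdf(a)
                if best is None or kept > best[0]:
                    best = (kept, (a, b))
    return best

print(best_neutron_window())
```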
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraints within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only the landfill but also the CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it is capable of addressing the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
Fouda, Usama M; Gad Allah, Sherine H; Elshaer, Hesham S
2016-07-01
To determine the optimal timing of vaginal misoprostol administration in nulliparous women undergoing office hysteroscopy. Randomized double-blind placebo-controlled study. University teaching hospital. One hundred twenty nulliparous patients were randomly allocated in a 1:1 ratio to the long-interval misoprostol group or the short-interval misoprostol group. In the long-interval misoprostol group, two misoprostol tablets (400 μg) and two placebo tablets were administered vaginally at 12 and 3 hours, respectively, before office hysteroscopy. In the short-interval misoprostol group, two placebo tablets and two misoprostol tablets (400 μg) were administered vaginally 12 and 3 hours, respectively, before office hysteroscopy. The severity of pain was assessed by the patients with the use of a 100-mm visual analog scale (VAS). The operators assessed the ease of the passage of the hysteroscope through the cervical canal with the use of a 100-mm VAS as well. Pain scores during the procedure were significantly lower in the long-interval misoprostol group (37.98 ± 13.13 vs. 51.98 ± 20.68). In contrast, the pain scores 30 minutes after the procedure were similar between the two groups (11.92 ± 7.22 vs. 13.3 ± 6.73). Moreover, the passage of the hysteroscope through the cervical canal was easier in the long-interval misoprostol group (48.9 ± 17.79 vs. 58.28 ± 21.85). Vaginal misoprostol administration 12 hours before office hysteroscopy was more effective than vaginal misoprostol administration 3 hours before office hysteroscopy in relieving pain experienced by nulliparous patients undergoing office hysteroscopy. NCT02316301. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Effect of different rest intervals after whole-body vibration on vertical jump performance.
Dabbs, Nicole C; Muñoz, Colleen X; Tran, Tai T; Brown, Lee E; Bottaro, Martim
2011-03-01
Whole-body vibration (WBV) may potentiate vertical jump (VJ) performance via augmented muscular strength and motor function. The purpose of this study was to evaluate the effect of different rest intervals after WBV on VJ performance. Thirty recreationally trained subjects (15 men and 15 women) volunteered to participate in 4 testing visits separated by 24 hours. Visit 1 acted as a familiarization visit where subjects were introduced to the VJ and WBV protocols. Visits 2-4 contained 2 randomized conditions per visit with a 10-minute rest period between conditions. The WBV was administered on a pivotal platform with a frequency of 30 Hz and an amplitude of 6.5 mm in 4 bouts of 30 seconds for a total of 2 minutes with 30 seconds of rest between bouts. During WBV, subjects performed a quarter squat every 5 seconds, simulating a countermovement jump (CMJ). Whole-body vibration was followed by 3 CMJs with 5 different rest intervals: immediate, 30 seconds, 1 minute, 2 minutes, or 4 minutes. For a control condition, subjects performed squats with no WBV. There were no significant (p > 0.05) differences in peak velocity or relative ground reaction force after WBV rest intervals. However, results of VJ height revealed that maximum values, regardless of rest interval (56.93 ± 13.98 cm), were significantly (p < 0.05) greater than the control condition (54.44 ± 13.74 cm). Therefore, subjects' VJ height potentiated at different times after WBV suggesting strong individual differences in optimal rest interval. Coaches may use WBV to enhance acute VJ performance but should first identify each individual's optimal rest time to maximize the potentiating effects.
Resonance Shift of Single-Axis Acoustic Levitation
NASA Astrophysics Data System (ADS)
Xie, Wen-Jun; Wei, Bing-Bo
2007-01-01
The resonance shift due to the presence and movement of a rigid spherical sample in a single-axis acoustic levitator is studied with the boundary element method on the basis of a two-cylinder model of the levitator. The introduction of a sample into the sound pressure nodes, where it is usually levitated, reduces the resonant interval Hn (n is the mode number) between the reflector and emitter. The larger the sample radius, the greater the resonance shift. When the sample moves along the symmetric axis, the resonance interval Hn varies in an approximately periodical manner, which reaches the minima near the pressure nodes and the maxima near the pressure antinodes. This suggests a resonance interval oscillation around its minimum if the stably levitated sample is slightly perturbed. The dependence of the resonance shift on the sample radius R and position h for the single-axis acoustic levitator is compared with Leung's theory for a closed rectangular chamber, which shows a good agreement.
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
NASA Astrophysics Data System (ADS)
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed-interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of estimates of cumulative N2O fluxes obtained with the discrete chamber-based method.
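The sketch below illustrates the fixed-interval part of the analysis, assuming a full year of daily fluxes is available (here a toy synthetic record): subsample every n days, interpolate between sampling dates, and compare the integrated estimate with the true cumulative flux. The decision-tree sampling and the actual simulated flux records are not reproduced.

```python
import numpy as np

def cumulative_flux(daily_flux):
    """Annual cumulative flux from the full daily record (kg N ha-1 yr-1)."""
    return float(np.sum(daily_flux))

def fixed_interval_estimate(daily_flux, interval_days):
    """Estimate the annual flux from 'chamber measurements' taken every
    `interval_days`, linearly interpolating between sampling dates --
    the usual treatment of discrete chamber data."""
    days = np.arange(len(daily_flux))
    sampled = days[::interval_days]
    return float(np.trapz(np.interp(days, sampled, daily_flux[sampled]), days))

# Toy daily record standing in for the model output: background flux plus a pulse.
rng = np.random.default_rng(0)
flux = 0.002 + rng.gamma(0.5, 0.002, 365)
flux[120:130] += 0.08   # hypothetical post-fertilization pulse

truth = cumulative_flux(flux)
for d in (4, 8, 16, 32):
    est = fixed_interval_estimate(flux, d)
    print(d, "days:", round(100 * (est - truth) / truth, 1), "% bias")
```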
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or the manufacturing considerations. To solve these problems, a new modular design method is proposed that considers not only the traditional function-related attributes, but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is used to evaluate the difference among the modules of the product so that the optimal module of the product can be established. After that, a multi-objective optimization problem based on metrics for the preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to depict the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of these parameters on the optimal modularity strategies and the structure of modules. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.
Hennig, Stefanie; Waterhouse, Timothy H; Bell, Scott C; France, Megan; Wainwright, Claire E; Miller, Hugh; Charles, Bruce G; Duffull, Stephen B
2007-01-01
What is already known about this subject • Itraconazole is a triazole antifungal used in the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF). • The pharmacokinetic (PK) properties of this drug and its active metabolite have been described before, mostly in healthy volunteers. • However, only sparse information from case reports was available on the PK properties of this drug in CF patients at the start of our study. What this study adds • This study reports for the first time the population pharmacokinetic properties of itraconazole and a known active metabolite, hydroxy-itraconazole, in adult patients with CF. • As a result, this study offers new dosing approaches and their pharmacoeconomic impact as well as a PK model for therapeutic drug monitoring of this drug in this patient group. • Furthermore, it is an example of a successful d-optimal design application in a clinical setting. Aim The primary objective of the study was to estimate the population pharmacokinetic parameters for itraconazole and hydroxy-itraconazole, in particular, the relative oral bioavailability of the capsule compared with solution in adult cystic fibrosis patients, in order to develop new dosing guidelines. A secondary objective was to evaluate the performance of a population optimal design. Methods The blood sampling times for the population study were optimized previously using POPT v.2.0. The design was based on the administration of solution and capsules to 30 patients in a cross-over study. Prior information suggested that itraconazole is generally well described by a two-compartment disposition model with either linear or saturable elimination. The pharmacokinetics of itraconazole and the metabolite were modelled simultaneously using NONMEM. Dosing schedules were simulated to assess their ability to achieve a trough target concentration of 0.5 mg ml−1. Results Out of 241 blood samples, 94% were taken within the defined optimal sampling windows. A two-compartment model with first order absorption and elimination best described itraconazole kinetics, with first order metabolism to the hydroxy-metabolite. For itraconazole the absorption rate constants (between-subject variability) for capsule and solution were 0.0315 h−1 (91.9%) and 0.125 h−1 (106.3%), respectively, and the relative bioavailability of the capsule was 0.82 (62.3%) (confidence interval 0.36, 1.97), compared with the solution. There was no evidence of nonlinearity. Simulations from the final model showed that a dosing schedule of 500 mg twice daily for both formulations provided the highest chance of target success. Conclusion The optimal design performed well and the pharmacokinetics of itraconazole and hydroxy-itraconazole were described adequately by the model. The relative bioavailability for itraconazole capsules was 82% compared with the solution. PMID:17073891
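A minimal simulation of a two-compartment model with first-order absorption, the structure described in the abstract, assuming scipy is available. Only ka for the capsule and the relative bioavailability F come from the reported estimates; the clearances and volumes are placeholders, and the metabolite sub-model is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, F = 0.0315, 0.82                        # h^-1 and relative bioavailability (capsule, from the abstract)
CL, V1, Q, V2 = 30.0, 600.0, 20.0, 400.0    # hypothetical clearances (L/h) and volumes (L)

def rhs(t, y):
    depot, a1, a2 = y                       # drug amounts (mg): gut depot, central, peripheral
    return [-ka * depot,
            ka * depot - (CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2,
            (Q / V1) * a1 - (Q / V2) * a2]

def simulate(dose_mg=500.0, tau_h=12.0, n_doses=14):
    """Simulate repeated capsule dosing and return trough concentrations (mg/L)."""
    y, troughs = np.array([0.0, 0.0, 0.0]), []
    for _ in range(n_doses):
        y[0] += dose_mg * F                 # oral dose enters the depot
        sol = solve_ivp(rhs, (0.0, tau_h), y, max_step=0.5)
        y = sol.y[:, -1]
        troughs.append(y[1] / V1)
    return troughs

print(simulate()[-1])   # trough after a week of 500 mg twice daily
```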
Zhao, X; Zhao, W; Zhang, H; Li, J; Shu, Y; Li, S; Cai, L; Zhou, J; Li, Y; Hu, R
2013-01-01
To evaluate the efficiency of fasting capillary blood glucose (FCG) measurement as compared with fasting venous plasma glucose (FPG) measurement in screening for diabetes and pre-diabetes in low-resource rural settings. In 2010, 993 participants were randomly selected from 9 villages in Yunnan province using a cluster sampling method. Samples for the FCG and FPG tests were obtained after demographic and physical examination. The oral glucose tolerance test was performed in parallel as the gold standard for diagnosis. Diagnostic capacities of the FCG measurement in predicting undiagnosed diabetes and pre-diabetes were assessed. The performance of the FCG and FPG tests was compared. Fifty-seven individuals with undiagnosed diabetes and 145 subjects with pre-diabetes were detected. The concordance between FCG and FPG levels was high (r = 0.75, p < 0.001). The area under the curve (AUC) for the FCG test in predicting diabetes was 0.88 [95% confidence interval (CI) 0.82-0.93] with an optimal cutoff value of 5.65 mmol/l, sensitivity of 84.2%, and specificity of 79.3%. The corresponding values in the FPG tests were 0.92 (95% CI 0.88-0.97) (AUC), 6.51 mmol/l (optimal cutoff point), 82.5% (sensitivity) and 98.3% (specificity), respectively. No significant difference was found in the AUC for the two screening strategies. FCG measurement is considered to be a convenient, practicable screening method with acceptable test properties for low-resource rural communities.
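The abstract reports an optimal cutoff without stating the criterion; the sketch below uses the Youden index, a common choice, on toy data standing in for the OGTT-confirmed outcomes and FCG values (scikit-learn assumed available).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_true, glucose):
    """Youden-index cutoff for a screening test: maximize sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, glucose)
    best = int(np.argmax(tpr - fpr))
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(y_true, glucose)

# Toy data: diabetes status and FCG values (mmol/L), invented for illustration
rng = np.random.default_rng(1)
diabetes = rng.integers(0, 2, 500)
fcg = np.where(diabetes == 1, rng.normal(7.0, 1.2, 500), rng.normal(5.0, 0.8, 500))

cutoff, sens, spec, auc = optimal_cutoff(diabetes, fcg)
print(round(cutoff, 2), round(sens, 2), round(spec, 2), round(auc, 2))
```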
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the high-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval where the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on quadrilateral cells and about one sixth on decagon cells.
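A sketch of the Halley iteration mentioned above, applied to a toy smooth objective restricted to the target interval; the MoF geometric objective and its analytic derivatives are not reproduced here. Halley's method is applied to the stationarity condition g'(x) = 0 using the first three derivatives of g.

```python
import numpy as np

def halley_minimize(g1, g2, g3, x0, tol=1e-10, max_iter=50):
    """Locate a stationary point of the objective by applying Halley's update
    x <- x - 2 f f' / (2 f'^2 - f f'') to f = g', where g1, g2, g3 are the
    first three derivatives of the objective g."""
    x = x0
    for _ in range(max_iter):
        f, fp, fpp = g1(x), g2(x), g3(x)
        step = 2.0 * f * fp / (2.0 * fp * fp - f * fpp)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy objective standing in for the interface-angle objective, with analytic derivatives
g1 = lambda t: 2.0 * (t - 0.8) + 0.3 * np.cos(t)   # g'
g2 = lambda t: 2.0 - 0.3 * np.sin(t)               # g''
g3 = lambda t: -0.3 * np.cos(t)                    # g'''
print(halley_minimize(g1, g2, g3, x0=0.5))
```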
Advancing microwave technology for dehydration processing of biologics.
Cellemme, Stephanie L; Van Vorst, Matthew; Paramore, Elisha; Elliott, Gloria D
2013-10-01
Our prior work has shown that microwave processing can be effective as a method for dehydrating cell-based suspensions in preparation for anhydrous storage, yielding homogenous samples with predictable and reproducible drying times. In the current work an optimized microwave-based drying process was developed that expands upon this previous proof-of-concept. Utilization of a commercial microwave (CEM SAM 255, Matthews, NC) enabled continuous drying at variable low power settings. A new turntable was manufactured from Ultra High Molecular Weight Polyethylene (UHMW-PE; Grainger, Lake Forest, IL) to provide for drying of up to 12 samples at a time. The new process enabled rapid and simultaneous drying of multiple samples in containment devices suitable for long-term storage and aseptic rehydration of the sample. To determine sample repeatability and consistency of drying within the microwave cavity, a concentration series of aqueous trehalose solutions were dried for specific intervals and water content assessed using Karl Fischer Titration at the end of each processing period. Samples were dried on Whatman S-14 conjugate release filters (Whatman, Maidestone, UK), a glass fiber membrane used currently in clinical laboratories. The filters were cut to size for use in a 13 mm Swinnex(®) syringe filter holder (Millipore(™), Billerica, MA). Samples of 40 μL volume could be dehydrated to the equilibrium moisture content by continuous processing at 20% with excellent sample-to-sample repeatability. The microwave-assisted procedure enabled high throughput, repeatable drying of multiple samples, in a manner easily adaptable for drying a wide array of biological samples. Depending on the tolerance for sample heating, the drying time can be altered by changing the power level of the microwave unit.
Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.
Mazandarani, Mehran; Pariz, Naser
2018-05-01
This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Y.; Li, Y. P.; Huang, G. H.
2012-06-01
In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed through introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters but also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arriving rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.
Tang, Zhongwen
2015-01-01
An analytical way to compute the predictive probability of success (PPOS) together with a credible interval at interim analysis (IA) is developed for big clinical trials with time-to-event endpoints. The method takes account of the fixed data up to IA, the amount of uncertainty in future data, and uncertainty about parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding the optimal combination of analysis time and futility cutoff based on PPOS criteria.
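The paper derives an analytical expression; the sketch below instead approximates a PPOS of the same flavor by Monte Carlo for a time-to-event endpoint, using the usual normal approximation for the log hazard ratio with information proportional to the number of events. All trial numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def ppos(theta_hat, n_events_interim, n_events_final, alpha=0.025, n_sims=100000, seed=0):
    """Monte Carlo predictive probability of success for a log hazard ratio,
    with statistical information approximated as events / 4 and a flat prior."""
    rng = np.random.default_rng(seed)
    info_i, info_f = n_events_interim / 4.0, n_events_final / 4.0
    # Uncertainty about the true effect given the interim estimate
    theta = rng.normal(theta_hat, 1.0 / np.sqrt(info_i), n_sims)
    # Future data: increment of the score statistic over the remaining events
    incr = rng.normal(theta * (info_f - info_i), np.sqrt(info_f - info_i), n_sims)
    z_final = (theta_hat * info_i + incr) / np.sqrt(info_f)
    return float(np.mean(z_final < norm.ppf(alpha)))   # success = significant benefit

print(ppos(theta_hat=np.log(0.75), n_events_interim=150, n_events_final=400))
```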
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
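The exact generalized weighted squared Euclidean distance proposed in the paper is not given in the abstract; the sketch below uses the common form that sums weighted squared differences of the interval bounds, together with a nearest-prototype classification rule.

```python
import numpy as np

def interval_distance(a, b, weights=None):
    """Weighted squared Euclidean distance between two interval-valued samples.

    `a` and `b` are arrays of shape (n_features, 2) holding [lower, upper] bounds;
    the distance sums weighted squared differences of the bounds per feature.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    w = np.ones(len(a)) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * ((a[:, 0] - b[:, 0]) ** 2 + (a[:, 1] - b[:, 1]) ** 2)))

def classify(x, prototypes, labels, weights=None):
    """Nearest-prototype rule for interval data."""
    d = [interval_distance(x, p, weights) for p in prototypes]
    return labels[int(np.argmin(d))]

# Two toy prototypes, each with two interval features
protos = [np.array([[1.0, 2.0], [0.0, 1.0]]), np.array([[4.0, 6.0], [3.0, 5.0]])]
print(classify(np.array([[1.2, 2.1], [0.2, 0.9]]), protos, labels=["A", "B"]))
```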
Da Silva, M; Garcia, G T; Vizoni, E; Kawamura, O; Hirooka, E Y; Ono, E Y S
2008-05-01
Natural mycoflora and fumonisins were analysed in 490 samples of freshly harvested corn (Zea mays L.) (2003 and 2004 crops) collected at three points in the producing chain from the Northern region of Parana State, Brazil, and correlated to the time interval between the harvesting and the pre-drying step. The two crops showed a similar profile concerning the fungal frequency, and Fusarium sp. was the prevalent genus (100%) for the sampling sites from both crops. Fumonisins were detected in all samples from the three points of the producing chain (2003 and 2004 crops). The levels ranged from 0.11 to 15.32 microg g(-1) in field samples, from 0.16 to 15.90 microg g(-1) in reception samples, and from 0.02 to 18.78 microg g(-1) in pre-drying samples (2003 crop). Samples from the 2004 crop showed lower contamination, and fumonisin levels ranged from 0.07 to 4.78 microg g(-1) in field samples, from 0.03 to 4.09 microg g(-1) in reception samples, and from 0.11 to 11.21 microg g(-1) in pre-drying samples. The mean fumonisin level increased gradually from < or = 5.0 to 19.0 microg g(-1) as the time interval between the harvesting and the pre-drying step increased from 3.22 to 8.89 h (2003 crop). The same profile was observed for samples from the 2004 crop. Fumonisin levels and the time interval showed a positive correlation (rho = 0.96, p < or = 0.05), indicating that delay in the drying process can increase fumonisin levels.
Inner Radiation Belt Dynamics and Climatology
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, P. P.; Looper, M. D.
2012-12-01
We present preliminary results of inner belt proton data assimilation using an augmented version of the Selesnick et al. Inner Zone Model (SIZM). By varying modeled physics parameters and solar particle injection parameters to generate many ensembles of the inner belt, then optimizing the ensemble weights according to inner belt observations from SAMPEX/PET at LEO and HEO/DOS at high altitude, we obtain the best-fit state of the inner belt. We need to fully sample the range of solar proton injection sources among the ensemble members to ensure reasonable agreement between the model ensembles and observations. Once this is accomplished, we find the method is fairly robust. We will demonstrate the data assimilation by presenting an extended interval of solar proton injections and losses, illustrating how these short-term dynamics dominate long-term inner belt climatology.
Jovian thundercloud observation with Jovian orbiter and ground-based telescope
NASA Astrophysics Data System (ADS)
Takahashi, Yukihiro; Nakajima, Kensuke; Takeuchi, Satoru; Sato, Mitsuteru; Fukuhara, Tetsuya; Watanabe, Makoto; Yair, Yoav; Fischer, Georg; Aplin, Karen
The latest observational and theoretical studies suggest that thunderstorms in Jupiter's atmosphere are a very important subject, not only for understanding Jovian meteorology, which may determine large-scale structures such as the belts/zones and the big ovals, but also for probing the water abundance of the deep atmosphere, which is crucial for constraining the behavior of volatiles in the early solar system. Here we propose a very simple high-speed imager on board a Jovian orbiter, the Optical Lightning Detector (OLD), optimized for detecting optical emissions from lightning discharges on Jupiter. OLD consists of radiation-tolerant CMOS sensors and two H Balmer alpha line (656.3 nm) filters. In normal sampling mode the frame interval is 29 ms with a full frame format of 512x512 pixels, and in high-speed sampling mode the interval can be reduced to 0.1 ms by restricting readout to a limited area of 30x30 pixels. Weight, size and power consumption are about 1 kg, 16x7x5.5 cm (sensor) and 16x12x4 cm (circuit), and 4 W, respectively, though they can be reduced according to the spacecraft resources and the required environmental tolerance. We also plan to investigate the optical flashes using a ground-based middle-sized telescope, to be built by Hokkaido University, with a narrow-band high-speed imaging unit based on an EM-CCD camera. The observational strategy with these optical lightning detectors and spectral imagers, which enables us to estimate the horizontal motion and altitude of clouds, will be introduced.
Song, Gaoguang; Liu, Yujie; Wang, Yanying; Ren, Guanjun; Guo, Shuai; Ren, Junling; Zhang, Li; Li, Zhili
2015-02-02
Disease-specific humoral immune response-related protein complexes in blood are associated with disease progression. Thirty-one patients with stage IIIB and IV non-small-cell lung cancer (NSCLC) received oral icotinib hydrochloride (150 mg twice daily or 125 mg 3 times daily) in continuous 28-day cycles until disease progressed or unacceptable toxicity occurred. The levels of immunoinflammation-related protein complexes (IIRPCs) in a series of plasma samples from the 31 NSCLC patients treated with icotinib hydrochloride were determined by an optimized native polyacrylamide gel electrophoresis. Three characteristic patterns of the IIRPCs, designated patterns a, b, and c, were detected in plasma samples from the 31 patients. Prior to the treatment, there were 18 patients in pattern a consisting of five IIRPCs, 9 in pattern b consisting of six IIRPCs, and 4 in pattern c without the IIRPCs. The levels of the IIRPCs in 27 patients were quantified. Our results indicate that the time length of humoral immune and inflammation response (TLHIIR) was closely associated with disease progression; the median TLHIIR was 22.0 weeks, 95% confidence interval: 16.2 to 33.0 weeks, with a median lead time of 11 weeks relative to clinical imaging evidence confirmed by computed tomography or magnetic resonance imaging (median progression-free survival, 34.0 weeks, 95% confidence interval: 27.9 to 49.0 weeks). Complex relationships exist between humoral immune response, acquired resistance, and disease progression. Personalized IIRPCs could serve as indicators for monitoring disease progression. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products increases the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points severalfold while maintaining the required measurement accuracy.
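The following sketch illustrates the general idea of the Monte Carlo approach described above: repeatedly simulate measurement of a workpiece with a given number of control points and derive an interval estimate of the resulting measurement error. The surface model, the noise level, and the flatness definition (peak-to-valley range of sampled points) are simplifying assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_flatness_error(n_points, n_trials=2000, noise=0.002):
    """Monte Carlo interval estimate of the flatness-measurement error.

    Illustrative model (assumption): the true form deviation is a shallow
    polynomial bowl, 'flatness' is the peak-to-valley range of z, and each
    control point is measured with additive Gaussian noise.
    """
    # dense reference grid gives the "true" flatness
    gx, gy = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
    true_z = 0.01 * ((gx - 0.5) ** 2 + (gy - 0.5) ** 2)
    true_flatness = true_z.max() - true_z.min()

    errors = np.empty(n_trials)
    for t in range(n_trials):
        x, y = rng.random(n_points), rng.random(n_points)
        z = 0.01 * ((x - 0.5) ** 2 + (y - 0.5) ** 2) + rng.normal(0, noise, n_points)
        errors[t] = (z.max() - z.min()) - true_flatness
    # interval estimate of the measurement error (2.5th-97.5th percentiles)
    return np.percentile(errors, [2.5, 97.5]), errors.mean(), errors.std()

for n in (10, 25, 100):
    ci, mean_err, sd = simulate_flatness_error(n)
    print(f"n={n:4d}  mean error={mean_err:+.4f}  sd={sd:.4f}  95% interval={ci}")
```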
Non-invasive surveillance for Plasmodium in reservoir macaque species.
Siregar, Josephine E; Faust, Christina L; Murdiyarso, Lydia S; Rosmanah, Lis; Saepuloh, Uus; Dobson, Andrew P; Iskandriati, Diah
2015-10-12
Primates are important reservoirs for human diseases, but their infection status and disease dynamics are difficult to track in the wild. Within the last decade, a macaque malaria, Plasmodium knowlesi, has caused disease in hundreds of humans in Southeast Asia. In order to track cases and understand zoonotic risk, it is imperative to be able to quantify infection status in reservoir macaque species. In this study, protocols for the collection of non-invasive samples and isolation of malaria parasites from naturally infected macaques are optimized. Paired faecal and blood samples from 60 Macaca fascicularis and four Macaca nemestrina were collected. All animals came from Sumatra or Java and were housed in semi-captive breeding colonies around West Java. DNA was extracted from samples using a modified protocol. Nested polymerase chain reactions (PCR) were run to detect Plasmodium using primers targeting mitochondrial DNA. Sensitivity of screening faecal samples for Plasmodium was compared to other studies using Kruskal-Wallis tests and logistic regression models. The best primer set was 96.7 % (95 % confidence interval (CI): 83.3-99.4 %) sensitive for detecting Plasmodium in faecal samples of naturally infected macaques (n = 30). This is the first study to produce definitive estimates of Plasmodium sensitivity and specificity in faecal samples from naturally infected hosts. The sensitivity was significantly higher than some other studies involving wild primates. Faecal samples can be used for detection of malaria infection in field surveys of macaques, even when there are no parasites visible in thin blood smears. Repeating samples from individuals will improve inferences of the epidemiology of malaria in wild primates.
Levonorgestrel release rates over 5 years with the Liletta® 52-mg intrauterine system.
Creinin, Mitchell D; Jansen, Rolf; Starr, Robert M; Gobburu, Joga; Gopalakrishnan, Mathangi; Olariu, Andrea
2016-10-01
To understand the potential duration of action for Liletta®, we conducted this study to estimate levonorgestrel (LNG) release rates over approximately 5.5 years of product use. Clinical sites in the U.S. Phase 3 study of Liletta collected the LNG intrauterine systems (IUSs) from women who discontinued the study. We randomly selected samples within 90-day intervals after discontinuation of IUS use through 900 days (approximately 2.5 years) and 180-day intervals for the remaining duration through 5.4 years (1980 days) to evaluate residual LNG content. We also performed an initial LNG content analysis using 10 randomly selected samples from a single lot. We calculated the average ex vivo release rate using the residual LNG content over the duration of the analysis. We analyzed 64 samples within 90-day intervals (range 6-10 samples per interval) through 900 days and 36 samples within 180-day intervals (6 samples per interval) for the remaining duration. The initial content analysis averaged 52.0 ± 1.8 mg. We calculated an average initial release rate of 19.5 mcg/day that decreased to 17.0, 14.8, 12.9, 11.3 and 9.8 mcg/day after 1, 2, 3, 4 and 5 years, respectively. The 5-year average release rate is 14.7 mcg/day. The estimated initial LNG release rate and gradual decay of the estimated release rate are consistent with the target design and function of the product. The calculated LNG content and release rate curves support the continued evaluation of Liletta as a contraceptive for 5 or more years of use. Liletta LNG content and release rates are comparable to published data for another LNG 52-mg IUS. The release rate at 5 years is more than double the published release rate at 3 years with an LNG 13.5-mg IUS, suggesting continued efficacy of Liletta beyond 5 years. Copyright © 2016 Elsevier Inc. All rights reserved.
Feasibility study of TSPO quantification with [18F]FEPPA using population-based input function.
Mabrouk, Rostom; Strafella, Antonio P; Knezevic, Dunja; Ghadery, Christine; Mizrahi, Romina; Gharehgazlou, Avideh; Koshimori, Yuko; Houle, Sylvain; Rusjan, Pablo
2017-01-01
The input function (IF) is a core element in the quantification of Translocator protein 18 kDa with positron emission tomography (PET), as no suitable reference region with negligible binding has been identified. Arterial blood sampling is therefore needed to create the IF (ASIF). In the present manuscript we study individualization of a population-based input function (PBIF) with a single arterial manual sample to estimate total distribution volume (VT) for [18F]FEPPA and to replicate previously published clinical studies in which the ASIF was used. The data from 3 previous [18F]FEPPA studies (39 healthy controls (HC), 16 patients with Parkinson's disease (PD), and 18 with Alzheimer's disease (AD)) were reanalyzed with the new approach. PBIF was used with the Logan graphical analysis (GA) neglecting the vascular contribution to estimate VT. The time of linearization of the GA was determined with the maximum error criterion. The optimal calibration of the PBIF was determined based on the area under the curve (AUC) of the IF and the agreement range of VT between methods. The shape of the IF between groups was studied while taking into account genotyping of the polymorphism (rs6971). PBIF scaled with a single value of activity due to unmetabolized radioligand in arterial plasma, calculated as the average of a sample taken at 60 min and a sample taken at 90 min post-injection, yielded a good interval of agreement between methods and optimized the area under the curve of the IF. In HC, gray matter VTs estimated by PBIF highly correlated with those using the standard method (r2 = 0.82, p = 0.0001). Bland-Altman plots revealed that PBIF slightly underestimates (~1 mL/cm3) VT calculated by ASIF (including a vascular contribution). It was verified that the AUC of the ASIF was independent of genotype and disease (HC, PD, and AD). Previous clinical results were replicated using PBIF but with lower statistical power. A single arterial blood sample taken 75 minutes post-injection contains enough information to individualize the IF in the groups of subjects studied; however, the higher variability produced requires an increase in sample size to reach the same effect size.
Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel
2018-04-03
Purpose To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using ITK-SNAP active contour algorithm. Accuracy of ITK-SNAP procedure was assessed by using Dice coefficient and Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods on the basis of thresholding at 41% volume of interest (VOI; VOI41) and 50% VOI (VOI50) of the tumor's maximal metabolism intensity. Results Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies of the confidence intervals that indicate the prediction accuracy of quantile estimates for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
Partitioned-Interval Quantum Optical Communications Receiver
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2013-01-01
The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment in the beginning of the symbol interval used to make an initial guess with better probability than 50/50 guessing, and a much longer segment used to make the high-sensitivity signal detection via field-cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30, using the best available measurements such as classical coherent or "optimized Kennedy" detection.
Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A
2015-01-01
Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition according to prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues is important in attempting to avoid hospital utilization for people with chronic illnesses.
Antunes, Rafael Souza; Ferraz, Denes; Garcia, Luane Ferreira; Thomaz, Douglas Vieira; Luque, Rafael; Lobón, Germán Sanz; Gil, Eric de Souza; Lopes, Flávio Marques
2018-05-15
In this work, an innovative polyphenol oxidase biosensor was developed from Jenipapo (Genipa americana L.) fruit and used to assess phenolic compounds in industrial effluent samples obtained from a textile plant located in Jaraguá-GO, Brazil. The biosensor was prepared and optimized with respect to the proportion of crude vegetal extract, pH, and the voltammetric parameters for differential pulse voltammetry. The calibration curve showed a linear range from 10 to 310 µM (r² = 0.9982) and a limit of detection of 7 µM. Biosensor stability was evaluated over 15 days, after which it retained 88.22% of the initial response. The amount of catechol standard recovered post-analysis varied between 87.50% and 96.00%. Moreover, the biosensor was able to detect phenolic compounds in a real sample, and the results were in accordance with standard spectrophotometric assays. Therefore, the proposed biosensor is a promising tool for the detection and quantification of phenolic compounds as environmental contaminants.
Weber, G; Bauer, J
1998-06-01
On fractionation of highly heterogeneous protein mixtures, optimal resolution was achieved by forcing proteins to migrate through a preestablished pH gradient until they entered a medium with a pH similar but not equal to their pIs. For this purpose, up to seven different media were pumped through the electrophoresis chamber so that they were flowing adjacently to each other, forming a pH gradient declining stepwise from the cathode to the anode. This gradient had a sufficiently strong band-focusing effect to counterbalance the sample-distortion effects of the flowing medium as proteins approached their isoelectric medium closer than 0.5 pH units. Continuous free-flow zone electrophoresis (FFZE) with high throughput capability was applicable if proteins did not precipitate or aggregate in these media. If components of heterogeneous protein mixtures had already started to precipitate or aggregate in a medium with a pH exceeding their pI by more than 0.5 pH units, the application of the interval mode and of media forming flat pH gradients appeared advantageous.
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper studies the timetable optimization of feeder connections between urban rail transit and conventional buses at a shopping center. To connect with rail transit effectively and improve the coordination between the two modes, the departure intervals must be optimized so as to shorten passenger transfer times and improve the service level of public transit. An optimization model for the bus departure times is established with the objective of minimizing the total passenger waiting time and the number of dispatched buses, subject to constraints on transfer time, load factor, and the spacing of the connecting public transport network. The model is solved using a genetic algorithm.
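A minimal sketch of the kind of genetic algorithm the study describes is given below: a single decision variable (the bus headway) is evolved to minimize total passenger waiting time plus a fleet-size penalty. The train timetable, passenger counts, penalty weight, and GA settings are all illustrative assumptions, and the original model's full constraint set (load factor, network spacing) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data (assumptions): train arrival times over a 60-minute
# period and the number of transferring passengers on each train.
train_arrivals = np.array([2, 9, 17, 24, 33, 41, 48, 55], dtype=float)
transfer_pax   = np.array([40, 55, 30, 60, 45, 50, 35, 25], dtype=float)
PERIOD, COST_PER_BUS = 60.0, 120.0   # minutes, waiting-minute equivalent per bus

def cost(headway):
    """Total passenger waiting time plus a penalty on the number of buses."""
    n_buses = int(np.ceil(PERIOD / headway))
    departures = headway * np.arange(1, n_buses + 1)
    idx = np.searchsorted(departures, train_arrivals)   # first bus at/after arrival
    wait = departures[idx] - train_arrivals
    return float(np.sum(wait * transfer_pax) + COST_PER_BUS * n_buses)

def genetic_algorithm(pop_size=30, generations=60, lo=3.0, hi=15.0):
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(generations):
        fitness = np.array([cost(h) for h in pop])
        # binary tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where(fitness[i] < fitness[j], pop[i], pop[j])
        # arithmetic crossover with a shuffled partner, then Gaussian mutation
        partners = rng.permutation(parents)
        alpha = rng.random(pop_size)
        children = alpha * parents + (1 - alpha) * partners + rng.normal(0, 0.3, pop_size)
        pop = np.clip(children, lo, hi)
    best = min(pop, key=cost)
    return best, cost(best)

headway, objective = genetic_algorithm()
print(f"best headway ≈ {headway:.2f} min, objective = {objective:.1f}")
```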
Reversing the Course of Forgetting
ERIC Educational Resources Information Center
White, K. Geoffrey; Brown, Glenn S.
2011-01-01
Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance…
Peeters, R; Galesloot, P J B
2002-03-01
The objective of this study was to estimate the daily fat yield and fat percentage from one sampled milking per cow per test day in an automatic milking system herd, when the milking times and milk yields of all individual milkings are recorded by the automatic milking system. Multiple regression models were used to estimate the 24-h fat percentage when only one milking is sampled for components and milk yields and milking times are known for all milkings in the 24-h period before the sampled milking. In total, 10,697 cow test day records, from 595 herd tests at 91 Dutch herds milked with an automatic milking system, were used. The best model to predict 24-h fat percentage included fat percentage, protein percentage, milk yield and milking interval of the sampled milking, milk yield, and milking interval of the preceding milking, and the interaction between milking interval and the ratio of fat and protein percentage of the sampled milking. This model gave a standard deviation of the prediction error (SE) for 24-h fat percentage of 0.321 and a correlation between the predicted and actual 24-h fat percentage of 0.910. For the 24-h fat yield, we found SE = 90 g and correlation = 0.967. This precision is slightly better than that of present a.m.-p.m. testing schemes. Extra attention must be paid to correctly matching the sample jars and the milkings. Furthermore, milkings with an interval of less than 4 h must be excluded from sampling as well as milkings that are interrupted or that follow an interrupted milking. Under these restrictions (correct matching, interval of at least 4 h, and no interrupted milking), one sampled milking suffices to get a satisfactory estimate for the test-day fat yield.
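The regression structure described in the abstract above can be written down directly; the sketch below fits such a model on synthetic toy records with statsmodels. The column names, the simulated values, and the toy relationship between the predictors and the 24-h fat percentage are assumptions made purely so that the example runs; only the model formula follows the abstract's description.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
# Synthetic toy records standing in for cow test-day data (purely illustrative).
df = pd.DataFrame({
    "fat_s":  rng.normal(4.3, 0.6, n),   # fat % of the sampled milking
    "prot_s": rng.normal(3.4, 0.3, n),   # protein % of the sampled milking
    "my_s":   rng.normal(11.0, 2.5, n),  # milk yield (kg) of the sampled milking
    "mi_s":   rng.uniform(5, 12, n),     # milking interval (h) of the sampled milking
    "my_p":   rng.normal(11.0, 2.5, n),  # milk yield of the preceding milking
    "mi_p":   rng.uniform(5, 12, n),     # milking interval of the preceding milking
})
df["fp_ratio_s"] = df["fat_s"] / df["prot_s"]
# toy target: a 24-h fat percentage loosely related to the predictors
df["fat24"] = 0.8 * df["fat_s"] + 0.02 * df["mi_s"] + rng.normal(0, 0.15, n)

# Model structure follows the abstract: main effects of the sampled and
# preceding milking plus the interval-by-fat/protein-ratio interaction.
model = smf.ols("fat24 ~ fat_s + prot_s + my_s + mi_s + my_p + mi_p + mi_s:fp_ratio_s",
                data=df).fit()
print(model.params)
```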
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.
Lee, Sunbok; Lei, Man-Kit; Brody, Gene H
2015-06-01
Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
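For a concrete sense of the quantity being interval-estimated, the sketch below computes the crossover point of two simple regression lines in a moderated regression y = b0 + b1·x + b2·z + b3·x·z (the lines of y on x at any two moderator values cross at x* = -b2/b3) and builds a percentile bootstrap confidence interval for it, one of the six methods compared in the study. The simulated data and all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy moderated-regression data: y = b0 + b1*x + b2*z + b3*x*z + noise
n = 200
x = rng.normal(0, 1, n)
z = rng.normal(0, 1, n)
y = 1.0 + 0.5 * x + 0.8 * z - 0.4 * x * z + rng.normal(0, 1, n)

def crossover(xv, zv, yv):
    """OLS fit of y ~ 1 + x + z + x*z; the simple regression lines of y on x
    at any two moderator values cross at x* = -b2 / b3."""
    X = np.column_stack([np.ones_like(xv), xv, zv, xv * zv])
    b = np.linalg.lstsq(X, yv, rcond=None)[0]
    return -b[2] / b[3]

point = crossover(x, z, y)
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = crossover(x[idx], z[idx], y[idx])
ci = np.percentile(boot, [2.5, 97.5])
print(f"crossover point = {point:.3f}, 95% percentile bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```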
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
ERIC Educational Resources Information Center
Radley, Keith C.; O'Handley, Roderick D.; Labrot, Zachary C.
2015-01-01
Assessment in social skills training often utilizes procedures such as partial-interval recording (PIR) and momentary time sampling (MTS) to estimate changes in duration in social engagements due to intervention. Although previous research suggests PIR to be more inaccurate than MTS in estimating levels of behavior, treatment analysis decisions…
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Approximation Set of the Interval Set in Pawlak's Space
Wang, Jin; Wang, Guoyin
2014-01-01
The interval set is a special set, which describes uncertainty of an uncertain concept or set Z with its two crisp boundaries named upper-bound set and lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined at first, and then the similarity degrees between an interval set and its two approximations (i.e., upper approximation set R¯(Z) and lower approximation set R_(Z)) are presented, respectively. The disadvantages of using upper-approximation set R¯(Z) or lower-approximation set R_(Z) as approximation sets of the uncertain set (uncertain concept) Z are analyzed, and a new method for looking for a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R0.5(Z) is an optimal approximation set of interval set Z is drawn and proved successfully. The change rules of R0.5(Z) with different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately, MI-Val, MI on the full data set followed by internal validation, and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
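A minimal sketch of the split-on-demand idea for a single continuous variable is given below, based only on the description in the abstract: an interval is split at a random position whenever it contains more than a threshold number of search points, nonempty intervals then receive integer codes, and points are replaced by their codes. The split-position rule, the stopping guard, and the omission of the per-iteration threshold decay and of the ECGA integration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def split_on_demand(points, lo, hi, threshold):
    """Sketch of split-on-demand (SoD) discretization for one variable."""
    intervals = [(lo, hi)]
    changed = True
    while changed:
        changed = False
        new_intervals = []
        for a, b in intervals:
            inside = points[(points >= a) & (points < b)]
            # split at a random interior position if the interval is overfull
            if len(inside) > threshold and b - a > 1e-9:
                cut = rng.uniform(a, b)
                new_intervals += [(a, cut), (cut, b)]
                changed = True
            else:
                new_intervals.append((a, b))
        intervals = sorted(new_intervals)
    # keep nonempty intervals and assign integer codes
    nonempty = [(a, b) for a, b in intervals
                if np.any((points >= a) & (points < b))]
    codes = np.zeros(len(points), dtype=int)
    for code, (a, b) in enumerate(nonempty):
        codes[(points >= a) & (points < b)] = code
    return nonempty, codes

pts = rng.random(30)
ivals, codes = split_on_demand(pts, 0.0, 1.0, threshold=5)
print(len(ivals), "intervals;", codes)
```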
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The proposed IPMs prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or, when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
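A much-simplified sketch of the optimization-based construction is shown below: a pair of bounding lines of minimal average spread is fitted by linear programming so that every observation falls between them. This illustrates only the "minimal spread containing all observations" idea; the paper's hyper-rectangular parameter sets, outlier elimination, and reliability bounds are not reproduced, and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 40))
y = 2.0 + 0.7 * x + rng.normal(0, 0.8, 40)      # toy observations

# Variables v = [a_l, b_l, a_u, b_u] define lower/upper bounding lines
# l(x) = a_l + b_l*x and u(x) = a_u + b_u*x.  Minimize the average width
# u(x_i) - l(x_i) subject to l(x_i) <= y_i <= u(x_i) for every observation.
xbar = x.mean()
c = np.array([-1.0, -xbar, 1.0, xbar])

# l(x_i) <= y_i  ->  a_l + b_l*x_i <= y_i
A1 = np.column_stack([np.ones_like(x), x, np.zeros_like(x), np.zeros_like(x)])
# y_i <= u(x_i)  ->  -a_u - b_u*x_i <= -y_i
A2 = np.column_stack([np.zeros_like(x), np.zeros_like(x), -np.ones_like(x), -x])
res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([y, -y]),
              bounds=[(None, None)] * 4)

a_l, b_l, a_u, b_u = res.x
print(f"lower line: {a_l:.2f} + {b_l:.2f}x   upper line: {a_u:.2f} + {b_u:.2f}x")
```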
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
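For context, the sketch below shows the standard EGO step that the article builds on: fit a Gaussian-process (kriging) model to the evaluated designs, compute the expected improvement over a candidate set, and evaluate the expensive function at its maximizer. The toy objective, kernel choice, and grid-based maximization of the infill criterion are assumptions; the article's multi-sample infill strategy is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(11)

def f(x):                      # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + 0.5 * x

# initial design
X = rng.uniform(0, 5, (6, 1))
y = f(X).ravel()

for iteration in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    # candidate grid (a global optimizer would normally be used to maximize EI)
    Xc = np.linspace(0, 5, 501).reshape(-1, 1)
    mu, sigma = gp.predict(Xc, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_new = Xc[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new)[0])

print(f"best sample found: x = {X[np.argmin(y)][0]:.3f}, f = {y.min():.3f}")
```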
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
Patient-specific Distraction Regimen to Avoid Growth-rod Failure.
Agarwal, Aakash; Jayaswal, Arvind; Goel, Vijay K; Agarwal, Anand K
2018-02-15
A finite element study to establish the relationship between a patient's curve flexibility (determined using curve correction under gravity) in juvenile idiopathic scoliosis and the distraction frequency required to avoid growth rod fracture, as a function of time. To perform a parametric analysis using a juvenile scoliotic spine model (single mid-thoracic curve with the apex at the eighth thoracic vertebra) and establish the relationship between curve flexibility (determined using curve correction under gravity) and the distraction interval that allows a higher factor of safety for the growth rods. Previous studies have shown that frequent distractions of smaller magnitude are less likely to result in rod failure. However, no methodology or chart has been provided for applying this knowledge to the individual patients who undergo the treatment. This study aims to fill that gap. The parametric study was performed by varying the material properties of the disc, hence altering the axial stiffness of the scoliotic spine model. The stresses on the rod were found to increase with increased axial stiffness of the spine, and this increased the distraction frequency required to achieve a factor of safety of two for the growth rods. A relationship between the percentage correction in Cobb's angle due to gravity alone and the required distraction interval for limiting the maximum von Mises stress to 255 MPa on the growth rods was established. The distraction interval required to limit the stresses to the selected nominal value reduces with increasing stiffness of the spine. Furthermore, the appropriate distraction interval reduces for each model as the spine becomes stiffer with time (autofusion). This points to the fact that the optimal distraction frequency is a time-dependent variable that must be achieved to keep the maximum von Mises stress under the specified factor of safety. The current study demonstrates the possibility of translating fundamental information from finite element modeling to the clinical arena for mitigating the occurrence of growth rod fracture, that is, establishing a relationship between the optimal distraction interval and curve flexibility (determined using curve correction under gravity). N/A.
Kelishadi, Roya; Marateb, Hamid Reza; Mansourian, Marjan; Ardalan, Gelayol; Heshmat, Ramin; Adeli, Khosrow
2016-08-01
This study aimed to determine for the first time the age- and gender-specific reference intervals for biomarkers of bone, metabolism, nutrition, and obesity in a nationally representative sample of Iranian children and adolescents. We assessed the data of blood samples obtained from healthy Iranian children and adolescents, aged 7 to 19 years. The reference intervals of glucose, lipid profile, liver enzymes, zinc, copper, chromium, magnesium, and 25-hydroxy vitamin D [25(OH)D] were determined according to the Clinical & Laboratory Standards Institute C28-A3 guidelines. The reference intervals were partitioned using the Harris-Boyd method according to age and gender. The study population consisted of 4800 school students (50% boys, mean age of 13.8 years). Twelve chemistry analyses were partitioned by age and gender, displaying the range of results between the 2.5th and 97.5th percentiles. Significant differences existed only between boys and girls at 18 to 19 years of age for low density lipoprotein-cholesterol. 25(OH)D was the only analyte whose reference interval was similar across all age groups and both sexes. This study presented the first national database of reference intervals for a number of biochemical markers in Iranian children and adolescents. It is the first report of its kind from the Middle East and North Africa. The findings underscore the importance of providing reference intervals in different ethnicities and in various regions.
Monthly fluctuations of insomnia symptoms in a population-based sample.
Morin, Charles M; Leblanc, M; Ivers, H; Bélanger, L; Mérette, Chantal; Savard, Josée; Jarrin, Denise C
2014-02-01
To document the monthly changes in sleep/insomnia status over a 12-month period; to determine the optimal time intervals to reliably capture new incident cases and recurrent episodes of insomnia and the likelihood of its persistence over time. Participants were 100 adults (mean age = 49.9 years; 66% women) randomly selected from a larger population-based sample enrolled in a longitudinal study of the natural history of insomnia. They completed 12 monthly telephone interviews assessing insomnia, use of sleep aids, stressful life events, and physical and mental health problems in the previous month. A total of 1,125 interviews of a potential 1,200 were completed. Based on data collected at each assessment, participants were classified into one of three subgroups: good sleepers, insomnia symptoms, and insomnia syndrome. At baseline, 42 participants were classified as good sleepers, 34 met criteria for insomnia symptoms, and 24 for an insomnia syndrome. There were significant fluctuations of insomnia over time, with 66% of the participants changing sleep status at least once over the 12 monthly assessments (51.5% for good sleepers, 59.5% for insomnia syndrome, and 93.4% for insomnia symptoms). Changes of status were more frequent among individuals with insomnia symptoms at baseline (mean = 3.46, SD = 2.36) than among those initially classified as good sleepers (mean = 2.12, SD = 2.70). Among the subgroup with insomnia symptoms at baseline, 88.3% reported improved sleep (i.e., became good sleepers) at least once over the 12 monthly assessments compared to 27.7% whose sleep worsened (i.e., met criteria for an insomnia syndrome) during the same period. Among individuals classified as good sleepers at baseline, risks of developing insomnia symptoms and syndrome over the subsequent months were, respectively, 48.6% and 14.5%. Monthly assessment over an interval of 6 months was found most reliable to estimate incidence rates, while an interval of 3 months proved the most reliable for defining chronic insomnia. Monthly assessment of insomnia and sleep patterns revealed significant variability over the course of a 12-month period. These findings highlight the importance for future epidemiological studies of conducting repeated assessment at shorter than the typical yearly interval in order to reliably capture the natural course of insomnia over time.
ERIC Educational Resources Information Center
Strazzeri, Kenneth Charles
2013-01-01
The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…
Kalim, Shahid; Nazir, Shaista; Khan, Zia Ullah
2013-01-01
Protocols based on newer high sensitivity Troponin T (hsTropT) assays can rule in a suspected Acute Myocardial Infarction (AMI) as early as 3 hours. We conducted this study to audit adherence to our Trust's newly introduced AMI diagnostic protocol based on paired hsTropT testing at 0 and 3 hours. We retrospectively reviewed data of all patients who had an hsTropT test performed between 1st and 7th May 2012. Patients' demographics, use of single or paired samples, time interval between paired samples, presenting symptoms and ECG findings were noted, and their means, medians, standard deviations and proportions were calculated. A total of 66 patients had an hsTropT test performed during this period. Mean age was 63.30 +/- 17.46 years and 38 (57.57%) were males. Twenty-four (36.36%) patients had only a single hsTropT sample taken, rather than the protocol-recommended paired samples. Among the 42 (63.63%) patients with paired samples, the mean time interval was 4.41 +/- 5.7 hours. Contrary to the recommendations, 15 (22.73%) had a very long, and 2 (3.03%) a very short, time interval between the two samples. A subgroup analysis of patients with single samples found only 2 (3.03%) patients with ST-segment elevation, for whom single testing was appropriate. Our study confirmed that in a large number of patients the protocol for paired sampling or the recommended time interval of 3 hours between the 2 samples was not being followed.
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
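The profile-likelihood construction itself is generic; the sketch below illustrates it for a simple logistic regression slope rather than an IRT model: the nuisance intercept is profiled out, and the CI endpoints are the slope values at which the profiled deviance rises by the chi-square cutoff. The simulated data, search brackets, and model are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.stats import chi2

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 300)
p = 1 / (1 + np.exp(-(-0.3 + 1.2 * x)))
y = rng.binomial(1, p)

def negloglik(alpha, beta):
    eta = alpha + beta * x
    return np.sum(np.logaddexp(0, eta) - y * eta)

def profile_nll(beta):
    """Profile out the intercept for a fixed slope."""
    return minimize_scalar(lambda a: negloglik(a, beta)).fun

# full ML fit of the slope (maximize over both parameters)
beta_hat = minimize_scalar(profile_nll, bounds=(-5, 5), method="bounded").x
nll_hat = profile_nll(beta_hat)
cut = chi2.ppf(0.95, df=1) / 2      # profile-likelihood threshold

g = lambda b: profile_nll(b) - nll_hat - cut
lower = brentq(g, beta_hat - 3, beta_hat)
upper = brentq(g, beta_hat, beta_hat + 3)
print(f"slope = {beta_hat:.3f}, 95% profile-likelihood CI = ({lower:.3f}, {upper:.3f})")
```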
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
Garg, Amit; Biello, Katie; Hoot, Joyce W; Reddy, Shalini B; Wilson, Lindsay; George, Paul; Robinson-Bostom, Leslie; Belazarian, Leah; Domingues, Erik; Powers, Jennifer; Jacob, Reza; Powers, Michael; Besen, Justin; Geller, Alan C
2015-12-01
Assessing medical students on core skills related to melanoma detection is challenging in the absence of a well-developed instrument. We sought to develop an objective structured clinical examination for the detection and evaluation of melanoma among medical students. This was a prospective cohort analysis of student and objective rater agreement on performance of clinical skills and assessment of differences in performance across 3 schools. Kappa coefficients indicated excellent agreement for 3 of 5 core skills including commenting on the presence of the moulage (k = 0.87, 95% confidence interval 0.77-0.96), obtaining a history for the moulage (k = 0.84, 95% confidence interval 0.74-0.94), and making a clinical impression (k = 0.80, 95% confidence interval 0.68-0.92). There were no differences in performance across schools with respect to 3 of 5 core skills: commenting on the presence of the moulage (P = .15), initiating a history (P = .53), and managing the suspicious lesion (P value range .07-.17). Overall, 54.2% and 44.7% of students commented on the presence of the moulage and achieved maximum performance of core skills, respectively, with no difference in performance across schools. Limitations include overall sample size of students and schools. The Skin Cancer Objective Structured Clinical Examination represents a potentially important instrument to measure students' performance on the optimal step-by-step evaluation of a melanoma. Copyright © 2015 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
McKinn, Shannon; Bonner, Carissa; Jansen, Jesse; Teixeira-Pinto, Armando; So, Matthew; Irwig, Les; Doust, Jenny; Glasziou, Paul; McCaffery, Kirsten
2016-08-05
Guidelines on cardiovascular disease (CVD) risk reassessment intervals are unclear, potentially leading to detrimental practice variation: reassessment that is too frequent can result in overtreatment and greater strain on the healthcare system, while reassessment that is too infrequent could leave high-risk patients who require medication neglected. This study aimed to understand the different factors that general practitioners (GPs) consider when deciding on the reassessment interval for patients previously assessed for primary CVD risk. This paper combines quantitative and qualitative data regarding reassessment intervals from two separate studies of CVD risk management. Experimental study: 144 Australian GPs viewed a random selection of hypothetical cases via a paper-based questionnaire, in which blood pressure, cholesterol and 5-year absolute risk (AR) were systematically varied to appear lower or higher. GPs were asked how they would manage each case, including an open-ended response for when they would reassess the patient. Interview study: Semi-structured interviews were conducted with a purposive sample of 25 Australian GPs, recruited separately from the GPs in the experimental study. Transcribed audio-recordings were thematically coded, using the Framework Analysis method. GPs stated that they would reassess the majority of patients across all absolute risk categories in 6 months or less (low AR = 52 % [CI95% = 47-57 %], moderate AR = 82 % [CI95% = 76-86 %], high AR = 87 % [CI95% = 82-90 %], total = 71 % [CI95% = 67-75 %]), with 48 % (CI95% = 43-53 %) of patients reassessed in under 3 months. The majority (75 % [CI95% = 70-79 %]) of patients with low-moderate AR (≤15 %) and an elevated risk factor would be reassessed in under 6 months. Interviews: GPs identified different functions for reassessment and risk factor monitoring, which affected recommended intervals. These included perceived psychosocial benefits to patients, preparing the patient for medication, and identifying barriers to lifestyle change and medication adherence. Reassessment and monitoring intervals were driven by patient motivation to change lifestyle, patient demand, individual risk factors, and GP attitudes. There is substantial variation in reassessment intervals for patients with the same risk profile. This suggests that GPs are not following reassessment recommendations in the Australian guidelines. The use of shorter intervals for low-moderate AR contradicts research on optimal monitoring intervals, and may result in unnecessary costs and over-treatment.
Towards the estimation of effect measures in studies using respondent-driven sampling.
Rotondi, Michael A
2014-06-01
Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
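To make the MOVER construction concrete, the following is a minimal Python sketch for a risk difference, assuming Wilson score intervals for the individual proportions; the counts are illustrative placeholders, and the RDS-specific weighting discussed above is not reproduced.

```python
import math
from scipy.stats import norm

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval for a single proportion x/n."""
    z = norm.ppf(1 - alpha / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def mover_risk_difference(x1, n1, x2, n2, alpha=0.05):
    """MOVER confidence interval for p1 - p2 from two independent samples."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, alpha)
    l2, u2 = wilson_ci(x2, n2, alpha)
    diff = p1 - p2
    lower = diff - math.sqrt((p1 - l1)**2 + (u2 - p2)**2)
    upper = diff + math.sqrt((u1 - p1)**2 + (p2 - l2)**2)
    return diff, lower, upper

# Illustrative counts only: 45/150 in the RDS sample vs 60/140 in the convenience sample.
print(mover_risk_difference(45, 150, 60, 140))
```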
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, such a high number of samples and laboratory analysis is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and is used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method used was based on the detection and removal of outliers to obtain a large sample of animals likely to be healthy from the existing data set. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. This method may also be useful for the determination of reference intervals for different species, ages and gender.
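As an illustration of an a posteriori approach of this kind, the sketch below iteratively trims outliers with Tukey fences and then takes the central 95% of the retained values as the reference interval; the specific outlier rule and the simulated data are assumptions, not the authors' exact procedure.

```python
import numpy as np

def reference_interval(values, k=1.5, max_iter=10):
    """Estimate a 95% reference interval after iterative Tukey-fence outlier removal."""
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        keep = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
        if keep.all():
            break
        x = x[keep]
    lower, upper = np.percentile(x, [2.5, 97.5])
    return lower, upper, x.size

# Illustrative: serum urea values (mmol/L) from an unselected laboratory data set,
# mostly healthy animals plus a smaller group of diseased ones.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(6.0, 1.2, 500), rng.normal(15.0, 3.0, 30)])
print(reference_interval(data))
```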
Cleaning frequency and the microbial load in ice-cream.
Holm, Sonya; Toma, Ramses B; Reiboldt, Wendy; Newcomer, Chris; Calicchia, Melissa
2002-07-01
This study investigates the efficacy of a 62 h cleaning frequency in the manufacturing of ice-cream. Various product and product contact surfaces were sampled progressively throughout the time period between cleaning cycles, and analyzed for microbial growth. The coliform and standard plate counts (SPC) of these samples did not vary significantly over time after 0, 24, 48, or 62 h from Cleaning in Place (CiP). Data for product contact surfaces were significant for the SPC representing sample locations. Some of the variables in cleaning practices had significant influence on microbial loads. An increase in the number of flavors manufactured caused a decrease in SPC within the 24 h interval, but by the 48 h interval the SPC increased. More washouts within the first 24 h interval were favorable, as indicated by decreased SPC. The more frequently the liquefier was sanitized within the 62 h interval, the lower the SPC. This study indicates that food safety was not compromised and safety practices were effectively implemented throughout the process.
Sinha, Gita; Dyalchand, Ashok; Khale, Manisha; Kulkarni, Gopal; Vasudevan, Shubha; Bollinger, Robert C
2008-02-01
Sixty percent of India's HIV cases occur in rural residents. Despite government policy to expand antenatal HIV screening and prevention of maternal-to-child transmission (PMTCT), little is known about HIV testing among rural women during pregnancy. Between January and March 2006, a cross-sectional sample of 400 recently pregnant women from rural Maharashtra was administered a questionnaire regarding HIV awareness, risk, and history of antenatal HIV testing. Thirteen women (3.3%) reported receiving antenatal HIV testing. Neither antenatal care utilization nor history of sexually transmitted infection (STI) symptoms influenced odds of receiving HIV testing. Women who did not receive HIV testing, compared with women who did, were 95% less likely to have received antenatal HIV counseling (odds ratio = 0.05, 95% confidence interval: 0.02 to 0.17) and 80% less aware of an existing HIV testing facility (odds ratio = 0.19, 95% confidence interval: 0.04 to 0.75). Despite measurable HIV prevalence, high antenatal care utilization, and STI symptom history, recently pregnant rural Indian women report low HIV testing. Barriers to HIV testing during pregnancy include lack of discussion by antenatal care providers and lack of awareness of existing testing services. Provider-initiated HIV counseling and testing during pregnancy would optimize HIV prevention for women throughout rural India.
NASA Astrophysics Data System (ADS)
Yang, Z.; Burn, D. H.
2017-12-01
Extreme rainfall events can have devastating impacts on society. To quantify the associated risk, the IDF curve has been used to provide the essential rainfall-related information for urban planning. However, the recent changes in the rainfall climatology caused by climate change and urbanization have made the estimates provided by the traditional regional IDF approach increasingly inaccurate. This inaccuracy is mainly caused by two problems: 1) The ineffective choice of similarity indicators for the formation of a homogeneous group at different regions; and 2) An inadequate number of stations in the pooling group that does not adequately reflect the optimal balance between group size and group homogeneity or achieve the lowest uncertainty in the rainfall quantiles estimates. For the first issue, to consider the temporal difference among different meteorological and topographic indicators, a three-layer design is proposed based on three stages in the extreme rainfall formation: cloud formation, rainfall generation and change of rainfall intensity above urban surface. During the process, the impacts from climate change and urbanization are considered through the inclusion of potential relevant features at each layer. Then to consider spatial difference of similarity indicators for the homogeneous group formation at various regions, an automatic feature selection and weighting algorithm, specifically the hybrid searching algorithm of Tabu search, Lagrange Multiplier and Fuzzy C-means Clustering, is used to select the optimal combination of features for the potential optimal homogenous groups formation at a specific region. For the second issue, to compare the uncertainty of rainfall quantile estimates among potential groups, the two sample Kolmogorov-Smirnov test-based sample ranking process is used. During the process, linear programming is used to rank these groups based on the confidence intervals of the quantile estimates. The proposed methodology fills the gap of including the urbanization impacts during the pooling group formation, and challenges the traditional assumption that the same set of similarity indicators can be equally effective in generating the optimal homogeneous group for regions with different geographic and meteorological characteristics.
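As a small illustration of the two-sample Kolmogorov-Smirnov test used in the ranking step, the sketch below compares a candidate station's annual maxima against a pooled group; the simulated series are placeholders, and the linear-programming ranking itself is not reproduced.

```python
from scipy.stats import genextreme, ks_2samp

# Illustrative annual-maximum rainfall series (mm) for a candidate station and a pooled group.
candidate = genextreme.rvs(c=-0.1, loc=40, scale=10, size=30, random_state=1)
pooled = genextreme.rvs(c=-0.1, loc=42, scale=11, size=300, random_state=2)

res = ks_2samp(candidate, pooled)
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")
# A small p-value would argue against including the candidate station in the pooled group.
```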
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is known as sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Schwacke, Lori H; Hall, Ailsa J; Townsend, Forrest I; Wells, Randall S; Hansen, Larry J; Hohn, Aleta A; Bossart, Gregory D; Fair, Patricia A; Rowles, Teresa K
2009-08-01
To develop robust reference intervals for hematologic and serum biochemical variables by use of data derived from free-ranging bottlenose dolphins (Tursiops truncatus) and examine potential variation in distributions of clinicopathologic values related to sampling sites' geographic locations. 255 free-ranging bottlenose dolphins. Data from samples collected during multiple bottlenose dolphin capture-release projects conducted at 4 southeastern US coastal locations in 2000 through 2006 were combined to determine reference intervals for 52 clinicopathologic variables. A nonparametric bootstrap approach was applied to estimate 95th percentiles and associated 90% confidence intervals; the need for partitioning by length and sex classes was determined by testing for differences in estimated thresholds with a bootstrap method. When appropriate, quantile regression was used to determine continuous functions for 95th percentiles dependent on length. The proportion of out-of-range samples for all clinicopathologic measurements was examined for each geographic site, and multivariate ANOVA was applied to further explore variation in leukocyte subgroups. A need for partitioning by length and sex classes was indicated for many clinicopathologic variables. For each geographic site, few significant deviations from expected number of out-of-range samples were detected. Although mean leukocyte counts did not vary among sites, differences in the mean counts for leukocyte subgroups were identified. Although differences in the centrality of distributions for some variables were detected, the 95th percentiles estimated from the pooled data were robust and applicable across geographic sites. The derived reference intervals provide critical information for conducting bottlenose dolphin population health studies.
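A minimal sketch of the nonparametric bootstrap step, estimating a 95th percentile and a 90% percentile-method confidence interval for it, is given below; the simulated counts are illustrative, and the partitioning and quantile-regression steps are not reproduced.

```python
import numpy as np

def bootstrap_percentile(x, q=95.0, n_boot=5000, ci=0.90, seed=0):
    """Bootstrap estimate of the q-th percentile and a percentile-method CI for it."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    estimates = np.array([
        np.percentile(rng.choice(x, size=x.size, replace=True), q)
        for _ in range(n_boot)
    ])
    alpha = (1.0 - ci) / 2.0
    return np.percentile(x, q), np.percentile(estimates, [100 * alpha, 100 * (1 - alpha)])

# Illustrative: simulated leukocyte counts (10^3 cells/uL) for 255 animals.
rng = np.random.default_rng(42)
wbc = rng.lognormal(mean=2.2, sigma=0.25, size=255)
print(bootstrap_percentile(wbc))
```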
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches of radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain" with partly even different definitions. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
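The observation can be illustrated numerically: uniform samples of a sinusoid taken over full cycles follow the arcsine density, which a histogram of the sample values approximates. The carrier frequency and bin count below are arbitrary choices, not parameters from the proposed method.

```python
import numpy as np

# Uniformly sample one full cycle (two half cycles) of a unit-amplitude sinusoid.
n = 100_000
t = np.linspace(0.0, 1.0, n, endpoint=False)           # one cycle of a 1 Hz carrier
samples = np.sin(2.0 * np.pi * t)

# The histogram of sample values approximates the waveform's PDF, which for a
# sinusoid is the arcsine density 1 / (pi * sqrt(1 - x^2)): low in the middle,
# rising sharply toward the signal extremes.
hist, edges = np.histogram(samples, bins=50, range=(-1.0, 1.0), density=True)
print(np.round(hist[:3], 3), np.round(hist[23:27], 3), np.round(hist[-3:], 3))
```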
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
ERIC Educational Resources Information Center
Zakszeski, Brittany N.; Hojnoski, Robin L.; Wood, Brenna K.
2017-01-01
Classroom engagement is important to young children's academic and social development. Accurate methods of capturing this behavior are needed to inform and evaluate intervention efforts. This study compared the accuracy of interval durations (i.e., 5 s, 10 s, 15 s, 20 s, 30 s, and 60 s) of momentary time sampling (MTS) in approximating the…
Research on the principle and experimentation of optical compressive spectral imaging
NASA Astrophysics Data System (ADS)
Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin
2013-12-01
Optical compressive spectral imaging is a novel spectral imaging technique inspired by compressed sensing, with advantages such as a reduced volume of acquired data, snapshot imaging, and an increased signal-to-noise ratio. Because sampling quality influences the ultimate imaging quality, previously reported systems matched the sampling interval to the modulation interval, but the reduced sampling rate sacrifices part of the original spectral resolution. To overcome this limitation, the requirement that the sampling interval match the modulation interval is dropped, and the spectral channel number of the designed experimental device increases more than threefold compared with the previous method. An imaging experiment is carried out with the experimental setup, and the spectral data cube of the imaged target is reconstructed from the acquired compressed image using the two-step iterative shrinkage/thresholding algorithm. The experimental results indicate that the spectral channel number increases effectively while the reconstructed data remain high-fidelity. The images and spectral curves accurately reflect the spatial and spectral character of the target.
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
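For reference, the default Fisher z' interval that serves as the baseline in this comparison can be computed as in the sketch below; the skewness- and kurtosis-based adjustment proposed in the article is not reproduced, and the simulated data are placeholders.

```python
import numpy as np
from scipy.stats import norm

def pearson_fisher_ci(x, y, alpha=0.05):
    """Default Fisher z' confidence interval for the Pearson correlation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r)                      # Fisher z' transform
    se = 1.0 / np.sqrt(n - 3)
    crit = norm.ppf(1 - alpha / 2)
    return r, np.tanh(z - crit * se), np.tanh(z + crit * se)

rng = np.random.default_rng(7)
x = rng.normal(size=80)
y = 0.5 * x + rng.normal(size=80)
print(pearson_fisher_ci(x, y))
```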
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter of eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized with a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with topographic factors as independent variables was built. In parallel, a multilayer perceptron model based on a neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by drawing on the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (=nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever design and compound, CL/F was well estimated (RSE < 20% for MDZ and <25% for SX) and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since MD required only five sampling times compared to the two UDs, D-optimal sampling times of the MD were included into a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
Finding Intervals of Abrupt Change in Earth Science Data
NASA Astrophysics Data System (ADS)
Zhou, X.; Shekhar, S.; Liess, S.
2011-12-01
In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc., that occurs during a relatively short time period. A similar pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend and, more importantly, can complicate the search for the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change. We first propose a statistical model to quantitatively evaluate the change abruptness and persistence in an interval. Then we design an algorithm to exhaustively examine all the intervals using this model. Intervals passing a threshold test are kept as final results. We evaluate the proposed method with the Climate Research Unit (CRU) precipitation data, whereby we focus on the Sahel rainfall index. Results show that this method can find periods of persistent and abrupt value changes at different temporal scales. We also further optimize the algorithm using a smart strategy, which always examines longer intervals before their subsets. By doing this, we reduce the computational cost to only one third of that of the original algorithm for the above test case. More significantly, the optimized algorithm is also proven to scale up well with data volume and number of changes. In particular, it achieves better performance when dealing with longer change intervals.
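A schematic of the exhaustive-examination idea is sketched below with a deliberately simple stand-in score (shift in mean across the interval divided by interval length) rather than the authors' abruptness/persistence model; the threshold, interval lengths, and simulated series are all illustrative.

```python
import numpy as np

def abrupt_change_intervals(x, min_len=2, max_len=20, threshold=1.5):
    """Exhaustively score candidate intervals [i, j] for an abrupt, persistent shift.
    Score = |mean after - mean before| / interval length (a simple stand-in statistic)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    hits = []
    for i in range(1, n - 1):
        for j in range(i + min_len - 1, min(i + max_len, n - 1)):
            before, after = x[:i], x[j + 1:]
            score = abs(after.mean() - before.mean()) / (j - i + 1)
            if score > threshold:
                hits.append((i, j, score))
    return sorted(hits, key=lambda h: -h[2])

# Illustrative series with one sharp downward shift (loosely mimicking a rainfall index).
rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(1.0, 0.3, 40), rng.normal(-1.0, 0.3, 40)])
print(abrupt_change_intervals(series, threshold=0.5)[:3])
```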
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning limits the variety of admissible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, because the level of xenon poisoning is bounded. This raises the problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates are plotted as functions of the xenon limitation. The boundaries of the averaging interval are thereby defined more precisely.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
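A minimal sketch of an iterative proxy-based loop of this kind is shown below, using an RBF surrogate fitted to past evaluations and a cheap inner optimization to propose the next expensive evaluation; the toy objective, bounds, and parameter choices are assumptions, not the reservoir model or the algorithm settings used in the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_model(u):
    """Stand-in for the expensive simulator: returns an NPV-like value for controls u."""
    return -np.sum((u - 0.6) ** 2) + 0.1 * np.sin(5 * u).sum()

rng = np.random.default_rng(0)
dim, n_init, n_iter = 4, 10, 15
X = rng.uniform(0.0, 1.0, size=(n_init, dim))          # initial control samples
y = np.array([expensive_model(u) for u in X])

for _ in range(n_iter):
    proxy = RBFInterpolator(X, y, smoothing=1e-6)       # cheap surrogate of the simulator
    x0 = X[np.argmax(y)]                                # start from the best point so far
    res = minimize(lambda u: -proxy(u[None, :])[0], x0,
                   bounds=[(0.0, 1.0)] * dim, method="L-BFGS-B")
    x_new = np.clip(res.x + 0.01 * rng.normal(size=dim), 0.0, 1.0)  # jitter keeps points distinct
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))            # one expensive evaluation per iteration

print("best value:", y.max(), "at controls:", np.round(X[np.argmax(y)], 3))
```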
Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems
2006-01-01
i.e., data flows of finite size arrive at the system randomly. For such a system, we propose a modified dual scheduling algorithm that stabilizes ...demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits...and highly optimized tolerance.
An Interval Type-2 Fuzzy Multiple Echelon Supply Chain Model
NASA Astrophysics Data System (ADS)
Miller, Simon; John, Robert
Planning resources for a supply chain is a major factor determining its success or failure. In this paper we build on previous work introducing an Interval Type-2 Fuzzy Logic model of a multiple echelon supply chain. It is believed that the additional degree of uncertainty provided by Interval Type-2 Fuzzy Logic will allow for better representation of the uncertainty and vagueness present in resource planning models. First, the subject of Supply Chain Management is introduced, then some background is given on related work using Type-1 Fuzzy Logic. A description of the Interval Type-2 Fuzzy model is given, and a test scenario detailed. A Genetic Algorithm uses the model to search for a near-optimal plan for the scenario. A discussion of the results follows, along with conclusions and details of intended further work.
NASA Astrophysics Data System (ADS)
Park, Ju H.; Kwon, O. M.
In this letter, the global asymptotic stability of bidirectional associative memory (BAM) neural networks with delays is investigated. The delay is assumed to be time-varying and to belong to a given interval. A novel stability criterion is presented based on the Lyapunov method. The criterion is expressed as a linear matrix inequality (LMI), which can be solved easily by various optimization algorithms. Two numerical examples illustrate the effectiveness of the new result.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
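As an example of one such information metric, the sketch below computes the relative entropy (KL divergence) between Gaussian approximations of a prior and a posterior parameter ensemble; the ensembles are simulated placeholders, and this is not the SEOD implementation itself.

```python
import numpy as np

def gaussian_relative_entropy(prior, posterior):
    """KL divergence of the posterior from the prior, with Gaussian fits to two
    parameter ensembles of shape (n_members, n_parameters)."""
    mu0, mu1 = prior.mean(axis=0), posterior.mean(axis=0)
    S0, S1 = np.cov(prior, rowvar=False), np.cov(posterior, rowvar=False)
    k = mu0.size
    S0_inv = np.linalg.inv(S0)
    diff = mu0 - mu1
    return 0.5 * (np.trace(S0_inv @ S1) + diff @ S0_inv @ diff - k
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))

# Illustrative: a posterior ensemble tightened around a slightly shifted mean.
rng = np.random.default_rng(2)
prior = rng.normal(0.0, 1.0, size=(200, 3))
posterior = rng.normal(0.1, 0.5, size=(200, 3))
print(gaussian_relative_entropy(prior, posterior))
```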
Simulation of lithium ion battery replacement in a battery pack for application in electric vehicles
NASA Astrophysics Data System (ADS)
Mathew, M.; Kong, Q. H.; McGrory, J.; Fowler, M.
2017-05-01
The design and optimization of the battery pack in an electric vehicle (EV) is essential for continued integration of EVs into the global market. Reconfigurable battery packs are of significant interest lately as they allow for damaged cells to be removed from the circuit, limiting their impact on the entire pack. This paper provides a simulation framework that models a battery pack and examines the effect of replacing damaged cells with new ones. The cells within the battery pack vary stochastically and the performance of the entire pack is evaluated under different conditions. The results show that by changing out cells in the battery pack, the state of health of the pack can be consistently maintained above a certain threshold value selected by the user. In situations where the cells are checked for replacement at discrete intervals, referred to as maintenance event intervals, it is found that the length of the interval is dependent on the mean time to failure of the individual cells. The simulation framework as well as the results from this paper can be utilized to better optimize lithium ion battery pack design in EVs and make long term deployment of EVs more economically feasible.
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ying; Zhou, Zhi; Botterud, Audun
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty levels. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the models into a mixed integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of the power system scheduling, but also help to stabilize the energy prices in electricity market operation.
Best-Fit Conic Approximation of Spacecraft Trajectory
NASA Technical Reports Server (NTRS)
Singh, Gurkipal
2005-01-01
A computer program calculates a best conic fit of a given spacecraft trajectory. Spacecraft trajectories are often propagated as conics onboard. The conic-section parameters as a result of the best-conic-fit are uplinked to computers aboard the spacecraft for use in updating predictions of the spacecraft trajectory for operational purposes. In the initial application for which this program was written, there is a requirement to fit a single conic section (necessitated by onboard memory constraints) accurate within 200 microradians to a sequence of positions measured over a 4.7-hour interval. The present program supplants a prior one that could not cover the interval with fewer than four successive conic sections. The present program is based on formulating the best-fit conic problem as a parameter-optimization problem and solving the problem numerically, on the ground, by use of a modified steepest-descent algorithm. For the purpose of this algorithm, optimization is defined as minimization of the maximum directional propagation error across the fit interval. In the specific initial application, the program generates a single 4.7-hour conic, the directional propagation of which is accurate to within 34 microradians easily exceeding the mission constraints by a wide margin.
Sun, Lian; Li, Chunhui; Cai, Yanpeng; Wang, Xuan
2017-06-14
In this study, an interval optimization model is developed to maximize the benefits of a water rights transfer system that comprises industry and agriculture sectors in the Ningxia Hui Autonomous Region in China. The model is subjected to a number of constraints including water saving potential from agriculture and ecological groundwater levels. Ecological groundwater levels serve as performance indicators of terrestrial ecology. The interval method is applied to present the uncertainty of parameters in the model. Two scenarios regarding dual industrial development targets (planned and unplanned ones) are used to investigate the difference in potential benefits of water rights transfer. Runoff of the Yellow River as the source of water rights fluctuates significantly in different years. Thus, compensation fees for agriculture are calculated to reflect the influence of differences in the runoff. Results show that there are more available water rights to transfer for industrial development. The benefits are considerable but unbalanced between buyers and sellers. The government should establish a water market that is freer and promote the interest of agriculture and farmers. Though there has been some success of water rights transfer, the ecological impacts and the relationship between sellers and buyers require additional studies.
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques to find an optimal alternative among such with imprecise and interval probabilities is presented. Scalarisation methods are outlined as most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with the p-boxes. As demonstrated, approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities is dependent on preferences.
Structural characterization of semicrystalline polymer morphologies by imaging-SANS
NASA Astrophysics Data System (ADS)
Radulescu, A.; Fetters, L. J.; Richter, D.
2012-02-01
Control and optimization of polymer properties require the global knowledge of the constitutive microstructures of polymer morphologies in various conditions. The microstructural features can be typically explored over a wide length scale by combining pinhole-, focusing- and ultra-small-angle neutron scattering (SANS) techniques. Though it proved to be a successful approach, this involves major efforts related to the use of various scattering instruments and large amount of samples and the need to ensure the same crystallization kinetics for the samples investigated at various facilities, in different sample cell geometries and at different time intervals. With the installation and commissioning of the MgF2 neutron lenses at the KWS-2 SANS diffractometer installed at the Heinz Maier-Leibnitz neutron source (FRM II reactor) in Garching, a wide Q-range, between 10^-4 Å^-1 and 0.5 Å^-1, can be covered at a single instrument. This enables investigation of polymer microstructures over a length scale from 1 nm up to 1 μm, while the overall polymer morphology can be further examined up to 100 μm by optical microscopy (including crossed polarizers). The study of different semi-crystalline polypropylene-based polymers in solution is discussed and the new imaging-SANS approach allowing for an unambiguous and complete structural characterization of polymer morphologies is presented.
Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies.
Dowd, J E; Weber, I; Rodriguez, B; Piret, J M; Kwok, K E
1999-05-20
The selection of medium feed rates for perfusion bioreactors represents a challenge for process optimization, particularly in bioreactors that are sampled infrequently. When the present and immediate future of a bioprocess can be adequately described, predictive control can minimize deviations from set points in a manner that can maximize process consistency. Predictive control of perfusion hollow-fiber bioreactors was investigated in a series of hybridoma cell cultures that compared operator control to computer estimation of feed rates. Adaptive software routines were developed to estimate the current and predict the future glucose uptake and lactate production of the bioprocess at each sampling interval. The current and future glucose uptake rates were used to select the perfusion feed rate in a designed response to deviations from the set point values. The routines presented a graphical user interface through which the operator was able to view the up-to-date culture performance and assess the model description of the immediate future culture performance. In addition, fewer samples were taken in the computer-estimated cultures, reducing labor and analytical expense. The use of these predictive controller routines and the graphical user interface decreased the glucose and lactate concentration variances up to sevenfold, and antibody yields increased by 10% to 43%. Copyright 1999 John Wiley & Sons, Inc.
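A highly simplified sketch of the idea, assuming a perfusion mass balance and a linear fit to recent glucose samples, is given below; the variable names, setpoints, and horizon are illustrative, and this is not the adaptive software described in the study.

```python
import numpy as np

def next_feed_rate(g_history, t_history, g_setpoint, g_feed, volume, horizon):
    """Predictive choice of perfusion feed rate (L/h) from infrequent glucose samples.
    A linear fit to recent samples estimates the current uptake rate; the feed rate is
    chosen so that, over the prediction horizon, supply offsets the predicted uptake
    plus the correction needed to return the concentration to the setpoint."""
    g = np.asarray(g_history, float)
    t = np.asarray(t_history, float)
    slope = np.polyfit(t, g, 1)[0]                 # mmol/L per hour (negative if consumed)
    uptake = -slope * volume                       # mmol/h consumed by the culture
    correction = volume * (g_setpoint - g[-1]) / horizon
    feed = (uptake + correction) / (g_feed - g_setpoint)
    return max(feed, 0.0)

# Illustrative: glucose falling from 18 to 14.2 mmol/L over 24 h in a 1 L system,
# with 25 mmol/L glucose in the feed medium and a 12 h prediction horizon.
print(next_feed_rate([18.0, 16.1, 14.2], [0.0, 12.0, 24.0],
                     g_setpoint=15.0, g_feed=25.0, volume=1.0, horizon=12.0))
```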
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
Shieh, G
2013-12-01
The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
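For context, the commonly used Cohen's d with a standard large-sample confidence interval, i.e., the pooled-variance baseline that the proposed procedure improves on when variances are unequal, can be computed as in the sketch below; the data are simulated placeholders.

```python
import numpy as np
from scipy.stats import norm

def cohens_d_ci(x1, x2, alpha=0.05):
    """Cohen's d with an approximate (large-sample) confidence interval."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = x1.size, x2.size
    sp = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    d = (x1.mean() - x2.mean()) / sp
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))   # standard approximation
    crit = norm.ppf(1 - alpha / 2)
    return d, d - crit * se, d + crit * se

rng = np.random.default_rng(11)
print(cohens_d_ci(rng.normal(0.5, 1.0, 60), rng.normal(0.0, 1.0, 60)))
```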
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
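A sketch of the textbook optimal-allocation result for a simpler setting, a continuous outcome without the cost-effectiveness covariances, is given below to illustrate how the cost ratio and the ICC drive the optimal cluster size under a fixed budget; the cost figures are illustrative assumptions, not values from the paper.

```python
import math

def optimal_allocation(cost_cluster, cost_person, icc, budget):
    """Persons per cluster and clusters per arm that minimize the effect variance for a
    fixed budget (standard result for a continuous outcome in a cluster randomized trial)."""
    n = math.sqrt((cost_cluster / cost_person) * (1.0 - icc) / icc)   # persons per cluster
    n = max(1, round(n))
    k = budget / (cost_cluster + n * cost_person)                     # clusters affordable
    return n, math.floor(k)

# Illustrative costs: 500 per recruited cluster, 25 per measured person, ICC = 0.05.
print(optimal_allocation(cost_cluster=500, cost_person=25, icc=0.05, budget=50_000))
```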
Zhang, Pengpeng; Happersett, Laura; Ravindranath, Bosky; Zelefsky, Michael; Mageras, Gig; Hunt, Margie
2016-01-01
Purpose: Robust detection of implanted fiducials is essential for monitoring intrafractional motion during hypofractionated treatment. The authors developed a plan optimization strategy to ensure clear visibility of implanted fiducials and facilitate 3D localization during volumetric modulated arc therapy (VMAT). Methods: Periodic kilovoltage (kV) images were acquired at 20° gantry intervals and paired with simultaneously acquired 4.4° short arc megavoltage digital tomosynthesis (MV-DTS) to localize three fiducials during VMAT delivery for hypofractionated prostate cancer treatment. Beginning with the original optimized plan, control point segments where fiducials were consistently blocked by multileaf collimator (MLC) within each 4.4° MV-DTS interval were first identified. For each segment, MLC apertures were edited to expose the fiducial that led to the least increase in the cost function. Subsequently, MLC apertures of all control points not involved with fiducial visualization were reoptimized to compensate for plan quality losses and match the original dose–volume histogram. MV dose for each MV-DTS was also kept above 0.4 MU to ensure acceptable image quality. Different imaging (gantry) intervals and visibility margins around fiducials were also evaluated. Results: Fiducials were consistently blocked by the MLC for, on average, 36% of the imaging control points for five hypofractionated prostate VMAT plans but properly exposed after reoptimization. Reoptimization resulted in negligible dosimetric differences compared with original plans and outperformed simple aperture editing: on average, PTV D98 recovered from 87% to 94% of prescription, and PTV dose homogeneity improved from 9% to 7%. Without violating plan objectives and compromising delivery efficiency, the highest imaging frequency and largest margin that can be achieved are a 10° gantry interval, and 15 mm, respectively. Conclusions: VMAT plans can be made to accommodate MV-kV imaging of fiducials. Fiducial visualization rate and workflow efficiency are significantly improved with an automatic modification and reoptimization approach. PMID:27147314
Practicability of monitoring soil Cd, Hg, and Pb pollution based on a geochemical survey in China.
Xia, Xueqi; Yang, Zhongfang; Li, Guocheng; Yu, Tao; Hou, Qingye; Mutelo, Admire Muchimamui
2017-04-01
Repeated visiting, i.e., sampling and analysis at two or more temporal points, is one of the important ways of monitoring soil heavy metal contamination. However, given cost concerns, determining the number of samples and the temporal interval, and their capability to detect a certain change, is a key technical problem to be solved. This depends on the spatial variation of the parameters in the monitoring units. The "National Multi-Purpose Regional Geochemical Survey" (NMPRGS) project in China acquired the spatial distribution of heavy metals using a high density sampling method in the most arable regions in China. Based on soil Cd, Hg, and Pb data and taking administrative regions as the monitoring units, the number of samples and temporal intervals that may be used for monitoring soil heavy metal contamination were determined. It was found that there is a large variety of spatial variation of the elements in each NMPRGS region. This results in difficulty in the determination of the minimum detectable changes (MDC), the number of samples, and temporal intervals for revisiting. This paper recommends a suitable set of the number of samples (n_r) for each region under the balance of cost, practicability, and monitoring precision. Under n_r, MDC values are acceptable for all the regions, and the minimum temporal intervals are practical within the range of 3.3-13.3 years. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sequeira, Ivana R.; Lentle, Roger G.; Kruger, Marlena C.; Hurst, Roger D.
2014-01-01
Background Lactulose mannitol ratio tests are clinically useful for assessing disorders characterised by changes in gut permeability and for assessing mixing in the intestinal lumen. Variations between currently used test protocols preclude meaningful comparisons between studies. We determined the optimal sampling period and related this to intestinal residence. Methods Half-hourly lactulose and mannitol urinary excretions were determined over 6 hours in 40 healthy female volunteers after administration of either 600 mg aspirin or placebo, in randomised order at weekly intervals. Gastric and small intestinal transit times were assessed by the SmartPill in 6 subjects from the same population. Half-hourly percentage recoveries of lactulose and mannitol were grouped on the basis of compartment transit time. The rate of increase or decrease of each sugar within each group was explored by simple linear regression to assess the optimal period of sampling. Key Results The between subject standard errors for each half-hourly lactulose and mannitol excretion were lowest, the correlation of the quantity of each sugar excreted with time was optimal and the difference between the two sugars in this temporal relationship maximal during the period from 2½-4 h after ingestion. Half-hourly lactulose excretions were generally increased after dosage with aspirin whilst those of mannitol were unchanged, as was the temporal pattern and period of lowest between subject standard error for both sugars. Conclusion The results indicate that between subject variation in the percentage excretion of the two sugars would be minimised and the differences in the temporal patterns of excretion would be maximised if the period of collection of urine used in clinical tests of small intestinal permeability were restricted to 2½-4 h post dosage. This period corresponds to the time when the column of digesta containing the probes is passing from the small to the large intestine. PMID:24901524
Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing
NASA Astrophysics Data System (ADS)
Rabbitt, Christopher
This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
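The MAC itself is a standard quantity; a minimal sketch of its computation between two sets of mode shapes follows, with randomly generated shapes as placeholders. The axisymmetric mode-alignment search developed in the thesis is not reproduced here.

```python
import numpy as np

def mac_matrix(phi_test, phi_fem):
    """Modal Assurance Criterion between columns of two mode-shape matrices (dof x modes)."""
    phi_test = np.asarray(phi_test, dtype=float)
    phi_fem = np.asarray(phi_fem, dtype=float)
    num = np.abs(phi_test.T @ phi_fem) ** 2
    denom = np.outer(np.einsum('ij,ij->j', phi_test, phi_test),
                     np.einsum('ij,ij->j', phi_fem, phi_fem))
    return num / denom

# Illustrative: 3 experimental and 3 FE mode shapes sampled at 50 degrees of freedom.
rng = np.random.default_rng(5)
modes = rng.normal(size=(50, 3))
print(np.round(mac_matrix(modes, modes + 0.05 * rng.normal(size=(50, 3))), 3))
```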
Francois, Monique E; Gillen, Jenna B; Little, Jonathan P
2017-01-01
Lifestyle interventions incorporating both diet and exercise strategies remain cornerstone therapies for treating metabolic disease. Carbohydrate-restriction and high-intensity interval training (HIIT) have independently been shown to improve cardiovascular and metabolic health. Carbohydrate-restriction reduces postprandial hyperglycemia, thereby limiting potential deleterious metabolic and cardiovascular consequences of excessive glucose excursions. Additionally, carbohydrate-restriction has been shown to improve body composition and blood lipids. The benefits of exercise for improving insulin sensitivity are well known. In this regard, HIIT has been shown to rapidly improve glucose control, endothelial function, and cardiorespiratory fitness. Here, we report the available evidence for each strategy and speculate that the combination of carbohydrate-restriction and HIIT will synergistically maximize the benefits of both approaches. We hypothesize that this lifestyle strategy represents an optimal intervention to treat metabolic disease; however, further research is warranted in order to harness the potential benefits of carbohydrate-restriction and HIIT for improving cardiometabolic health.
Interface modification based ultrashort laser microwelding between SiC and fused silica.
Zhang, Guodong; Bai, Jing; Zhao, Wei; Zhou, Kaiming; Cheng, Guanghua
2017-02-06
Welding two materials with large differences in coefficients of thermal expansion and melting points is a major challenge. Here we report that welding between fused silica (softening point, 1720°C) and a SiC wafer (melting point, 3100°C) is achieved with a near-infrared femtosecond laser at 800 nm. Elements are observed to have a spatial distribution gradient within the cross section of the welding line, revealing that mixing and inter-diffusion of substances occurred during laser irradiation. This is attributed to the femtosecond-laser-induced local phase transition and volume expansion. By optimizing the welding parameters, namely the pulse energy and the interval between welding lines, a shear joining strength as high as 15.1 MPa is achieved. In addition, the influence of laser ablation on the welding quality of samples without pre-optical contact is carefully studied by measuring the laser-induced interface modification.
Engagement in muscular strengthening activities is associated with better sleep
Loprinzi, Paul D.; Loenneke, Jeremy P.
2015-01-01
Few studies have examined whether engagement in muscular strengthening activities is associated with sleep duration, which was the purpose of this study. Data from the population-based 2005–2006 National Health and Nutrition Examination Survey were used, which included an analytic sample of 4386 adults (20–85 yrs). Sleep duration and engagement in muscle strengthening activities were self-reported. After adjustments (including aerobic-based physical activity), those engaging in muscular strengthening activities, compared to those not engaging in muscular strengthening activities, had 19% increased odds of meeting sleep guidelines (7–8 h/night) (Odds Ratio = 1.19, 95% Confidence Interval: 1.01–1.38, P = 0.04). Promotion of muscular strengthening activities by clinicians should occur not only for improvements in other aspects of health (e.g., cardiovascular benefits), but also to help facilitate optimal sleep duration. PMID:26844170
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. The approach was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
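As a rough, non-authoritative illustration of bootstrapped Latin partitions (each object validated exactly once per partition set, with the whole split repeated to give a distribution of the figure of merit), a minimal Python sketch might look like the following; the iris data and LDA classifier are placeholders, not the examples of the review.

```python
# Hedged sketch of bootstrapped Latin partitions: repeat a stratified L-fold
# split so every object is validated once per repetition, then summarize the
# figure of merit (here, classification accuracy) with a percentile interval.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)
n_repeats, n_partitions, foms = 50, 4, []
for rep in range(n_repeats):
    skf = StratifiedKFold(n_splits=n_partitions, shuffle=True, random_state=rep)
    correct = 0
    for train, test in skf.split(X, y):
        model = LinearDiscriminantAnalysis().fit(X[train], y[train])
        correct += (model.predict(X[test]) == y[test]).sum()
    foms.append(correct / len(y))   # each object used exactly once per repetition
low, high = np.percentile(foms, [2.5, 97.5])
print(f"mean accuracy {np.mean(foms):.3f}, 95% interval [{low:.3f}, {high:.3f}]")
```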
Casey, R; Griffin, T P; Wall, D; Dennedy, M C; Bell, M; O'Shea, P M
2017-01-01
Background The Endocrine Society Clinical Practice Guideline on Phaeochromocytoma and Paraganglioma (PPGL) recommends phlebotomy for plasma-free metanephrines with patients fasted and supine, using appropriately defined reference intervals. Studies have shown higher diagnostic sensitivities using these criteria. Further, when seated-sampling protocols are used, reference intervals that do not compromise diagnostic sensitivity should be employed for result interpretation. Objective To determine the impact on diagnostic performance and financial cost of using supine reference intervals for result interpretation with our current plasma-free metanephrines fasted/seated-sampling protocol. Methods We conducted a retrospective cohort study of patients who underwent screening for PPGL using plasma-free metanephrines from 2009 to 2014 at Galway University Hospitals. Plasma-free metanephrines were measured using liquid chromatography-tandem mass spectrometry. Supine thresholds for plasma normetanephrine and metanephrine set at 610 pmol/L and 310 pmol/L, respectively, were used. Results A total of 183 patients were evaluated. Mean age of participants was 53.4 (±16.3) years. Five of 183 (2.7%) patients had histologically confirmed PPGL (males, n=4). Using seated reference intervals for plasma-free metanephrines, diagnostic sensitivity and specificity were 100% and 98.9%, respectively, with two false-positive cases. Application of reference intervals established in subjects supine and fasted to this cohort gave diagnostic sensitivity of 100% with specificity of 74.7%. Financial analysis of each pretesting strategy demonstrated cost-equivalence (€147.27/patient). Conclusion Our cost analysis, together with the evidence that fasted/supine sampling for plasma-free metanephrines offers more reliable exclusion of PPGL, mandates changing our current practice. This study highlights the important advantages of standardized diagnostic protocols for plasma-free metanephrines to ensure the highest diagnostic accuracy for investigation of PPGL.
Optimizing DMPK Properties: Experiences from a Big Pharma DMPK Department.
Sohlenius-Sternbeck, Anna-Karin; Janson, Juliette; Bylund, Johan; Baranczewski, Pawel; Breitholtz-Emanuelsson, Anna; Hu, Yin; Tsoi, Carrie; Lindgren, Anders; Gissberg, Olle; Bueters, Tjerk; Briem, Sveinn; Juric, Sanja; Johansson, Jenny; Bergh, Margareta; Hoogstraate, Janet
2016-01-01
The disposition of a drug is dependent on interactions between the body and the drug, its molecular properties and the physical and biological barriers presented in the body. In order for a drug to have a desired pharmacological effect it has to have the right properties to be able to reach the target site in sufficient concentration. This review details how drug metabolism and pharmacokinetics (DMPK) and physicochemical deliveries played an important role in data interpretation and compound optimization at AstraZeneca R&D in Södertälje, Sweden. A selection of assays central in the evaluation of the DMPK properties of new chemical entities is presented, with guidance and consideration on assay outcome interpretation. Early in projects, solubility, LogD, permeability and metabolic stability were measured to support effective optimization of DMPK properties. Changes made to facilitate high throughput, efficient bioanalysis and the handling of large amounts of samples are described. Already early in drug discovery, we used an integrated approach for the prediction of the fate of drugs in human (early dose to man) based on data obtained from in vitro experiments. The early dose to man was refined with project progression, which triggered more intricate assays and experiments. At later stages, preclinical in vivo pharmacokinetic (PK) data was integrated with pharmacodynamics (PD) to allow predictions of required dose, dose intervals and exposure profile to achieve the desired effect in man. A well-defined work flow of DMPK activities from early lead identification up to the selection of a candidate drug was developed. This resulted in a cost effective and efficient optimization of chemical series, and facilitated informed decision making throughout project progress.
Rashid, Abdul Ahid; Huma, Nuzhat; Zahoor, Tahir; Asgher, Muhammad
2017-02-01
The recovery of milk constituents from cheese whey is affected by various processing conditions followed during production of Ricotta cheese. The objective of the present investigation was to optimize the temperature (60-90 °C), pH (3-7) and CaCl2 concentration (2·0-6·0 mM) for maximum yield/recovery of milk constituents. The research work was carried out in two phases. In the first phase, the influence of these processing conditions was evaluated through 20 experiments formulated by central composite design (CCD), keeping the yield as the response factor. The results obtained from these experiments were used to optimize processing conditions for maximum yield using response surface methodology (RSM). The three best combinations of processing conditions (90 °C, pH 7, CaCl2 6 mM), (100 °C, pH 5, CaCl2 4 mM) and (75 °C, pH 8·4, CaCl2 4 mM) were exploited in the next phase for Ricotta cheese production from a mixture of buffalo cheese whey and skim milk (9 : 1) to determine the influence of the optimized conditions on the cheese composition. Ricotta cheeses were analyzed for various physicochemical parameters (moisture, fat, protein, lactose, total solids, pH and acidity) during storage of 60 d at 4 ± 2 °C, at 15 d intervals. Ricotta cheese prepared at 90 °C, pH 7 and CaCl2 6 mM exhibited the highest cheese yield, protein and total solids, while the highest fat content was recorded for cheese processed at 100 °C, pH 5 and 4 mM CaCl2. A significant storage-related increase in acidity and NPN was recorded for all cheese samples.
ERIC Educational Resources Information Center
Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.
2012-01-01
Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
Using an R Shiny to Enhance the Learning Experience of Confidence Intervals
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2018-01-01
Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The combined analysis framework exploits the strengths of both methods so that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day, time-variant natural frequency of structures accurately and efficiently under the intrinsic creep effect of concrete, with both probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through an optimization strategy embedded within the analysis procedure. Three numerical examples, progressing in both structure type and uncertainty variables, are presented to demonstrate the computational applicability, accuracy and efficiency of the proposed method.
Neonatal stomach volume and physiology suggest feeding at 1-h intervals.
Bergman, Nils J
2013-08-01
There is insufficient evidence on optimal neonatal feeding intervals, with a wide range of practices. The stomach capacity could determine feeding frequency. A literature search was conducted for studies reporting volumes or dimensions of stomach capacity before or after birth. Six articles were found, suggesting a stomach capacity of 20 mL at birth. A stomach capacity of 20 mL translates to a feeding interval of approximately 1 h for a term neonate. This corresponds to the gastric emptying time for human milk, as well as the normal neonatal sleep cycle. Larger feeding volumes at longer intervals may therefore be stressful and the cause of spitting up, reflux and hypoglycaemia. Outcomes for low birthweight infants could possibly be improved if stress from overfeeding was avoided while supporting the development of normal gastrointestinal physiology. Cycles between feeding and sleeping at 1-h intervals likely meet the evolutionary expectations of human neonates. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E
2015-09-01
To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.
2015-01-01
Purpose To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414
Automatic frequency control for FM transmitter
NASA Technical Reports Server (NTRS)
Honnell, M. A. (Inventor)
1974-01-01
An automatic frequency control circuit for an FM television transmitter is described. The frequency of the transmitter is sampled during what is termed the back porch portion of the horizontal synchronizing pulse which occurs during the retrace interval, the frequency sample compared with the frequency of a reference oscillator, and a correction applied to the frequency of the transmitter during this portion of the retrace interval.
ERIC Educational Resources Information Center
Pinto, Carlos; Machado, Armando
2011-01-01
To better understand short-term memory for temporal intervals, we re-examined the choose-short effect. In Experiment 1, to contrast the predictions of two models of this effect, the subjective shortening and the coding models, pigeons were exposed to a delayed matching-to-sample task with three sample durations (2, 6 and 18 s) and retention…
Ultrasonic sensor and method of use
Condreva, Kenneth J.
2001-01-01
An ultrasonic sensor system and method of use for measuring transit time through a liquid sample, using one ultrasonic transducer coupled to a precision time interval counter. The timing circuit captures changes in transit time, representing small changes in the velocity of sound transmitted, over necessarily small time intervals (nanoseconds) and uses the transit time changes to identify the presence of non-conforming constituents in the sample.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of the partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
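The signal-energy-weighted baseline described in the abstract can be sketched as follows; this is a simplified Python illustration of that prior prescription (not the authors' near-optimal design), assuming a toy binarized channel geometry and a fixed sub-sampling fraction.

```python
# Hedged sketch of the baseline scheme described above: weight the random
# k-space sampling density by the signal energy of the binarized geometry's
# Fourier transform. Toy geometry; not the paper's optimized methodology.
import numpy as np

rng = np.random.default_rng(1)
geometry = np.zeros((64, 64))
geometry[30:34, 8:56] = 1.0                       # toy microfluidic channel

energy = np.abs(np.fft.fftshift(np.fft.fft2(geometry))) ** 2
pdf = (energy + 1e-12) / (energy + 1e-12).sum()   # density proportional to signal energy

frac = 0.25                                       # acquire 25% of k-space
n_samples = int(frac * geometry.size)
chosen = rng.choice(geometry.size, size=n_samples, replace=False, p=pdf.ravel())
mask = np.zeros(geometry.size, dtype=bool)
mask[chosen] = True
print("sampled fraction of k-space:", mask.mean())
```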
Olney, Robert C; Salehi, Parisa; Prickett, Timothy C R; Lima, John J; Espiner, Eric A; Sikes, Kaitlin M; Geffner, Mitchell E
2016-10-01
C-type natriuretic peptide (CNP) and its aminoterminal propeptide (NTproCNP) are potential biomarkers of recombinant human growth hormone (rhGH) efficacy. The objective of this study was to describe the pharmacodynamics of plasma CNP and NTproCNP levels in response to rhGH treatment and to identify the optimal time of sampling after starting rhGH. This was a prospective, observational study. Subjects were treated with rhGH for 1 year, with blood sampled at regular intervals. Eighteen prepubertal children, eight with low levels of GH on biochemical testing and ten with idiopathic short stature, completed the study. Blood levels of CNP, NTproCNP, GH, insulin-like growth factor-I, leptin and bone-specific alkaline phosphatase were measured. Anthropometrics were obtained. Plasma levels of both CNP and NTproCNP reached peak levels 7-28 days after starting rhGH treatment and then declined to intermediate levels through the first year. Plasma NTproCNP levels after 14 days trended towards a correlation with height velocity after 6 and 12 months of treatment. Unexpectedly, serum GH levels measured 2 and 28 days after starting rhGH correlated strongly with height velocity after 6 and 12 months of treatment. This study identified 14 days after starting rhGH treatment as the optimal time for assessing CNP and NTproCNP levels as biomarkers of rhGH efficacy. Additionally, we identified GH levels as a potential biomarker. Larger, prospective studies are now needed to test the clinical utility of these biomarkers. © 2016 John Wiley & Sons Ltd.
Staerk, Laila; Wang, Biqi; Preis, Sarah R; Larson, Martin G; Lubitz, Steven A; Ellinor, Patrick T; McManus, David D; Ko, Darae; Weng, Lu-Chen; Lunetta, Kathryn L; Frost, Lars; Benjamin, Emelia J
2018-01-01
Abstract Objective To examine the association between risk factor burdens—categorized as optimal, borderline, or elevated—and the lifetime risk of atrial fibrillation. Design Community based cohort study. Setting Longitudinal data from the Framingham Heart Study. Participants Individuals free of atrial fibrillation at index ages 55, 65, and 75 years were assessed. Smoking, alcohol consumption, body mass index, blood pressure, diabetes, and history of heart failure or myocardial infarction were assessed as being optimal (that is, all risk factors were optimal), borderline (presence of borderline risk factors and absence of any elevated risk factor), or elevated (presence of at least one elevated risk factor) at index age. Main outcome measure Lifetime risk of atrial fibrillation at index age up to 95 years, accounting for the competing risk of death. Results At index age 55 years, the study sample comprised 5338 participants (2531 (47.4%) men). In this group, 247 (4.6%) had an optimal risk profile, 1415 (26.5%) had a borderline risk profile, and 3676 (68.9%) an elevated risk profile. The prevalence of elevated risk factors increased gradually when the index ages rose. For index age of 55 years, the lifetime risk of atrial fibrillation was 37.0% (95% confidence interval 34.3% to 39.6%). The lifetime risk of atrial fibrillation was 23.4% (12.8% to 34.5%) with an optimal risk profile, 33.4% (27.9% to 38.9%) with a borderline risk profile, and 38.4% (35.5% to 41.4%) with an elevated risk profile. Overall, participants with at least one elevated risk factor were associated with at least 37.8% lifetime risk of atrial fibrillation. The gradient in lifetime risk across risk factor burden was similar at index ages 65 and 75 years. Conclusions Regardless of index ages at 55, 65, or 75 years, an optimal risk factor profile was associated with a lifetime risk of atrial fibrillation of about one in five; this risk rose to more than one in three in individuals with at least one elevated risk factor. PMID:29699974
NASA Technical Reports Server (NTRS)
Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.
2012-01-01
Flight deck based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise, however, the range of speeds available when flying an OPD results in a decrease in the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways, while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations; in particular configuring the aircraft, their compliance with FIM operating procedures, and their response to changes of the FIM speed. An example of the displayed FIM speeds used incorrectly by a pilot is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.
NASA Astrophysics Data System (ADS)
Hernández J., P.; Befani M., R.; Boschetti N., G.; Quintero C., E.; Díaz E., L.; Lado, M.; Paz-González, A.
2015-04-01
The Avellaneda District, located in the northeast of Santa Fe Province, Argentina, has an average annual rainfall of 1250 mm, but with high variability in its seasonal distribution. Generally, the occurrence of precipitation in winter is low, while summer droughts are frequent. The yearly hydrological cycle shows a water deficit, given that the annual potential evapotranspiration is estimated at 1330 mm. Field crops such as soybean, corn, sunflower and cotton, which are affected by water stress during their critical growth periods, are dominant in this area. Therefore, a supplemental irrigation project has been developed in order to identify workable solutions. This project pumps water from the Paraná River to supply the target irrigation area. A pressurized irrigation system operating on demand provides water to a network of channels, which in turn deliver water to farms. The scheduled irrigated surface is 8800 hectares, and the maximum design flow rate is 8.25 m3/second. The soils have been classified as Aquic Argiudolls in areas of very gentle slopes, and Vertic Argiudolls in flat and concave reliefs; neither salinity nor excess sodium affects the soils of the study area. The objective of this study was to provide a quantitative data set to manage the irrigation project, through the determination of available water (AW), easily available water (EAw) and the optimal water range (or interval) of the soil horizons. The study was conducted in a test area of 1500 hectares. Five soil profiles were sampled to determine physical properties (structural stability, effective root depth, infiltration, bulk density, penetration resistance and water-holding capacity), chemical properties (pH, cation exchange capacity, base saturation, salinity, and sodium content) and morphological characteristics of the successive horizons. Several environmental characteristics were also evaluated, including climate, topographic conditions, relief, general and slope position, erosion, natural vegetation and agricultural crops. The computed available water (AW) and easily available water (EAw) contents depended on bulk density, field capacity and permanent wilting point, but they were also affected by the soil penetration resistance measured to a depth of 80 cm; this parameter limits the soil volume explored by plant roots and therefore the EAw content. Moreover, soil penetration resistance makes it possible to apply the concept of the optimal water interval, which indicates how soil compaction limits the amount of easily available water that can actually be extracted by the crop. The estimated EAw values ranged from 74 to 133 mm for the profiles studied. When mechanical resistance to penetration was included to obtain the optimal water interval, these values decreased to between 34 and 57 mm, mainly because of the reduced depth of the soil profile actually explored by plant roots. Based on the recorded values of soil mechanical resistance to penetration, it was concluded that sunflower and corn crops will be the most affected in their growth and root development. Consequently, for a maximum consumptive use of 10 mm/day, the commonly used irrigation interval of 13 days should decrease to 6 days if the new methodology is used, i.e., if the limitations on soil depth exploration by crop roots are taken into account.
This result is consistent with current practice under non-irrigated conditions, where crop yields have been shown to be affected by water shortage whenever no significant precipitation occurs within such an interval.
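The interval figures quoted above follow from simple water-balance arithmetic (readily extractable water divided by peak daily consumptive use); a short Python check using the abstract's own numbers:

```python
# Water-balance check of the irrigation intervals quoted in the abstract.
peak_use_mm_per_day = 10.0     # maximum consumptive use
eaw_mm = 133.0                 # upper estimate of easily available water
optimal_interval_mm = 57.0     # upper estimate once penetration resistance is considered

print("interval from EAw:", eaw_mm / peak_use_mm_per_day, "days")                               # ~13 days
print("interval from optimal water range:", optimal_interval_mm / peak_use_mm_per_day, "days")  # ~6 days
```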
Currie, Katharine D; Rosen, Lee M; Millar, Philip J; McKelvie, Robert S; MacDonald, Maureen J
2013-06-01
Decreased heart rate variability and attenuated heart rate recovery following exercise are associated with an increased risk of mortality in cardiac patients. This study investigated the effects of 12 weeks of moderate-intensity endurance exercise (END) and a novel low-volume high-intensity interval exercise protocol (HIT) on measures of heart rate recovery and heart rate variability in patients with coronary artery disease (CAD). Fourteen males with CAD participated in 12 weeks of END or HIT training, each consisting of 2 supervised exercise sessions per week. END consisted of 30-50 min of continuous cycling at 60% peak power output (PPO). HIT involved ten 1-min intervals at 88% PPO separated by 1-min intervals at 10% PPO. Heart rate recovery at 1 min and 2 min was measured before and after training (pre- and post-training, respectively) using a submaximal exercise bout. Resting time and spectral and nonlinear domain measures of heart rate variability were calculated. Following 12 weeks of END and HIT, there was no change in heart rate recovery at 1 min (END, 40 ± 12 beats·min(-1) vs. 37 ± 19 beats·min(-1); HIT, 31 ± 8 beats·min(-1) vs. 35 ± 8 beats·min(-1); p ≥ 0.05 for pre- vs. post-training) or 2 min (END, 44 ± 18 beats·min(-1) vs. 43 ± 19 beats·min(-1); HIT, 42 ± 10 beats·min(-1) vs. 50 ± 6 beats·min(-1); p ≥ 0.05 for pre- vs. post-training). All heart rate variability indices were unchanged following END and HIT training. In conclusion, neither END nor HIT exercise programs elicited training-induced improvements in cardiac autonomic function in patients with CAD. The absence of improvements with training may be attributed to the optimal medical management and normative pretraining state of our sample.
Babulal, Ganesh M; Addison, Aaron; Ghoshal, Nupur; Stout, Sarah H; Vernon, Elizabeth K; Sellan, Mark; Roe, Catherine M
2016-01-01
Background: The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer's disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install and capture large volumes of data in real time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine a sampling interval that produced accurate data on the actual route driven for use in future studies involving older adults with and without AD. Methods: Three subjects drove a single course in different vehicles with different sampling intervals (30, 60 and 120 seconds), at different times of day: morning (9:00-11:59 AM), afternoon (2:00-5:00 PM) and night (7:00-10:00 PM). The nine datasets were examined to determine the optimal collection interval. Results: Compared to the 120-second and 60-second intervals, the 30-second interval was optimal: it captured the actual route driven with the fewest incorrect paths and remained affordable once data storage and curation were taken into account. Discussion: Use of COTS devices offers minimal installation effort, unobtrusive monitoring and discreet data extraction. However, these devices require strict protocols and controlled testing for adoption into research paradigms. After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD. Data can be aggregated over time to look at changes or adverse events and ascertain whether a decline in performance is occurring.
Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis
2011-01-01
Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes of interest were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model built from the derivation dataset (n = 34,605) to predict neurologically favorable outcome at one month, the acceptable time intervals were 5 min from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), neurologically favorable outcome was observed in 209/2,292 (9.1%) of all patients meeting the acceptable time intervals and in 1,388/2,706 (52.1%) of the subgroup meeting the acceptable time intervals with pre-hospital ROSC. Initiation of CPR should be within 5 min for obtaining neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.
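A recursive partitioning split of the kind described can be approximated with an off-the-shelf decision tree; the Python sketch below uses synthetic data (not the Japanese registry) to show how a single depth-1 split on the collapse-to-CPR interval yields a candidate threshold analogous to the 5-minute cut-off.

```python
# Hedged sketch: a single recursive-partitioning split on the collapse-to-CPR
# interval, using a depth-1 decision tree on synthetic (not registry) data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
minutes_to_cpr = rng.uniform(0, 20, 5000)
# Synthetic outcome: favourable neurological status becomes less likely with delay.
favourable = rng.random(5000) < 0.25 * np.exp(-minutes_to_cpr / 6.0)

tree = DecisionTreeClassifier(max_depth=1).fit(minutes_to_cpr.reshape(-1, 1), favourable)
print("candidate threshold (min):", round(float(tree.tree_.threshold[0]), 1))
```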
Very Similar Spacing-Effect Patterns in Very Different Learning/Practice Domains
Kornmeier, Jürgen; Spitzer, Manfred; Sosic-Vasic, Zrinka
2014-01-01
Temporally distributed (“spaced”) learning can be twice as efficient as massed learning. This “spacing effect” occurs with a broad spectrum of learning materials, with humans of different ages, with non-human vertebrates and also invertebrates. This indicates that very basic learning mechanisms are at work (“generality”). Although most studies so far focused on very narrow spacing interval ranges, there is some evidence for a non-monotonic behavior of this “spacing effect” (“nonlinearity”) with optimal spacing intervals at different time scales. In the current study we addressed both the nonlinearity aspect, by using a broad range of spacing intervals, and the generality aspect, by using very different learning/practice domains: Participants learned German-Japanese word pairs and performed visual acuity tests. For each of six groups we used a different spacing interval between learning/practice units, from 7 min to 24 h in logarithmic steps. Memory retention was studied in three consecutive final tests, one, seven and 28 days after the final learning unit. For both the vocabulary learning and visual acuity performance we found a highly significant effect of the factor spacing interval on the final test performance. In the 12 h-spacing-group about 85% of the learned words stayed in memory and nearly all of the visual acuity gain was preserved. In the 24 h-spacing-group, in contrast, only about 33% of the learned words were retained and the visual acuity gain dropped to zero. The very similar patterns of results from the two very different learning/practice domains point to similar underlying mechanisms. Further, our results indicate that spacing intervals in the range of 12 hours are optimal. A second peak may be around a spacing interval of 20 min, but here the data are less clear. We discuss relations between our results and basic learning at the neuronal level. PMID:24609081
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
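In outline, the procedure pairs resampled model fits with resampled residuals to form percentile prediction intervals; a minimal Python sketch along those lines is shown below, using a k-nearest-neighbour regressor and synthetic data as stand-ins (the paper does not prescribe a particular nonparametric model).

```python
# Hedged sketch of bootstrap prediction intervals for a nonparametric regressor.
# The KNN model and synthetic data are placeholders, not the paper's setup.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, 200)
x_new = np.array([[2.5], [7.5]])

boot_preds = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))               # bootstrap resample of the training data
    model = KNeighborsRegressor(n_neighbors=15).fit(x[idx], y[idx])
    resid = y[idx] - model.predict(x[idx])               # residuals of this bootstrap fit
    # Add a randomly drawn residual to mimic the noise of a new observation.
    boot_preds.append(model.predict(x_new) + rng.choice(resid, size=len(x_new)))

lower, upper = np.percentile(boot_preds, [2.5, 97.5], axis=0)
print("95% prediction intervals:", list(zip(lower.round(2), upper.round(2))))
```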
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Baid, Smita K; Sinaii, Ninet; Wade, Matt; Rubino, Domenica; Nieman, Lynnette K
2007-08-01
Although bedtime salivary cortisol measurement has been proposed as the optimal screening test for the diagnosis of Cushing's syndrome, its performance using commercially available assays has not been widely evaluated. Our objective was to compare RIA and tandem mass spectrometry (LC-MS/MS) measurement of salivary cortisol in obese subjects and healthy volunteers. We conducted a cross-sectional prospective study of outpatients. We studied 261 obese subjects (186 female) with at least two additional features of Cushing's syndrome and 60 healthy volunteers (30 female). Subjects provided split bedtime salivary samples for cortisol measurement by commercially available RIA and LC-MS/MS. Results were considered normal or abnormal based on the laboratory reference range. Subjects with abnormal results underwent evaluation for Cushing's syndrome. In paired samples, RIA gave a lower specificity than LC-MS/MS in obese subjects (86 vs. 94%, P = 0.008) but not healthy volunteers (86 vs. 82%, P = 0.71). Among subjects with at least one abnormal result, both values were abnormal in 44% (confidence interval 26-62%) of obese and 75% (confidence interval 33-96%) of healthy volunteers. In obese subjects, salivary cortisol concentrations were less than 4.0 to 643 ng/dl (<0.11-17.7 nmol/liter; normal, < or =100 ng/dl, 2.80 nmol/liter) by LC-MS/MS and less than 50 to 2800 ng/dl (1.4-77.3 nmol/liter; normal, < or =170 ng/dl, 4.7 nmol/liter) by RIA. Cushing's syndrome was not diagnosed in any subject. Salivary cortisol levels should not be used as the sole test to diagnose Cushing's syndrome if laboratory-provided reference ranges are used for diagnostic interpretation.
Underserved Areas and Pediatric Resident Characteristics: Is There Reason for Optimism?
Laraque-Arena, Danielle; Frintner, Mary Pat; Cull, William L
2016-01-01
To examine whether resident characteristics and experiences are related to practice in underserved areas. Cross-sectional survey of a national random sample of pediatric residents (n = 1000) and additional sample of minority residents (n = 223) who were graduating in 2009 was conducted. Using weighted logistic regression, we examined relationships between resident characteristics (background, values, residency experiences, and practice goals) and reported 1) expectation to practice in underserved area and 2) postresidency position in underserved area. Response rate was 57%. Forty-one percent of the residents reported that they had an expectation of practicing in an underserved area. Of those who had already accepted postresidency positions, 38% reported positions in underserved areas. Service obligation in exchange for loans/scholarships and primary care/academic pediatrics practice goals were the strongest predictors of expectation of practicing in underserved areas (respectively, adjusted odds ratio 4.74, 95% confidence interval 1.87-12.01; adjusted odds ratio 3.48, 95% confidence interval 1.99-6.10). Other significant predictors include hospitalist practice goals, primary care practice goals, importance of racial/ethnic diversity of patient population in residency selection, early plan (before medical school) to care for underserved families, mother with a graduate or medical degree, and higher score on the Universalism value scale. Service obligation and primary care/academic pediatrics practice goal were also the strongest predictors for taking a postresidency job in underserved area. Trainee characteristics such as service obligations, values of humanism, and desire to serve underserved populations offer the hope that policies and public funding can be directed to support physicians with these characteristics to redress the maldistribution of physicians caring for children. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Denlinger, Loren C; Manthei, David M; Seibold, Max A; Ahn, Kwangmi; Bleecker, Eugene; Boushey, Homer A; Calhoun, William J; Castro, Mario; Chinchili, Vernon M; Fahy, John V; Hawkins, Greg A; Icitovic, Nicolina; Israel, Elliot; Jarjour, Nizar N; King, Tonya; Kraft, Monica; Lazarus, Stephen C; Lehman, Erik; Martin, Richard J; Meyers, Deborah A; Peters, Stephen P; Sheerar, Dagna; Shi, Lei; Sutherland, E Rand; Szefler, Stanley J; Wechsler, Michael E; Sorkness, Christine A; Lemanske, Robert F
2013-01-01
The function of the P2X7 nucleotide receptor protects against exacerbation in people with mild-intermittent asthma during viral illnesses, but the impact of disease severity and maintenance therapy has not been studied. To evaluate the association between P2X7, asthma exacerbations, and incomplete symptom control in a more diverse population. A matched P2RX7 genetic case-control study was performed with samples from Asthma Clinical Research Network trial participants enrolled before July 2006, and P2X7 pore activity was determined in whole blood samples as an ancillary study to two trials completed subsequently. A total of 187 exacerbations were studied in 742 subjects, and the change in asthma symptom burden was studied in an additional 110 subjects during a trial of inhaled corticosteroid (ICS) dose optimization. African American carriers of the minor G allele of the rs2230911 loss-of-function single nucleotide polymorphism were more likely to have a history of prednisone use in the previous 12 months, with adjustment for ICS and long-acting β2-agonist use (odds ratio, 2.7; 95% confidence interval, 1.2-6.2; P = 0.018). Despite medium-dose ICS, attenuated pore function predicted earlier exacerbations in incompletely controlled patients with moderate asthma (hazard ratio, 3.2; confidence interval, 1.1-9.3; P = 0.033). After establishing control with low-dose ICS in patients with mild asthma, those with attenuated pore function had more asthma symptoms, rescue albuterol use, and FEV1 reversal (P < 0.001, 0.03, and 0.03, respectively) during the ICS adjustment phase. P2X7 pore function protects against exacerbations of asthma and loss of control, independent of baseline severity and maintenance therapy.
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
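Transference of this kind typically amounts to regressing the new method on the established one across paired samples and mapping the interval limits through the fitted line; the Python sketch below illustrates that step on simulated paired results (it omits the outlier checks, regression diagnostics, and verification against reference samples that the CLSI documents require).

```python
# Hedged sketch of reference-interval transference: regress method B on
# method A with paired samples, then map the established limits through the fit.
# Simulated values only; CLSI C28-A3/EP9 also require additional checks.
import numpy as np

rng = np.random.default_rng(0)
method_a = rng.uniform(10, 100, 120)                        # e.g., established platform results
method_b = 1.05 * method_a - 2.0 + rng.normal(0, 2.0, 120)  # e.g., new platform results

slope, intercept = np.polyfit(method_a, method_b, 1)
lower_a, upper_a = 20.0, 80.0                               # interval established on method A
print("transferred interval:",
      round(slope * lower_a + intercept, 1), "-",
      round(slope * upper_a + intercept, 1))
```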
Gonthier, Gerard
2013-01-01
The hydrogeology and water quality of the Dublin and Midville aquifer systems were characterized in the City of Waynesboro area in Burke County, Georgia, based on geophysical and drillers’ logs, flowmeter surveys, a 24-hour aquifer test, and the collection and chemical analysis of water samples in a newly constructed well. At the test site, the Dublin aquifer system consists of interlayered sands and clays between depths of 396 and 691 feet, and the Midville aquifer system consists of a sandy clay layer overlying a sand and gravel layer between depths of 728 and 936 feet. The new well was constructed with three screened intervals in the Dublin aquifer system and four screened intervals in the Midville aquifer system. Wellbore-flowmeter testing at a pumping rate of 1,000 gallons per minute indicated that 52.2 percent of the total flow was from the shallower Dublin aquifer system with the remaining 47.8 percent from the deeper Midville aquifer system. The lower part of the lower Midville aquifer (900 to 930 feet deep) contributed only 0.1 percent of the total flow. Hydraulic properties of the two aquifer systems were estimated using data from two wellbore-flowmeter surveys and a 24-hour aquifer test. Estimated values of transmissivity for the Dublin and Midville aquifer systems were 2,000 and 1,000 feet squared per day, respectively. The upper and lower Dublin aquifers have a combined thickness of about 150 feet, and the horizontal hydraulic conductivity of the Dublin aquifer system averages 10 feet per day. The upper Midville aquifer, lower Midville confining unit, and lower Midville aquifer have a combined thickness of about 210 feet, and the horizontal hydraulic conductivity of the Midville aquifer system averages 6 feet per day. The storage coefficient of the Dublin aquifer system, computed using the Theis method on water-level data from one observation well, was estimated to be 0.0003. With a thickness of about 150 feet, the specific storage of the Dublin aquifer system averages about 2×10⁻⁶ per foot. Water quality of the Dublin and Midville aquifer systems was characterized during the aquifer test on the basis of water samples collected from composite well flow originating from five depths in the completed production well. Samples were analyzed for total dissolved solids, specific conductance, pH, alkalinity, and major ions. Water-quality results from composite samples, known flow contribution from individual screens, and a mixing equation were used to calculate water-quality values for sample intervals between sample depths or below the bottom sample depth. With the exception of iron and manganese, constituent concentrations of water from each of the sampled intervals and total flow from the well were within U.S. Environmental Protection Agency primary and secondary drinking-water standards. Water from the bottommost sample interval in the lower part of the lower Midville aquifer (900 to 930 feet) contained manganese and iron concentrations of 59.1 and 1,160 micrograms per liter, respectively, which exceeded secondary drinking-water standards. Because this interval contributed only 0.1 percent of the total flow to the well, water quality of this interval had little effect on the composite well water quality. Two other sample intervals from the Midville aquifer system and the total flow from both aquifer systems contained iron concentrations that slightly exceeded the secondary drinking-water standard of 300 micrograms per liter.
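The mixing calculation referred to above is a flow-weighted mass balance, so the concentration contributed by an unsampled interval can be back-calculated from composite results and the flowmeter-derived contributions; a short Python illustration with made-up numbers (not the Waynesboro data):

```python
# Hedged sketch of the flow-weighted mixing equation used to back-calculate the
# concentration contributed by an individual screened interval. Illustrative numbers.
q_upper, c_upper = 999.0, 250.0    # flow (gal/min) and concentration (ug/L) from shallower intervals
q_total, c_total = 1000.0, 251.0   # composite flow and concentration for the full open interval

# Mass balance: q_total * c_total = q_upper * c_upper + q_bottom * c_bottom
q_bottom = q_total - q_upper
c_bottom = (q_total * c_total - q_upper * c_upper) / q_bottom
print("bottom-interval concentration:", round(c_bottom, 1), "ug/L")
```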
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)
1990-01-01
A predictive algorithm is used to determine, in near real time, the steady-state response of a slow-responding sensor, such as a hydrogen gas sensor of the type that produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady-state response of the sensor to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
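The abstract does not spell out the prediction formula, but one common way to realize this idea, assuming the sensor behaves approximately as a first-order system with a known time constant, is to extrapolate the steady-state value from the readings at the two ends of the current sampling interval. The sketch below is an illustrative assumption, not the patented algorithm.

```python
# Hedged illustration: predicting the steady-state output of a slow,
# roughly first-order sensor from the samples at the start and end of the
# current sampling interval. First-order dynamics and a known time constant
# tau are assumptions.
import math

def predict_steady_state(y_begin, y_end, dt, tau):
    """For y(t) = y_ss + (y_begin - y_ss) * exp(-t / tau), solve for y_ss
    given the readings at the two ends of an interval of length dt."""
    decay = math.exp(-dt / tau)
    return (y_end - y_begin * decay) / (1.0 - decay)

# Example: a sensor with a 30 s time constant, sampled every 2 s, responding
# to a step whose true steady-state value is about 10.0
print(predict_steady_state(y_begin=1.00, y_end=1.58, dt=2.0, tau=30.0))
```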
OSL response bleaching of BeO samples, using fluorescent light and blue LEDs
NASA Astrophysics Data System (ADS)
Groppo, D. P.; Caldas, L. V. E.
2016-07-01
Optically stimulated luminescence (OSL) is widely used as a dosimetric technique for many applications. In this work, the bleaching of the OSL response of BeO samples was studied. The samples were irradiated using a beta radiation source (90Sr+90Y); the bleaching treatments (fluorescent light and blue LEDs) were performed, and the results were compared. Various optical treatment time intervals were tested until complete bleaching of the OSL response was reached. The best combination of time interval and bleaching type was analyzed.
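To make the search for a bleaching time concrete, the sketch below fits a single-exponential decay to residual OSL signal versus bleaching time and estimates when the signal drops below a chosen threshold. The exponential form, the 1% "complete bleaching" criterion, and all data values are assumptions for illustration, not the measurements reported here.

```python
# Illustrative only: estimating the optical treatment time needed to bleach
# the OSL signal to near background, via a log-linear exponential fit.
import numpy as np

bleach_time_s = np.array([0, 30, 60, 120, 300, 600], dtype=float)   # hypothetical
residual_osl = np.array([1.00, 0.62, 0.39, 0.15, 0.01, 0.002])      # normalized signal

mask = residual_osl > 0
slope, intercept = np.polyfit(bleach_time_s[mask], np.log(residual_osl[mask]), 1)
decay_rate = -slope                                   # per second
t_complete = np.log(1.0 / 0.01) / decay_rate          # time to reach 1% of initial signal
print(f"fitted decay rate {decay_rate:.4f} 1/s, ~{t_complete:.0f} s to 1% residual")
```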
Demodulator for binary-phase modulated signals having a variable clock rate
NASA Technical Reports Server (NTRS)
Wu, Ta Tzu (Inventor)
1976-01-01
Method and apparatus for demodulating binary-phase modulated signals recorded on a magnetic stripe on a card as the card is manually inserted into a card reader. Magnetic transitions are sensed as the card is read, and the time interval between the immediately preceding basic transitions determines the duration of a data sampling pulse that detects the presence or absence of an intermediate transition pulse indicative of the two respective logic states. The duration of the data sampling pulse is approximately 75 percent of the preceding interval between basic transitions, permitting the demodulator to track changes in successive basic-transition intervals of up to approximately 25 percent.
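As a rough illustration of the timing rule described above (and not the patented circuit), the sketch below decodes a stream of transition timestamps by opening a sampling window equal to 75 percent of the previous bit cell: an intermediate transition inside the window is read as a logic 1, its absence as a logic 0, and the window is re-scaled each cell so the decoder tolerates swipe-speed changes of up to about 25 percent. Timestamps and function names are hypothetical.

```python
# Sketch of biphase (two-frequency) decoding from transition timestamps.
def decode_f2f(transition_times_us, window_fraction=0.75):
    """transition_times_us: timestamps (microseconds) of every sensed flux
    transition; the first two are assumed to bound a clocking cell that sets
    the initial cell duration. Returns the decoded bits."""
    bits = []
    prev_cell = transition_times_us[1] - transition_times_us[0]
    i = 1
    while i < len(transition_times_us) - 1:
        window = window_fraction * prev_cell                 # data sampling pulse
        gap = transition_times_us[i + 1] - transition_times_us[i]
        if gap < window and i + 2 < len(transition_times_us):
            bits.append(1)                                   # mid-cell transition present
            prev_cell = transition_times_us[i + 2] - transition_times_us[i]
            i += 2
        else:
            bits.append(0)                                   # no mid-cell transition
            prev_cell = gap
            i += 1
    return bits

# Hypothetical swipe: three ~100-microsecond cells encoding 1, 0, 1
print(decode_f2f([0, 100, 150, 200, 300, 350, 400]))
```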
Holloway, Owen G.; Waddell, Jonathan P.
2008-01-01
A borehole straddle packer was developed and tested by the U.S. Geological Survey to characterize the vertical distribution of contaminants, head, and hydraulic properties in open-borehole wells as part of an ongoing investigation of ground-water contamination at U.S. Air Force Plant 6 (AFP6) in Marietta, Georgia. To better understand contaminant fate and transport in a crystalline bedrock setting and to support remedial activities at AFP6, numerous wells have been constructed that include long open-hole intervals in the crystalline bedrock. These wells can intersect several water-producing discontinuities, which may contain contaminants. Because of the complexity of ground-water flow and contaminant movement in the crystalline bedrock, it is important to characterize the hydraulic and water-quality characteristics of discrete intervals in these wells. The straddle packer facilitates ground-water sampling and hydraulic testing of discrete intervals, as well as delivery of fluids, including tracer suites and remedial agents, into these discontinuities. The straddle packer consists of two inflatable packers, a dual-pump system, a pressure-sensing system, and an aqueous injection system. Tests were conducted to assess the accuracy of the pressure-sensing system, and water samples were collected for analysis of volatile organic compound (VOC) concentrations. Pressure-transducer readings matched the computed water-column height, with a coefficient of determination greater than 0.99. The straddle packer incorporates both an air-driven piston pump and a variable-frequency, electronic, submersible pump. Only slight differences were observed between VOC concentrations in samples collected using the two different types of sampling pumps during two sampling events in July and August 2005. A test conducted to assess the effect of stagnation on VOC concentrations in water trapped in the system's pump-tubing reel showed that concentrations were not affected. A comparison was conducted to assess differences among three water-sampling methods: pumping a packer-isolated zone with the submersible pump, using a grab sampler, and using a passive diffusion sampler. Concentrations of tetrachloroethylene, trichloroethylene, and 1,2-dichloropropane were greatest for samples collected using the submersible pump in the packer-isolated interval, suggesting that the straddle packer yielded the least dilute samples.
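As an illustration of the pressure-sensing check mentioned above, the sketch below converts hypothetical transducer readings to water-column height using the standard fresh-water conversion of roughly 0.433 psi per foot and reports the coefficient of determination against independently computed heights. The data and function names are assumptions, not the study's measurements.

```python
# Comparing transducer-derived water-column height with computed height.
import numpy as np

PSI_PER_FOOT = 0.433                       # fresh water, approximate

def r_squared(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

transducer_psi = np.array([12.9, 21.6, 30.4, 39.1, 47.8])          # hypothetical readings
computed_height_ft = np.array([29.8, 50.0, 70.1, 90.2, 110.0])     # from depth-to-water tape
transducer_height_ft = transducer_psi / PSI_PER_FOOT
print(f"R^2 = {r_squared(computed_height_ft, transducer_height_ft):.4f}")
```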
Kim, H J; Kwon, S B; Whang, K U; Lee, J S; Park, Y L; Lee, S Y
2018-02-01
Hyaluronidase injection is a commonly performed treatment for overcorrection or misplacement of hyaluronic acid (HA) filler. Many patients want HA filler reinjection after the use of hyaluronidase, though the optimal timing of HA filler reinjection remains unknown. The aim of this study was to determine the optimal time interval between hyaluronidase injection and HA filler reinjection. Six Sprague-Dawley rats were injected with a single monophasic HA filler. One week after injection, the injected sites were treated with hyaluronidase. HA fillers were then reinjected sequentially at differing time intervals, ranging from 30 minutes to 14 days. One hour after reinjection of the last HA filler, all injection sites were excised for histologic evaluation. Three hours after reinjection of HA filler, the filler material again became evident, retaining its shape and volume. Six hours after reinjection, the filler material had recovered almost all of its original volume, and there were no significant differences from the positive control. Our data suggest that hyaluronidase loses its effect in the dermis and subcutaneous tissue within 3-6 hours after injection and that successful engraftment of reinjected HA filler can be accomplished 6 hours after the hyaluronidase injection.
Garg, Harish
2013-03-01
The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair), and related costs using available resources and uncertain data. For this purpose, an availability-cost optimization model has been constructed to determine the optimal design parameters for improving system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters that affect system performance are obtained in the form of fuzzy membership functions by the proposed confidence-interval-based fuzzy Lambda-Tau (CIBFLT) methodology. The results computed by CIBFLT are compared with those of the existing fuzzy Lambda-Tau methodology. Sensitivity analysis of the system MTBF has also been addressed. The methodology is illustrated through a case study of the washing unit, a main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
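For orientation, the sketch below shows the generic alpha-cut / lambda-tau computation that underlies approaches of this kind: triangular fuzzy failure rates and repair times are cut at a membership level alpha, combined with the standard series lambda-tau expressions, and turned into an availability interval. It is a generic illustration under stated assumptions, not the paper's CIBFLT formulation or its case-study data.

```python
# Generic alpha-cut lambda-tau sketch for a series system.
def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership alpha."""
    a, m, b = tfn
    return a + alpha * (m - a), b - alpha * (b - m)

def series_lambda_tau(lams, taus):
    """Series system: lambda_s = sum(lambda_i); tau_s = sum(lambda_i*tau_i)/lambda_s."""
    lam_s = sum(lams)
    tau_s = sum(l * t for l, t in zip(lams, taus)) / lam_s
    return lam_s, tau_s

def availability_interval(lambda_tfns, tau_tfns, alpha):
    """Availability = MTBF/(MTBF+MTTR) = 1/(1 + lambda*tau); the pessimistic
    bound pairs high failure rates with long repairs, the optimistic bound
    pairs low failure rates with short repairs."""
    lam_lo = [alpha_cut(t, alpha)[0] for t in lambda_tfns]
    lam_hi = [alpha_cut(t, alpha)[1] for t in lambda_tfns]
    tau_lo = [alpha_cut(t, alpha)[0] for t in tau_tfns]
    tau_hi = [alpha_cut(t, alpha)[1] for t in tau_tfns]
    lam_s_hi, tau_s_hi = series_lambda_tau(lam_hi, tau_hi)
    lam_s_lo, tau_s_lo = series_lambda_tau(lam_lo, tau_lo)
    return 1.0 / (1.0 + lam_s_hi * tau_s_hi), 1.0 / (1.0 + lam_s_lo * tau_s_lo)

# Hypothetical two-component subsystem (failures per hour, repair hours)
lambdas = [(1e-4, 2e-4, 3e-4), (2e-4, 4e-4, 6e-4)]
taus = [(2.0, 4.0, 6.0), (1.0, 2.0, 3.0)]
for alpha in (0.0, 0.5, 1.0):
    print(alpha, availability_interval(lambdas, taus, alpha))
```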
Goodman, Susan M
2015-05-01
Patients with rheumatoid arthritis continue to undergo arthroplasty despite widespread use of potent disease-modifying antirheumatic drugs (DMARDs), including the biologic tumor necrosis factor-α inhibitors. In fact, over 80% of RA patients are taking DMARDs or biologics at the time of arthroplasty. While many RA-specific factors, including disease activity and disability, may contribute to the increase in infection in RA patients undergoing arthroplasty, immunosuppressant medications may also play a role. As both the age of patients with RA undergoing arthroplasty and the incidence of arthroplasty among the older population are rising, optimal perioperative management of DMARDs and biologics in older patients with RA is a growing challenge. Although evidence is sparse, most of it supports withholding tumor necrosis factor-α inhibitors and other biologics prior to surgery based on the dosing interval, and continuing methotrexate and hydroxychloroquine through the perioperative period. There is no consensus regarding leflunomide, and rituximab risk does not appear to be related to the interval between infusion and surgery. This paper reviews arthroplasty outcomes, including complications, in patients with RA, and discusses the rationale for strategies for optimal perioperative management of DMARDs and biologics to minimize complications and improve outcomes.
Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.
2016-01-01
One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations, allowing aircraft to remain close to their OPDs. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information, and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.
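A highly simplified, hypothetical control law in the spirit of the ground-speed-feedback modification described above is sketched below: when trajectory information for the lead aircraft is unreliable, the speed command is driven by measured ground speeds and the spacing error rather than by a trajectory-based prediction. The gain, limits, and variable names are assumptions, not NASA's Interval Management algorithm.

```python
# Hypothetical proportional ground-speed feedback for interval management.
def im_speed_command(lead_ground_speed, spacing_error_s,
                     gain_kt_per_s=0.5, max_adjust_frac=0.10):
    """spacing_error_s: measured spacing behind the lead minus the assigned
    spacing (seconds); positive means the ownship is too far behind and
    should speed up. Returns a commanded ground speed, limited to +/-10% of
    the lead-matching baseline so speed changes stay operationally modest."""
    baseline = lead_ground_speed                    # match the lead when the error is zero
    command = baseline + gain_kt_per_s * spacing_error_s
    lo, hi = baseline * (1.0 - max_adjust_frac), baseline * (1.0 + max_adjust_frac)
    return max(lo, min(hi, command))

# Example: lead at 240 kt ground speed, ownship 12 s farther behind than assigned
print(im_speed_command(lead_ground_speed=240.0, spacing_error_s=12.0))   # -> 246.0
```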