Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) the optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with the greatest net economic benefit for ground-water management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
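A toy sketch of this style of ensemble-based design (all quantities below are invented for illustration, not from the paper): each hypothetical candidate measurement is scored by the trace of the Kalman-updated parameter covariance, a simple variance-based stand-in for the entropy metrics (SD, DFS, RE) named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: n_ens realizations of n_par uncertain model parameters.
n_ens, n_par = 200, 3
ens = rng.normal(size=(n_ens, n_par))

# Hypothetical linear forward maps for 5 candidate sampling locations; in a
# real EnKF study these would come from model simulations, not a fixed matrix.
H = rng.normal(size=(5, n_par))
obs_err_var = 0.5 ** 2

def posterior_trace(ens, h, r):
    """Trace of the analysis covariance after assimilating one observation
    with forward map h and error variance r, using ensemble statistics."""
    P = np.cov(ens.T)            # prior parameter covariance
    s = P @ h                    # cross-covariance with the observation
    k = s / (h @ s + r)          # Kalman gain (rank-1 update)
    return np.trace(P - np.outer(k, s))

scores = [posterior_trace(ens, H[i], obs_err_var) for i in range(len(H))]
print("most informative candidate:", int(np.argmin(scores)))
print("posterior-variance scores :", np.round(scores, 3))
```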
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01), with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
GMOtrack: generator of cost-effective GMO testing strategies.
Novak, Petra Kralj; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
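The two-phase screening-identification cost trade-off that GMOtrack optimizes can be sketched as a small exhaustive search. All assay names, coverages, costs and prior probabilities below are invented for illustration; the actual algorithm handles many more GMO lines and a richer cost model.

```python
import itertools

# Hypothetical setup: three GMO lines with local prior presence probabilities,
# screening assays that each cover a subset of lines, and per-test costs.
lines = ["A", "B", "C"]
prior = {"A": 0.30, "B": 0.10, "C": 0.05}          # assumed local priors
screens = {"35S": {"A", "B"}, "NOS": {"B", "C"}}   # assay -> lines it detects
screen_cost, id_cost = 1.0, 4.0                    # screening vs identification cost

best = None
for subset in itertools.chain.from_iterable(
        itertools.combinations(screens, k) for k in range(len(screens) + 1)):
    covered = set().union(*(screens[s] for s in subset)) if subset else set()
    # Phase 1: pay for the chosen screens. Phase 2: identification is always
    # needed for uncovered lines, and for covered lines only when a screen
    # fires (approximated here by each line's own prior).
    exp_cost = len(subset) * screen_cost
    exp_cost += sum(id_cost for l in lines if l not in covered)
    exp_cost += sum(prior[l] * id_cost for l in covered)
    if best is None or exp_cost < best[1]:
        best = (subset, exp_cost)

print(f"best screening set: {best[0]}, expected cost per sample: {best[1]:.2f}")
```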
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
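A toy calculation can illustrate the switching behavior described above. All growth rates, initial prevalences, per-sample costs and the budget below are assumptions; the paper derives its rules from a linearized host-vector model rather than fixed exponential curves.

```python
import numpy as np

# Toy prevalence trajectories during the exponential growth phase after
# introduction; every number here is invented for illustration only.
t = np.arange(0, 61)                         # days since introduction
prev_host = 1e-5 * np.exp(0.15 * t)          # host prevalence
prev_vector = 1e-4 * np.exp(0.05 * t)        # vector prevalence
cost_host, cost_vector = 10.0, 1.0           # cost per sample (serology vs pooled vector)
budget = 200.0

def detection_prob(prev, n_samples):
    """P(at least one positive) from n independent samples at prevalence prev."""
    return 1.0 - (1.0 - prev) ** n_samples

# Spend the whole budget on one population; the better choice switches with
# time since introduction, mirroring the paper's time-dependent solution.
p_host = detection_prob(prev_host, budget / cost_host)
p_vector = detection_prob(prev_vector, budget / cost_vector)
if (p_host > p_vector).any():
    print(f"sample vectors up to day {t[np.argmax(p_host > p_vector)]}, hosts afterwards")
else:
    print("sample vectors throughout")
```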
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
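The simulator's core logic is straightforward to reproduce. Below is a self-contained sketch (our own moving-average crossover strategy and parameter grid, not the SEBO tool itself): the variant that looks best on one random walk is then evaluated on a second, independent walk, where its apparent skill typically vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

def sharpe(returns):
    """Annualized Sharpe ratio; -inf for degenerate (all-flat) strategies."""
    sd = returns.std()
    return -np.inf if sd == 0 else returns.mean() / sd * np.sqrt(252)

def ma_crossover_returns(prices, fast, slow):
    """Daily returns of a long/flat moving-average crossover strategy,
    trading on the previous day's signal."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    signal = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]
    rets = np.diff(np.log(prices))[-n + 1:]
    return signal * rets

# Two independent random walks: in-sample and out-of-sample price series.
walk = lambda: np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
prices_in, prices_out = walk(), walk()

# Exhaustively "optimize" the strategy in sample ...
grid = [(f, s) for f in range(2, 20) for s in range(21, 100, 5)]
best = max(grid, key=lambda p: sharpe(ma_crossover_returns(prices_in, *p)))

# ... then test it out of sample: the apparent edge usually evaporates.
print("best params:", best)
print("in-sample Sharpe:     %.2f" % sharpe(ma_crossover_returns(prices_in, *best)))
print("out-of-sample Sharpe: %.2f" % sharpe(ma_crossover_returns(prices_out, *best)))
```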
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
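A miniature of least squares PCE regression under two of the sampling rules reviewed here: the sketch below fits a degree-6 Legendre expansion to a smooth one-dimensional function from Monte Carlo versus stratified-midpoint samples. The test function and sample sizes are our own choices and carry no claim about which rule is optimal in general.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

def legendre_design(x, order):
    """Design matrix of orthonormal Legendre polynomials on [-1, 1]."""
    cols = []
    for k in range(order + 1):
        c = np.zeros(k + 1)
        c[k] = 1.0
        cols.append(legendre.legval(x, c) * np.sqrt(2 * k + 1))
    return np.column_stack(cols)

f = lambda x: np.exp(0.7 * x) + 0.2 * x ** 3      # model to approximate
order, n = 6, 30

# Monte Carlo vs. stratified-midpoint sampling of the uniform input.
x_mc = rng.uniform(-1, 1, n)
x_strat = -1 + 2 * (np.arange(n) + 0.5) / n

x_test = np.linspace(-1, 1, 500)
for name, x in [("Monte Carlo", x_mc), ("stratified", x_strat)]:
    coef, *_ = np.linalg.lstsq(legendre_design(x, order), f(x), rcond=None)
    err = np.max(np.abs(legendre_design(x_test, order) @ coef - f(x_test)))
    print(f"{name:12s} max abs error: {err:.2e}")
```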
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Besmer, Michael D; Hammes, Frederik; Sigrist, Jürg A; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies.
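The frequency argument can be mimicked with a toy simulation. The event rate, peak duration and sampling counts below are invented and not calibrated to the study's springs; the point is only that evenly spaced grab samples miss most short events, whereas precipitation-triggered samples hit every event by construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# One toy year of hourly time steps with short contamination peaks
# (about 25 events per year, 24 h each; all numbers invented).
hours = 365 * 24
event_starts = np.sort(rng.choice(hours - 24, size=25, replace=False))

def fraction_of_events_sampled(n_per_year):
    """Share of peak events hit by at least one evenly spaced grab sample."""
    idx = np.linspace(0, hours - 1, n_per_year, dtype=int)
    hit = [np.any((idx >= s) & (idx < s + 24)) for s in event_starts]
    return np.mean(hit)

for n in [4, 12, 26, 52, 104, 365]:   # quarterly ... daily
    print(f"{n:4d} fixed samples/year -> {fraction_of_events_sampled(n):5.0%} of events hit")
# Precipitation-triggered sampling (three samples per event) hits every event
# by construction, which is the strategy the study recommends.
```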
A new strategy for array optimization applied to Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Faria, C.; Stephany, S.; Sawant, H. S.
Radio interferometric arrays measure the Fourier transform of the sky brightness distribution in a finite set of points that are determined by the cross-correlation of different pairs of antennas of the array. The sky brightness distribution is reconstructed by the inverse Fourier transform of the sampled visibilities. The quality of the reconstructed images strongly depends on the array configuration, since it determines the sampling function and therefore the points in the Fourier plane. This work proposes a new optimization strategy for the array configuration that is based on the entropy of the distribution of the sample points in the Fourier plane. A stochastic optimizer, the Ant Colony Optimization, employs the entropy of the point distribution in the Fourier plane to iteratively refine the candidate solutions. The proposed strategy was developed for the Brazilian Decimetric Array (BDA), a radio interferometric array that is currently being developed for solar observations at the Brazilian Institute for Space Research. Configuration results corresponding to the Fourier plane coverage, synthesized beam and side lobe levels are shown for an optimized BDA configuration obtained with the proposed strategy and compared to the results for a standard T array configuration that was originally proposed.
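A sketch of the entropy objective used by this strategy (our own simplified version: gridded counts of instantaneous uv points, ignoring Earth-rotation synthesis and the Ant Colony search itself):

```python
import numpy as np

def uv_entropy(antenna_xy, n_bins=16):
    """Shannon entropy of the distribution of uv samples over a grid.
    Higher entropy means more uniform Fourier-plane coverage."""
    xy = np.asarray(antenna_xy, dtype=float)
    # Baselines: pairwise antenna separations (and their conjugates).
    d = xy[:, None, :] - xy[None, :, :]
    uv = d[~np.eye(len(xy), dtype=bool)].reshape(-1, 2)
    h, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=n_bins)
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(4)
# A simple T-shaped array versus a random layout with the same antenna count.
t_array = [(x, 0.0) for x in range(-8, 9)] + [(0.0, -y) for y in range(1, 17)]
random_array = rng.uniform(-8, 8, size=(33, 2))
print(f"T-array uv entropy:      {uv_entropy(t_array):.3f}")
print(f"random-array uv entropy: {uv_entropy(random_array):.3f}")
```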
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy
Wen, Hui; Xie, Weixin; Pei, Jihong
2016-01-01
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a cascaded BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space, the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: first, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network is then used for adaptive nonlinear mapping of the sample space. Next, according to the number of adaptively generated RBF hidden nodes, the number of subsequent BP input nodes is determined, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional and large-sample data sets, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms.
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing the DNAPL source in an aquifer system is vital for successful remediation and for lowering the remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search strategy, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. The comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications and work have been done as follows. Hydraulic conductivity (K) random fields are generated after fitting the measured K data to a variogram model. The locations of potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, which will later be optimized by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support by the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out, and superior to results obtained with non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
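The central idea, deriving the sampling density from the power spectrum of reference data rather than from a parametric model, fits in a few lines. The reference image, grid size and acceleration factor below are invented, and real implementations usually also force full sampling of a small central k-space block, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(5)

def adapted_mask(reference, acceleration=4):
    """Variable-density random undersampling mask whose density follows the
    power spectrum of a reference image (toy version of the paper's idea)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
    spec += spec.max() * 1e-12            # floor so every location has p > 0
    pdf = spec / spec.sum()
    n_keep = reference.size // acceleration
    flat = rng.choice(reference.size, size=n_keep, replace=False, p=pdf.ravel())
    mask = np.zeros(reference.size, dtype=bool)
    mask[flat] = True
    return mask.reshape(reference.shape)

# Hypothetical reference: a smooth object, so energy concentrates at low
# spatial frequencies and the mask samples the k-space center densely.
x = np.linspace(-1, 1, 128)
ref = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) / 0.1)
mask = adapted_mask(ref, acceleration=4)
print(f"sampling rate: {mask.mean():.2%} (center row kept: {mask[64].sum()} of 128)")
```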
Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A
2006-06-01
When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
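For the first (one-way) model, the allocation rule follows directly from the variance of the estimated group mean, Var = sigma_B^2/k + sigma_W^2/(k*n) for k workers with n repeats each. A small numeric sketch with invented variance components and a fixed measurement budget:

```python
def var_mean(sigma2_b, sigma2_w, k_workers, n_reps):
    """Variance of the estimated group mean under the one-way random effects
    ANOVA model: Var = sigma_B^2 / k + sigma_W^2 / (k * n)."""
    return sigma2_b / k_workers + sigma2_w / (k_workers * n_reps)

budget = 24                        # total number of measurements, k * n
sigma2_b, sigma2_w = 1.0, 2.0      # illustrative variance components
for k in [2, 3, 4, 6, 8, 12]:
    n = budget // k
    print(f"{k:2d} workers x {n:2d} repeats -> Var = {var_mean(sigma2_b, sigma2_w, k, n):.3f}")
# The variance falls monotonically as the number of workers grows, which
# reproduces the paper's rule: many workers, only 2-3 repeats each.
```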
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Juliane
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2].
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2017-01-01
Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant to screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant to screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy.
Xun-Ping, W; An, Z
2017-07-27
Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points in each layer were calculated through the Hammond-McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA was performed. Results: The method (SOPA) proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
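The stratified-allocation step can be illustrated with the classical Neyman rule, n_h proportional to N_h*S_h, which sends more samples to large, heterogeneous strata. The stratum sizes and standard deviations below are invented, and the paper's own calculation uses the Hammond-McCullagh equation rather than this textbook formula.

```python
import numpy as np

def neyman_allocation(stratum_sizes, stratum_sds, n_total):
    """Optimal sample allocation across strata: n_h proportional to N_h * S_h."""
    w = np.asarray(stratum_sizes) * np.asarray(stratum_sds)
    return np.maximum(1, np.round(n_total * w / w.sum()).astype(int))

# Hypothetical 5 plant-abundance strata of a marshland plot: stratum sizes
# (sampling units) and assumed snail-density standard deviations.
sizes = [400, 300, 150, 100, 50]
sds = [0.2, 0.5, 0.9, 1.4, 2.0]   # higher variability where plants are dense
print("samples per stratum:", neyman_allocation(sizes, sds, n_total=60))
```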
MOI to TEI: a Mars Sample Return strategy
NASA Technical Reports Server (NTRS)
Smith, Chad W.; Maddock, Robert W.
2006-01-01
This paper describes the issues and challenges related to the design of the rendezvous between the Earth Return Vehicle (ERV) and the Orbiting Sample (OS) for the Mars Sample Return (MSR) mission. In particular, attention will be focused on the strategy for 'optimizing' the intermediate segment of the rendezvous process, during which there are a great number of variables that must be considered and well understood.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Xu, Henglong; Yong, Jiang; Xu, Guangjian
2015-12-30
Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, while the traditional 4-sampling may only explain <65% of the total variance. With the increase of the sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
The surrogate-based simulation-optimization technique is an effective approach for optimizing surfactant enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model in order to reduce the computational burden, is the key to such studies. However, previous studies have generally been based on a stand-alone surrogate model, and have rarely tried to improve the approximation accuracy of the surrogate model to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN model, SVR model, and Kriging model using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computation accuracy.
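The ensemble-surrogate idea (without the set pair analysis weighting itself) can be sketched with off-the-shelf regressors: fit several surrogates to a toy response surface, weight them by validation accuracy, and predict with the weighted sum. The response function, sample sizes and model settings below are our own illustrations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(6)

# Toy "simulation model": remediation cost as a function of two pumping rates.
f = lambda X: np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
X_train, X_val = rng.uniform(-1, 1, (80, 2)), rng.uniform(-1, 1, (200, 2))
y_train, y_val = f(X_train), f(X_val)

models = [SVR(C=10.0), GaussianProcessRegressor(),
          MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)]
fits = [m.fit(X_train, y_train) for m in models]

# Weight each surrogate by inverse validation MSE, a simple stand-in for the
# paper's set-pair-analysis weights (note the same set is reused for scoring).
mse = np.array([np.mean((m.predict(X_val) - y_val) ** 2) for m in fits])
w = (1 / mse) / (1 / mse).sum()

ensemble = lambda X: sum(wi * m.predict(X) for wi, m in zip(w, fits))
print("weights:", np.round(w, 3))
print("ensemble MSE:", np.mean((ensemble(X_val) - y_val) ** 2))
```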
Petinataud, Dimitri; Berger, Sibel; Ferdynus, Cyril; Debourgogne, Anne; Contet-Audonneau, Nelly; Machouart, Marie
2016-05-01
Onychomycosis is a common nail disorder, mainly due to dermatophytes, for which the conventional diagnosis requires direct microscopic observation and culture of a biological sample. Nevertheless, antifungal treatments are commonly prescribed without a mycological examination having been performed, partly because of the slow growth of dermatophytes. Therefore, molecular biology has been applied to this pathology, to support a quick and accurate distinction between onychomycosis and other nail damage. Commercial kits are now available from several companies for improving traditional microbiological diagnosis. In this paper, we present the first evaluation of the real-time PCR kit marketed by Bio Evolution for the diagnosis of dermatophytosis. We then compare the efficacy of the kit on optimal and non-optimal samples. This study was conducted on 180 nail samples, processed by conventional methods and retrospectively analysed using this kit. According to our results, this molecular kit has shown high specificity and sensitivity in detecting dermatophytes, regardless of sample quality. On the other hand, and as expected, optimal samples allowed the identification of a higher number of dermatophytes by conventional mycological diagnosis, compared to non-optimal samples. Finally, we suggest several strategies for the practical use of such a kit in a medical laboratory for quick pathogen detection.
Bare-bones teaching-learning-based optimization.
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the learning strategy of the teacher phase in the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learning strategy of the learner phase in the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
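For orientation, here is a compact sketch of the standard TLBO baseline that BBTLBO modifies (teacher phase plus learner phase); the Gaussian bare-bones sampling and neighborhood search of BBTLBO are not reproduced, and all run parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def tlbo(objective, bounds, n_learners=20, n_iters=100):
    """Minimal standard TLBO minimizer (teacher + learner phases)."""
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_learners, dim))
    fX = np.apply_along_axis(objective, 1, X)
    for _ in range(n_iters):
        # Teacher phase: move everyone toward the best learner.
        teacher, mean = X[fX.argmin()], X.mean(axis=0)
        Tf = rng.integers(1, 3)                  # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((n_learners, dim)) * (teacher - Tf * mean), lo, hi)
        f_new = np.apply_along_axis(objective, 1, X_new)
        improved = f_new < fX
        X[improved], fX[improved] = X_new[improved], f_new[improved]
        # Learner phase: learn pairwise from a random classmate.
        for i in range(n_learners):
            j = rng.integers(n_learners)
            if j == i:
                continue
            step = (X[i] - X[j]) if fX[i] < fX[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            f_cand = objective(cand)
            if f_cand < fX[i]:
                X[i], fX[i] = cand, f_cand
    return X[fX.argmin()], fX.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = tlbo(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
print(f"best value: {best_f:.3e}")
```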
Gradient Material Strategies for Hydrogel Optimization in Tissue Engineering Applications
2018-01-01
Although a number of combinatorial/high-throughput approaches have been developed for biomaterial hydrogel optimization, a gradient sample approach is particularly well suited to identifying hydrogel property thresholds that alter cellular behavior, owing to reduced variation in material preparation and the ability to screen biological response over a continuous range instead of discrete samples each containing only one condition. This review highlights recent work on cell–hydrogel interactions using a gradient material sample approach. Fabrication strategies for composition, material and mechanical property, and bioactive signaling gradient hydrogels that can be used to examine cell–hydrogel interactions are discussed. The effects of gradients in hydrogel samples on cellular adhesion, migration, proliferation, and differentiation are then examined, providing an assessment of the current state of the field and the potential of wider use of the gradient sample approach to accelerate our understanding of the effects of matrices on cellular behavior.
A multiple-objective optimal exploration strategy
Christakos, G.; Olea, R.A.
1988-01-01
Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
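The study's comparison can be reproduced in miniature: build a random-start and a midpoint-start LHS, improve each with the same simple column-swap search under a maximin criterion, and compare the resulting space-filling scores. The optimizer and settings below are deliberately crude stand-ins for the OLHS algorithms discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def lhs(n, dim, midpoint=False):
    """Latin hypercube sample: one point per interval in each dimension."""
    u = 0.5 if midpoint else rng.random((n, dim))
    perms = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (perms + u) / n

def maximin(X):
    """Minimum pairwise distance (larger means better space filling)."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices(len(X), 1)].min()

def optimize(X, n_steps=2000):
    """Crude column-swap hill climbing, preserving the LHS structure."""
    best = maximin(X)
    n, dim = X.shape
    for _ in range(n_steps):
        i, j, k = rng.integers(n), rng.integers(n), rng.integers(dim)
        X[i, k], X[j, k] = X[j, k], X[i, k]
        score = maximin(X)
        if score >= best:
            best = score
        else:
            X[i, k], X[j, k] = X[j, k], X[i, k]   # revert the swap
    return best

for name, mid in [("random LHS start", False), ("midpoint LHS start", True)]:
    scores = [optimize(lhs(30, 4, midpoint=mid)) for _ in range(5)]
    print(f"{name}: maximin = {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```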
Fredriksson, Mattias J; Petersson, Patrik; Axelsson, Bengt-Olof; Bylund, Dan
2011-10-17
A strategy for rapid optimization of liquid chromatography column temperature and gradient shape is presented. The optimization as such is based on the well-established retention and peak width models implemented in software such as DryLab and LC simulator. The novel part of the strategy is a highly automated processing algorithm for detection and tracking of chromatographic peaks in noisy liquid chromatography-mass spectrometry (LC-MS) data. The strategy is presented and visualized by the optimization of the separation of two degradants present in ultraviolet (UV) exposed fluocinolone acetonide. It should be stressed, however, that it can be utilized for LC-MS analysis of any sample and application where several runs are conducted on the same sample. In the application presented, 30 components that were difficult or impossible to detect in the UV data could be automatically detected and tracked in the MS data by using the proposed strategy. The number of correctly tracked components was above 95%. Using the parameters from the reconstructed data sets in the model gave good agreement between predicted and observed retention times at optimal conditions. The area of the smallest tracked component was estimated to be 0.08% of the main component, a level relevant for the characterization of impurities in the pharmaceutical industry.
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl; Tijssen, Rob H.N.; Senneville, Baudouin D. de
2015-03-01
Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.
Human Information Processing and Supervisory Control.
1980-05-01
Contents include: interpretation of information; sampling strategies; speed-accuracy tradeoff. The operator is usually highly trained, and largely controls the tasks, being allowed to use what strategies he will. Risk is incurred in ways which can make his search less than optimally effective.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
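A minimal sketch of the kernel/training-proportion sweep that the quantitative optimization strategy implies, using scikit-learn's SVC. The feature clouds, sample sizes, and separation below are invented stand-ins for the paper's fingerprint features, not its data:

```python
# Sweep SVM kernel and training-sample proportion, in the spirit of the
# quantitative optimization strategy above. Synthetic 4-D feature vectors
# stand in for the ridge-distance, gray, frequency and Harris-corner features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X_real = rng.normal(0.0, 1.0, size=(n, 4))    # hypothetical "real" prints
X_synth = rng.normal(0.8, 1.2, size=(n, 4))   # hypothetical "synthetic" prints
X = np.vstack([X_real, X_synth])
y = np.array([0] * n + [1] * n)

for kernel in ("poly", "rbf"):
    for train_prop in (0.05, 0.15):           # proportions studied in the paper
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_prop, stratify=y, random_state=0)
        clf = SVC(kernel=kernel).fit(X_tr, y_tr)
        print(f"kernel={kernel:4s} train_prop={train_prop:.2f} "
              f"accuracy={clf.score(X_te, y_te):.3f}")
```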
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on subset selection in a large basis of climatic series, using an ad hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to accurately estimate the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs), and can be combined with most "off-the-shelf" optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
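The subset-selection idea can be sketched as clustering a basis of climatic series and keeping one medoid per cluster. The summary features and subset size below are illustrative assumptions, not the authors' ad hoc similarity function:

```python
# Cluster a large basis of climatic series on simple summary features and keep
# one representative (medoid) per cluster, so downstream optimization only
# averages simulation outputs over the subset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_series, n_days = 200, 365
series = rng.normal(15, 8, (n_series, n_days)) + 10 * np.sin(
    2 * np.pi * np.arange(n_days) / 365)      # synthetic daily temperatures

# Assumed similarity features: mean, spread, and seasonal-amplitude proxy.
feats = np.column_stack([
    series.mean(axis=1),
    series.std(axis=1),
    series.max(axis=1) - series.min(axis=1),
])

k = 10  # subset size (assumed)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)

# Medoid = actual series closest to each cluster centre (infinite penalty
# outside the cluster keeps the argmin within it).
subset = [
    int(np.argmin(((feats - c) ** 2).sum(axis=1)
                  + np.where(km.labels_ == i, 0.0, np.inf)))
    for i, c in enumerate(km.cluster_centers_)
]
print("representative series indices:", subset)
```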
Optimizing direct amplification of forensic commercial kits for STR determination.
Caputo, M; Bobillo, M C; Sala, A; Corach, D
2017-04-01
Direct DNA amplification in forensic genotyping reduces analytical time when large sample sets are being analyzed. The amplification success depends mainly upon two factors: on one hand, the PCR chemistry and, on the other, the type of solid substrate where the samples are deposited. We developed a workflow strategy aiming to optimize time and cost when starting from blood samples spotted onto diverse absorbent substrates. A set of 770 blood samples spotted onto Blood cards, Whatman® 3MM paper, FTA™ Classic cards, and Whatman® Grade 1 was analyzed by a unified working strategy including a low-cost pre-treatment, a PCR amplification volume scale-down, and the use of the 3500 Genetic Analyzer as the analytical platform. Samples were analyzed using three different commercial multiplex STR direct amplification kits. The efficiency of the strategy was evidenced by a higher percentage of high-quality profiles obtained (over 94%), a reduced number of re-injections (average 3.2%), and a reduced amplification failure rate (lower than 5%). The average peak height ratio among different commercial kits was 0.91, and the intra-locus balance showed values ranging from 0.92 to 0.94. A comparison with previously reported results was performed, demonstrating the efficiency of the proposed modifications. The protocol described herein showed high performance, producing optimal quality profiles while being both time and cost effective. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proved robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest.
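A hedged sketch of entropy-driven panel selection plus a Hamming-distance check, under Hardy-Weinberg assumptions and simulated genotypes. Panel size, allele frequencies, and population size are illustrative, not the paper's optimized sets:

```python
# Pick the SNPs whose genotype distributions carry the most entropy (minor
# allele frequency near 0.5), then check Hamming distances between simulated
# individuals on the selected panel.
import numpy as np

rng = np.random.default_rng(2)
n_snps, n_people, panel_size = 500, 1000, 30

maf = rng.uniform(0.01, 0.5, n_snps)          # minor allele frequencies
# Genotype probabilities (0/1/2 minor-allele copies) under Hardy-Weinberg.
p = np.stack([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2], axis=1)
entropy = -(p * np.log2(p)).sum(axis=1)       # per-SNP information content
panel = np.argsort(entropy)[-panel_size:]     # keep the most informative SNPs

# Simulate genotypes on the panel and measure one pairwise Hamming distance.
geno = rng.binomial(2, maf[panel], size=(n_people, panel_size))
i, j = rng.choice(n_people, 2, replace=False)
print("Hamming distance between two individuals:", int((geno[i] != geno[j]).sum()))
```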
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, with the developed ES model embedded as a constraint, and GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
Sample allocation balancing overall representativeness and stratum precision.
Diaz-Quijano, Fredi Alexander
2018-05-07
In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata, and researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; an equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on overall sample size: as the sample to be distributed increased, the most efficient allocation strategies were, in order, equal samples for each stratum; allocation proportional to the logarithm, the cubic root, and the square root; and allocation proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered for optimizing the sample to preserve both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
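The allocation rules compared in this study are easy to reproduce in outline; the stratum populations and total sample size below are invented for illustration:

```python
# Compare stratum-allocation rules for a preset total sample size:
# proportional, equal, and proportional to log, cube root and square root
# of the stratum population.
import numpy as np

pop = np.array([500_000, 120_000, 60_000, 15_000, 4_000])  # assumed strata
n_total = 2_000

rules = {
    "proportional": pop.astype(float),
    "equal": np.ones_like(pop, dtype=float),
    "log": np.log(pop),
    "cubic_root": pop ** (1 / 3),
    "square_root": np.sqrt(pop),
}
for name, w in rules.items():
    n_h = n_total * w / w.sum()     # stratum sample sizes under this rule
    f_h = n_h / pop                 # stratum sampling fractions
    # Dispersion (variance-to-mean ratio) of the sampling fractions:
    # lower values indicate a more representative overall sample.
    print(f"{name:12s} n_h={np.round(n_h).astype(int)} "
          f"fraction dispersion={f_h.var() / f_h.mean():.4g}")
```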
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. In contrast to the purely acidic extraction used in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Impact of sampling strategy on stream load estimates in till landscape of the Midwest
Vidon, P.; Hubbard, L.E.; Soyeux, E.
2009-01-01
Accurately estimating various solute loads in streams during storms is critical to determining maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so that error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
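A toy illustration of how sampling frequency drives load-estimation error: integrate concentration times discharge at fine resolution for a "true" load, then recompute from subsampled concentrations. The storm shape and concentration-discharge relation are synthetic assumptions, not the study's data:

```python
# Compare storm-load estimates computed from concentration records sampled at
# different frequencies against a fine-resolution "true" load.
import numpy as np

rng = np.random.default_rng(3)
dt = 0.25                                         # hours between true readings
t = np.arange(0, 72, dt)
q = 5 + 40 * np.exp(-0.5 * ((t - 24) / 6) ** 2)   # storm hydrograph (m3/s)
c = 2 + 0.05 * q + rng.normal(0, 0.2, t.size)     # concentration (mg/L)

def load_tonnes(conc):
    # mg/L * m3/s = g/s; multiply by the time step in seconds, convert to t.
    return np.sum(conc * q) * dt * 3600 / 1e6

true_load = load_tonnes(c)
for every_h in (1, 6, 24):
    idx = np.arange(0, t.size, int(every_h / dt))
    c_hat = np.interp(t, t[idx], c[idx])          # interpolate sparse samples
    err = 100 * (load_tonnes(c_hat) - true_load) / true_load
    print(f"sample every {every_h:2d} h: load error = {err:+.1f}%")
```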
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Using Extreme Groups Strategy When Measures Are Not Normally Distributed.
ERIC Educational Resources Information Center
Fowler, Robert L.
1992-01-01
A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)
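A sketch of such a Monte Carlo: simulate a selection variable and a correlated outcome, select extreme groups at different percentages, and estimate power via t-tests. The distributions, correlation, and sample sizes are assumptions for illustration:

```python
# Estimate the power of the extreme-groups strategy for different selection
# percentages, under normal vs. uniform (platykurtic) populations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def power(dist, p, n=200, rho=0.3, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        x = dist(size=n)                              # selection variable
        # Outcome correlated with x at level rho.
        y = rho * (x - x.mean()) / x.std() + rng.normal(0, np.sqrt(1 - rho**2), n)
        k = max(2, int(p * n))                        # extreme-group size
        order = np.argsort(x)
        lo, hi = y[order[:k]], y[order[-k:]]
        if stats.ttest_ind(hi, lo).pvalue < alpha:
            hits += 1
    return hits / reps

for p in (0.10, 0.25, 0.50):
    print(f"p={p:.2f} normal={power(rng.normal, p):.2f} "
          f"uniform={power(lambda size: rng.uniform(-1, 1, size), p):.2f}")
```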
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first approach to financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable, and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of portfolio returns.
Provencher, Véronique; Desrosiers, Johanne; Demers, Louise; Carmichael, Pierre-Hugues
2016-01-01
This study aimed to (1) determine the categories of behavioral coping strategies most strongly correlated with optimal seniors' social participation in different activity and role domains and (2) identify the demographic, health, and environmental factors associated with the use of these coping strategies optimizing social participation. The sample consisted of 350 randomly recruited community-dwelling older adults (≥65 years). Coping strategies and social participation were measured, respectively, using the Inventory of Coping Strategies Used by the Elderly and the Assessment of Life Habits questionnaires. Information about demographic, health, and environmental factors was also collected during the interview. Regression analyses showed a strong relationship between the use of cooking- and transportation-related coping strategies and optimal participation in the domains of nutrition and community life, respectively. Older age and living alone were associated with increased use of cooking-related strategies, while good self-rated health and not living in a seniors' residence were correlated with greater use of transportation-related strategies. Our results support the relevance of including behavioral coping strategies related to cooking and transportation in disability prevention programs designed to promote community-dwelling seniors' social participation in these domains. The appropriateness of these strategies depends on whether they are used in relevant contexts and tailored to specific needs, and the factors above should be considered in order to optimize their implementation in disability prevention programs.
Optimizing Telehealth Strategies for Subspecialty Care: Recommendations from Rural Pediatricians
Demirci, Jill R.; Bogen, Debra L.; Mehrotra, Ateev; Miller, Elizabeth
2015-01-01
Abstract Background: Telehealth offers strategies to improve access to subspecialty care for children in rural communities. Rural pediatrician experiences and preferences regarding the use of these telehealth strategies for children's subspecialty care needs are not known. We elicited rural pediatrician experiences and preferences regarding different pediatric subspecialty telehealth strategies. Materials and Methods: Seventeen semistructured telephone interviews were conducted with rural pediatricians from 17 states within the United States. Interviewees were recruited by e-mails to a pediatric rural health listserv and to rural pediatricians identified through snowball sampling. Themes were identified through thematic analysis of interview transcripts. Institutional Review Board approval was obtained. Results: Rural pediatricians identified several telehealth strategies to improve access to subspecialty care, including physician access hotlines, remote electronic medical record access, electronic messaging systems, live video telemedicine, and telehealth triage systems. Rural pediatricians provided recommendations for optimizing the utility of each of these strategies based on their experiences with different systems. Rural pediatricians preferred specific telehealth strategies for specific clinical contexts, resulting in a proposed framework describing the complementary role of different telehealth strategies for pediatric subspecialty care. Finally, rural pediatricians identified additional benefits associated with the use of telehealth strategies and described a desire for telehealth systems that enhanced (rather than replaced) personal relationships between rural pediatricians and subspecialists. Conclusions: Rural pediatricians described complementary roles for different subspecialty care telehealth strategies. Additionally, rural pediatricians provided recommendations for optimizing individual telehealth strategies. Input from rural pediatricians will be crucial for optimizing specific telehealth strategies and designing effective telehealth systems. PMID:25919585
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an ¹⁸O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the ¹⁸O-reference strategy with a strategy that decouples quantitation from identification was investigated using proportion-known protein mixtures. The results clearly demonstrated that the ¹⁸O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferred comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated by retention time and accurate mass to identify differentially expressed proteins. This strategy made protein identification possible for all samples using a single pooled sample, therefore giving good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.
Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon
2011-09-01
In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a cost with a higher cost per high prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
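The LQAS decision rule can be sketched with a binomial model: sample n children per school and classify the school as high prevalence when positives exceed a threshold d. The values of n, d, and the cutoff below are illustrative, not the survey's actual design parameters:

```python
# Probability that an LQAS rule classifies a school as "high prevalence"
# (more than d positives among n sampled children) as a function of the
# school's true prevalence.
import numpy as np
from scipy.stats import binom

n, d = 15, 3            # sample size and decision threshold (assumed)
cutoff = 0.25           # programmatic high-prevalence cutoff (assumed)

for prev in (0.05, 0.15, 0.25, 0.40):
    p_flag = 1 - binom.cdf(d, n, prev)   # P(more than d positives)
    label = "high" if prev >= cutoff else "low"
    print(f"true prevalence {prev:.2f} ({label:4s}): "
          f"P(classified high) = {p_flag:.2f}")
```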
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) were utilized to eliminate bottlenecking due to extremely low frequencies of desired genotypes in the population. The efficiency indicators such as total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient breeding strategy in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
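The bottleneck that motivates two-generation trait fixation follows from simple Mendelian arithmetic. The sketch below assumes k unlinked heterozygous events in the F1, a simplification of the study's marker-assisted setting:

```python
# Why single-generation fixation bottlenecks: with k unlinked events
# heterozygous in the F1, a given F2 plant is homozygous at all k loci with
# probability (1/4)^k, motivating the two-generation 'F2 enrichment' approach.
import math

k = 7                                  # events stacked in one recurrent parent
p_fix_f2 = 0.25 ** k                   # one F2 plant fixed at all k loci
n_95 = math.ceil(math.log(0.05) / math.log(1 - p_fix_f2))
print(f"P(single F2 fixed for all {k} events) = {p_fix_f2:.2e}")
print(f"F2 plants needed for a 95% chance of at least one success: {n_95}")

# Enrichment: first keep F2 plants carrying every event (homozygous or
# heterozygous), then fix the remaining heterozygous loci in the F3.
p_carrier = 0.75 ** k
print(f"P(F2 carries all {k} events) = {p_carrier:.3f}")
```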
Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.
Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh
2012-02-28
Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.
Trace DNA Sampling Success from Evidence Items Commonly Encountered in Forensic Casework.
Dziak, Renata; Peneder, Amy; Buetter, Alicia; Hageman, Cecilia
2018-05-01
Trace DNA analysis is a significant part of a forensic laboratory's workload. Knowing optimal sampling strategies and item success rates for particular item types can assist in evidence selection and examination processes and shorten turnaround times. In this study, forensic short tandem repeat (STR) casework results were reviewed to determine how often STR profiles suitable for comparison were obtained from "handler" and "wearer" areas of 764 items commonly submitted for examination. One hundred and fifty-five (155) items obtained from volunteers were also sampled. Items were analyzed for best sampling location and strategy. For casework items, headwear and gloves provided the highest success rates. Experimentally, eyeglasses and earphones, T-shirts, fabric gloves and watches provided the highest success rates. Eyeglasses and latex gloves provided optimal results if the entire surfaces were swabbed. In general, at least 10%, and up to 88% of all trace DNA analyses resulted in suitable STR profiles for comparison. © 2017 American Academy of Forensic Sciences.
Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.
2017-01-01
Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
Windsor, Liliane Cambraia; Benoit, Ellen; Smith, Douglas; Pinto, Rogério M; Kugler, Kari C
2018-04-27
Rates of alcohol and illicit drug use (AIDU) are consistently similar across racial groups (Windsor and Negi, J Addict Dis 28:258-68, 2009; Keyes et al. Soc Sci Med 124:132-41, 2015). Yet AIDU has significantly higher consequences for residents of distressed communities with concentrations of African Americans (DCAA, i.e., localities with high rates of poverty and crime), who also have considerably less access to effective treatment of substance use disorders (SUD). This project is optimizing Community Wise, an innovative multi-level behavioral-health intervention created in partnership with service providers and residents of distressed communities with histories of SUD and incarceration, to reduce health inequalities related to AIDU. Grounded in critical consciousness theory, community-based participatory research (CBPR) principles, and the multiphase optimization strategy (MOST), this study employs a 2 × 2 × 2 × 2 factorial design to engineer the most efficient, effective, and scalable version of Community Wise that can be delivered for US$250 per person or less. The study is fully powered to detect change in AIDU in a sample of 528 men with histories of SUD and incarceration, residing in Newark, NJ, in the United States. A community collaborative board oversees recruitment using a variety of strategies, including indigenous field worker sampling, facility-based sampling, community advertisement through fliers, and street outreach. Participants are randomly assigned to one of 16 conditions comprising combinations of the following candidate intervention components: peer or licensed facilitator, group dialogue, personal goal development, and community organizing. All participants receive a core critical-thinking component. Data are collected at baseline plus five post-baseline monthly follow-ups. Once the optimized Community Wise intervention is identified, it will be evaluated against an existing standard of care in a future randomized clinical trial. This paper describes the protocol of the first study to use CBPR and MOST to optimize a substance use intervention targeting a marginalized population. Data from this study will culminate in an optimized Community Wise manual; enhanced methodological strategies for developing multi-component scalable interventions using MOST and CBPR; and a better understanding of the application of critical consciousness theory to the field of health inequalities related to AIDU. ClinicalTrials.gov, NCT02951455. Registered on 1 November 2016.
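The 2 × 2 × 2 × 2 design enumerates 16 conditions over the four candidate components named in the protocol; a sketch of the enumeration (factor labels abbreviated):

```python
# Enumerate the 16 conditions of the 2x2x2x2 factorial design; every condition
# also includes the core critical-thinking component.
from itertools import product

factors = {
    "facilitator": ("peer", "licensed"),
    "group_dialogue": ("off", "on"),
    "personal_goals": ("off", "on"),
    "community_organizing": ("off", "on"),
}
for cond, levels in enumerate(product(*factors.values()), start=1):
    desc = ", ".join(f"{name}={lv}" for name, lv in zip(factors, levels))
    print(f"condition {cond:2d}: {desc} (+ core critical thinking)")
```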
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Optimal social-networking strategy is a function of socioeconomic conditions.
Oishi, Shigehiro; Kesebir, Selin
2012-12-01
In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.
A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests
SHARON A. CANTRELL
2004-01-01
Most of the fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at sites in the Caribbean National Forest, Puerto Rico, and the Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 …
Riedel, Natalie; Müller, Andreas; Ebener, Melanie
2015-05-01
To investigate whether aging employees' selection, optimization, and compensation (SOC) strategies were associated with work ability over and above job demand and control variables, as well as across professions, multivariable linear regressions were conducted using a representative sample of German employees born in 1959 and 1965 (N = 6057). SOC was found to have an independent effect on work ability, although the associations of job demands and control variables with work ability were more prominent. SOC tended to enhance the positive association between decision authority and work ability. Individual strategies of selection, optimization, and compensation can be considered psychosocial resources that add up to better work ability and complement prevention programs. Workplace interventions should address job demands and control to maintain older employees' work ability in times of working-population shrinkage.
Optimism and the experience of pain: benefits of seeing the glass as half full
Goodin, Burel R.; Bulls, Hailey W.
2014-01-01
There is a strong body of literature that lends support to the health-promoting effects of an optimistic personality disposition, observed across various physical and psychological dimensions. In accordance with this evidence base, it has been suggested that optimism may positively influence the course and experience of pain. Although the associations among optimism and pain outcomes have only recently begun to be adequately studied, emerging experimental and clinical research links optimism to lower pain sensitivity and better adjustment to chronic pain. This review highlights recent studies that have examined the effects of optimism on the pain experience using samples of individuals with clinically painful conditions as well as healthy samples in laboratory settings. Furthermore, factors such as catastrophizing, hope, acceptance and coping strategies, which are thought to play a role in how optimism exerts its beneficial effects on pain, are also addressed. PMID:23519832
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
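The allocation result can be checked with a small simulation: under a fixed time budget, splitting sampling time in proportion to the sources' noise levels maximizes choice accuracy. The noise levels, budget, and effect size below are assumptions:

```python
# Monte Carlo check: accuracy of choosing the truly better of two noisy
# sources, as a function of how a fixed time budget T is split between them.
import numpy as np

rng = np.random.default_rng(5)
sigma1, sigma2, T, reps = 1.0, 2.0, 100, 20000
delta = 0.2                              # true advantage of option 1

for share in (0.10, sigma1 / (sigma1 + sigma2), 0.50, 0.90):
    t1, t2 = share * T, (1 - share) * T
    # The mean of t time-units of sampling has standard error sigma/sqrt(t).
    est1 = rng.normal(delta, sigma1 / np.sqrt(t1), reps)
    est2 = rng.normal(0.0, sigma2 / np.sqrt(t2), reps)
    acc = np.mean(est1 > est2)           # P(correctly judging option 1 larger)
    print(f"time share to source 1 = {share:.2f}: accuracy = {acc:.3f}")
```

The share sigma1/(sigma1 + sigma2) corresponds to the noise-proportional allocation derived in the paper and should come out on top.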
NASA Astrophysics Data System (ADS)
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, which are low-cost and timely but of moderate-to-low spatial resolution, were first used over the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a regionalized indicator parameter, the fraction (percentage) of winter wheat planting area in each pixel, which served as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, yielding scale-appropriate, valid a priori knowledge for sampling design. Building on this prior information and on the idea of rationally integrating probability-based and model-based sampling techniques, spatial sampling models and design schemes were developed, optimized, and selected, providing a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Finally, an adaptive analysis and decision strategy combined optimal local spatial prediction with a gridded extrapolation system to implement an adaptive reporting pattern of spatial sampling in accordance with reporting units, satisfying the practical needs of sampling surveys.
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Dorwaldt, Anne L.; Connolly, Scott W.; Ashikaga, Takamaru
2007-01-01
Mass media interventions are among the strategies recommended for youth cigarette smoking prevention, but little is known about optimal methods for reaching diverse youth audiences. Grades 4 through 12 samples of youth from four states (n = 1,230) rated smoking-prevention messages in classroom settings. Similar proportions of African American,…
Optimal trading strategies—a time series approach
NASA Astrophysics Data System (ADS)
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
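A stripped-down sketch of the time-domain formulation: estimate a Toeplitz auto-covariance matrix C of price increments and solve the minimum-variance problem min w'Cw subject to a unit total position, used here as a stand-in for the paper's return constraint. The price process is synthetic:

```python
# Time-domain minimum-variance weights from an estimated auto-covariance
# matrix of price increments (second-order stationary increments assumed).
import numpy as np

rng = np.random.default_rng(6)
increments = rng.normal(0, 1, 5000)          # synthetic stationary increments

N = 20                                        # trading-horizon length
x = increments - increments.mean()
acov = np.array([np.mean(x[:x.size - k] * x[k:]) for k in range(N)])
# Toeplitz auto-covariance matrix: C[i, j] = acov[|i - j|].
C = acov[np.abs(np.subtract.outer(np.arange(N), np.arange(N)))]

# Minimize w'Cw subject to sum(w) = 1  ->  w = C^{-1} 1 / (1' C^{-1} 1).
ones = np.ones(N)
w = np.linalg.solve(C, ones)
w /= ones @ w
print("optimal weights:", np.round(w, 3))
print("strategy variance:", float(w @ C @ w))
```

For white-noise increments the weights come out nearly uniform; structure in the auto-covariance (and estimation noise from small samples, which the paper addresses with matrix cleaning) tilts them away from uniformity.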
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Control strategies for planetary rover motion and manipulator control
NASA Technical Reports Server (NTRS)
Trautwein, W.
1973-01-01
An unusual insect-like vehicle designed for planetary surface exploration is made the occasion for a discussion of control concepts in path selection, hazard detection, obstacle negotiation, and soil sampling. A control scheme which actively articulates the pitching motion between a single-loop front module and a dual loop rear module leads to near optimal behavior in soft soil; at the same time the vehicle's front module acts as a reliable tactile forward probe with a detection range much longer than the stopping distance. Some optimal control strategies are discussed, and the photos of a working scale model are displayed.
NASA Astrophysics Data System (ADS)
Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui
2017-04-01
The identification of representative soil moisture sampling sites is important for validating remotely sensed mean soil moisture over an area and for ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (43 sampling days in total) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of the Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs, and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and the standard deviation of relative difference (SDRD)) and on the smallest SDRD; K-means clustering based on soil properties and terrain indices (EFs), on repeated soil moisture measurements (Theta), on EFs plus one-time soil moisture data (EFsTheta), and on the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA); and global and stratified random sampling strategies. Results showed that the TS analysis based on the smallest ITS was better (RMSE = 0.023 m³ m⁻³) than that based on the smallest SDRD (RMSE = 0.034 m³ m⁻³). The K-means clustering based on EFsTheta (-PCA) was better (RMSE <0.020 m³ m⁻³) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method: forty and 60 sampling sites are needed for stratified and global sampling, respectively, to make their performance comparable to the best K-means method (EFsTheta-PCA). Overall, TS required only one site, but its accuracy was limited. The best K-means method required <8 sites and yielded high accuracy, but extra soil and terrain information is necessary when using this method. The stratified sampling strategy can be used only if no prior knowledge about soil moisture variation is available. This information will help in selecting the optimal method for estimating the areal mean soil moisture.
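The TS ranking can be sketched directly from a site-by-time soil moisture matrix. The ITS form below (the root of squared MRD plus squared SDRD) is one common combination and an assumption on my part, as are the synthetic data:

```python
# Temporal-stability site ranking: for each site, compute the mean relative
# difference (MRD) from the spatial mean and its standard deviation (SDRD),
# then rank sites by ITS = sqrt(MRD^2 + SDRD^2).
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_days = 77, 43                      # dimensions match the study
theta = np.clip(rng.normal(0.30, 0.05, (n_days, n_sites)), 0.05, 0.50)

spatial_mean = theta.mean(axis=1, keepdims=True)
rel_diff = (theta - spatial_mean) / spatial_mean
mrd = rel_diff.mean(axis=0)
sdrd = rel_diff.std(axis=0, ddof=1)
its = np.sqrt(mrd ** 2 + sdrd ** 2)           # one common ITS form (assumed)

best = int(np.argmin(its))
print(f"most time-stable site: #{best}, MRD={mrd[best]:+.3f}, "
      f"SDRD={sdrd[best]:.3f}")
```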
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provide the minimum and optimal sampling sizes needed to take ITV_WI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and gives the dominations for invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
Optimization Strategies for Sensor and Actuator Placement
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Kincaid, Rex K.
1999-01-01
This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
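Combinatorial placement problems of this kind are often approached greedily; the sketch below maximizes marginal coverage of target points, with invented geometry and coverage radius (not one of the NASA Langley applications):

```python
# Greedy sensor placement: repeatedly pick the candidate location that covers
# the most still-uncovered target points within an assumed sensing radius.
import numpy as np

rng = np.random.default_rng(8)
targets = rng.uniform(0, 10, (200, 2))       # points that must be observed
candidates = rng.uniform(0, 10, (50, 2))     # candidate sensor locations
radius, n_sensors = 2.0, 5                   # assumed coverage model

# covers[i, j] is True when candidate i covers target j.
d = np.linalg.norm(candidates[:, None, :] - targets[None, :, :], axis=2)
covers = d <= radius

covered = np.zeros(len(targets), dtype=bool)
chosen = []
for _ in range(n_sensors):
    gains = (covers & ~covered).sum(axis=1)  # marginal coverage per candidate
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= covers[best]
print("chosen sensor sites:", chosen, "| fraction covered:", covered.mean())
```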
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
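The sample-usage experiment can be sketched by sweeping the training fraction and a leaf-count limit (a stand-in for the model's rule-number constraint) and tracking MAD on training and withheld data. The data below are synthetic, not the Landsat/MODIS predictors used in the study:

```python
# Sweep training fraction and tree complexity; compare mean absolute
# difference (MAD) on training vs. withheld data to expose over/underfitting.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.uniform(0, 1, (3000, 6))                      # stand-in predictors
y = 100 + 60 * X[:, 0] - 40 * X[:, 1] ** 2 + rng.normal(0, 3, 3000)

for frac in (0.5, 0.8):
    for leaves in (2, 6, 50):                         # complexity limits
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=frac, random_state=0)
        tree = DecisionTreeRegressor(max_leaf_nodes=leaves,
                                     random_state=0).fit(X_tr, y_tr)
        mad_tr = np.abs(tree.predict(X_tr) - y_tr).mean()
        mad_te = np.abs(tree.predict(X_te) - y_te).mean()
        print(f"train={frac:.0%} leaves={leaves:2d} "
              f"MAD(train)={mad_tr:.2f} MAD(test)={mad_te:.2f}")
```

A widening gap between training and test MAD at high leaf counts mirrors the overfitting effect the authors guard against, while very small trees underfit both sets.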
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated. PMID:28334046
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy to alleviate the redundancies in training samples caused by the cyclical shifts and to eliminate the inconsistencies between training samples and detection samples, and introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed to obtain robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876
Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism
NASA Astrophysics Data System (ADS)
Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad
2018-01-01
Scanning electron microscopes are extensively used for accurate micro/nano-scale imaging. Several strategies have been proposed to fine tune those microscopes in the past few years. This work presents a new fine tuning strategy for a scanning electron microscope sample table using four-bar piezoelectric actuated mechanisms. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism. In addition, another algorithm is presented to search for the optimal inverse kinematic solution. Both algorithms are used simultaneously in a simulation study to fine tune a scanning electron microscope sample table through a pre-specified circular or linear path of motion. Results of the study show that the proposed algorithms minimized the power required to drive the piezoelectric actuated mechanism by 97.5% for all simulated paths of motion when compared to a general non-optimized solution.
Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A
2013-05-01
Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in Cmax (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
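A minimal simulation of the basic two-compartment disposition model with first-order absorption is sketched below; the enterohepatic-recycling extension is omitted, the lag-time is implemented simply by starting the integration at tlag, and all parameter values are illustrative rather than the fitted estimates.

```python
# Minimal sketch of a 2-CMT model with first-order absorption and lag-time.
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, V1, Q, V2, tlag = 1.5, 20.0, 50.0, 10.0, 80.0, 0.25  # hypothetical values
dose = 75.0                                   # mg, single oral dose

def rhs(t, y):
    a_gut, a_c, a_p = y                       # amounts in gut, central, peripheral
    dgut = -ka * a_gut
    dc = ka * a_gut - (CL / V1) * a_c - (Q / V1) * a_c + (Q / V2) * a_p
    dp = (Q / V1) * a_c - (Q / V2) * a_p
    return [dgut, dc, dp]

t_eval = np.linspace(tlag, 24, 200)
sol = solve_ivp(rhs, (tlag, 24), [dose, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)
conc = sol.y[1] / V1                          # central concentration, mg/L
auc = np.trapz(conc, sol.t)                   # AUC by the trapezoidal rule
print(f"Cmax = {conc.max():.3f} mg/L, AUC = {auc:.2f} mg*h/L")
```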
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions differ from the somewhat arbitrary design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multi-point methods.
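The core idea, replacing a handful of discrete design conditions with an expectation weighted by the likelihood of each operating condition, can be illustrated with a toy one-variable problem. In the sketch below the drag function, the Mach distribution and all numbers are invented for illustration.

```python
# Multi-point average vs. likelihood-weighted expected objective (toy example).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def drag(x, m):
    # toy model: the best design depends on the operating condition m
    return (x - m) ** 2 + 0.05 * np.sin(40 * (x - m))

design_points = [0.70, 0.80, 0.85]                  # discrete design conditions
multi_point = minimize_scalar(
    lambda x: np.mean([drag(x, m) for m in design_points]),
    bounds=(0.5, 1.0), method="bounded")

# statistical approach: weight conditions by their relative likelihood
ms, w = np.polynomial.legendre.leggauss(40)         # quadrature nodes/weights
ms = 0.75 + 0.1 * ms                                # map to a plausible Mach range
pdf = norm.pdf(ms, loc=0.75, scale=0.03)            # assumed condition distribution
expected = minimize_scalar(
    lambda x: np.sum(w * pdf * drag(x, ms)) / np.sum(w * pdf),
    bounds=(0.5, 1.0), method="bounded")

print("multi-point optimum:   ", round(multi_point.x, 4))
print("expected-value optimum:", round(expected.x, 4))
```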
Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.
Newberg, Lee A
2008-08-15
A backtrace through a dynamic programming algorithm's intermediate results, in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for the optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
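The checkpoint-and-recompute idea can be illustrated on global alignment (edit distance). The sketch below stores every step-th row of the dynamic programming matrix and recomputes each block during the backtrace; the fixed square-root checkpoint spacing is a simplification of the paper's optimal placement.

```python
# Checkpointed backtrace for edit distance: store few rows, recompute blocks.
import math

def dp_row(prev, a_char, b):
    row = [prev[0] + 1]
    for j, b_char in enumerate(b, 1):
        row.append(min(prev[j] + 1, row[j - 1] + 1,
                       prev[j - 1] + (a_char != b_char)))
    return row

def checkpointed_backtrace(a, b):
    step = max(1, int(math.isqrt(len(a))))
    checkpoints = {0: list(range(len(b) + 1))}
    row = checkpoints[0]
    for i, ch in enumerate(a, 1):              # forward pass, keep sparse rows
        row = dp_row(row, ch, b)
        if i % step == 0 or i == len(a):
            checkpoints[i] = row
    path, i, j = [], len(a), len(b)
    while i > 0 or j > 0:
        base = max(k for k in checkpoints if k < i) if i > 0 else 0
        rows = [checkpoints[base]]
        for r in range(base + 1, i + 1):       # recompute one block on demand
            rows.append(dp_row(rows[-1], a[r - 1], b))
        while i > base:
            cur, prev = rows[i - base], rows[i - base - 1]
            if j > 0 and cur[j] == cur[j - 1] + 1:
                path.append(("ins", b[j - 1])); j -= 1
            elif cur[j] == prev[j] + 1:
                path.append(("del", a[i - 1])); i -= 1
            else:
                path.append(("sub" if a[i - 1] != b[j - 1] else "match", a[i - 1]))
                i -= 1; j -= 1
        while i == 0 and j > 0:                # leftover leading insertions
            path.append(("ins", b[j - 1])); j -= 1
    return path[::-1]

print(checkpointed_backtrace("kitten", "sitting"))
```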
NASA Astrophysics Data System (ADS)
Enzenhöfer, R.; Geiges, A.; Nowak, W.
2011-12-01
Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice of sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill-delineated fractions of protection zones. Within an illustrative, simplified 2D synthetic test case, we demonstrate our concept, involving synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduced areal demand of delineated area at a user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be made at low to moderate additional costs compared to conventional delineation strategies.
Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael
2014-01-01
The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10^6 copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
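The sketch below illustrates the general shape of mismatch-tolerant dual-index demultiplexing; the published Error-Aware Demultiplexer is probability-based, whereas this toy version reduces the idea to Hamming distances with an ambiguity margin, and all barcodes are invented.

```python
# Toy dual-index demultiplexer: tolerate mismatches, reject ambiguous reads.
SAMPLES = {
    ("ACGTAC", "TTGCAA"): "sample_1",
    ("GGATCC", "CAACGT"): "sample_2",
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign(i7, i5, max_mm=1, margin=2):
    scored = sorted(
        (hamming(i7, b1) + hamming(i5, b2), name)
        for (b1, b2), name in SAMPLES.items())
    best = scored[0]
    runner = scored[1] if len(scored) > 1 else (99, None)
    # accept only confident, unambiguous assignments
    if best[0] <= max_mm and runner[0] - best[0] >= margin:
        return best[1]
    return "undetermined"

print(assign("ACGTAC", "TTGCAA"))   # exact match      -> sample_1
print(assign("ACGTAA", "TTGCAA"))   # one mismatch     -> sample_1
print(assign("ACATAC", "CTGCAA"))   # too many errors  -> undetermined
```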
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies
Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara
2017-01-01
In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study's sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20-40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting for sequencing only a small subset of the individuals. PMID:27256766
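A minimal sketch of the simulated-annealing subset search is given below; the per-individual informativeness scores summed here are a hypothetical surrogate for G-STRATEGY's pedigree-based expected power.

```python
# Simulated annealing over fixed-size subsets via single-element swaps.
import math, random

random.seed(1)
n, k = 200, 40                                  # cohort size, sequencing budget
score = [random.random() for _ in range(n)]     # hypothetical informativeness

def power(subset):
    return sum(score[i] for i in subset)        # surrogate for expected power

current = set(random.sample(range(n), k))
best, best_val = set(current), power(current)
T = 1.0
for step in range(20000):
    T *= 0.9995                                 # geometric cooling schedule
    out = random.choice(sorted(current))
    inn = random.choice([i for i in range(n) if i not in current])
    cand = (current - {out}) | {inn}            # swap one individual
    delta = power(cand) - power(current)
    if delta > 0 or random.random() < math.exp(delta / T):
        current = cand
        if power(current) > best_val:
            best, best_val = set(current), power(current)
print("best surrogate power:", round(best_val, 3))
```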
Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach
Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto
2015-01-01
This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that SoDDS, which is currently used at NATO STO Centre for Maritime Research and Experimentation (CMRE), can represent a step forward towards a systematic mission planning of glider fleets, dramatically reducing the efforts of glider operators. PMID:26712763
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
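A design-of-experiments style parameter scan can be prototyped as below; detect_peaks is a hypothetical stand-in for scoring an MZmine 2 run against the spiked target list, and a small full-factorial grid replaces the formal DoE models used in the paper.

```python
# Full-factorial scan of peak-detection parameters against spiked targets.
from itertools import product

def detect_peaks(noise_level, min_height):
    # placeholder scoring function: pretend moderate settings recover the
    # largest number of the 78 spiked target peaks
    return 78 - int(abs(noise_level - 1e4) / 5e3) - int(abs(min_height - 3e4) / 2e4)

grid = {
    "noise_level": [5e3, 1e4, 5e4],
    "min_height": [1e4, 3e4, 1e5],
}
runs = [dict(zip(grid, values)) for values in product(*grid.values())]
scored = [(detect_peaks(**run), run) for run in runs]
best_score, best_run = max(scored, key=lambda s: s[0])
print(f"{best_score}/78 target peaks found with {best_run}")
```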
NASA Astrophysics Data System (ADS)
Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua
2017-12-01
In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between the SWC and soil/terrain properties were calculated to quantify the accuracy and uncertainty. The results showed that the uncertainty of the estimations decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Compared with all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure that the estimated correlation coefficients had REs and CVs ≤10%. This suggests that when designing the SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. The findings of this study will be useful for optimal SWC sampling design.
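The subsampling analysis translates directly into code. The sketch below estimates the hillslope mean from random site subsets and reports RE and CV over 3000 replicates, using synthetic SWC values in place of the measured ones.

```python
# Replicated random subsampling: accuracy (RE) and uncertainty (CV) vs. n.
import numpy as np

rng = np.random.default_rng(42)
swc = rng.normal(30, 5, size=77)          # synthetic SWC at all monitored sites
true_mean = swc.mean()

for n_sites in (6, 12, 24, 48, 72):
    means = np.array([
        swc[rng.choice(swc.size, n_sites, replace=False)].mean()
        for _ in range(3000)])
    re = np.abs(means - true_mean) / true_mean * 100   # relative error, %
    cv = means.std() / means.mean() * 100              # coefficient of variation, %
    print(f"n={n_sites:3d}  mean RE={re.mean():4.1f}%  CV={cv:4.1f}%")
```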
Simulator for multilevel optimization research
NASA Technical Reports Server (NTRS)
Padula, S. L.; Young, K. C.
1986-01-01
A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.
Wlodarczyk, Dorota
2017-03-01
This study explored the factors intervening in the links of optimism and hope with subjective health in the short term after myocardial infarction. A two-wave study design was used. The sample consisted of 222 myocardial infarction survivors. In the cross-sectional analysis, optimism and hope predicted subjective health at Time 1 and Time 2. After controlling for baseline subjective health, they were no longer significant predictors of subjective health at Time 2. Parallel indirect effects of seeking social support and problem solving were significant for both optimism and hope. After controlling for the shared variance between optimism and hope, these effects remained significant only for optimism.
Mariño-Repizo, Leonardo; Goicoechea, Hector; Raba, Julio; Cerutti, Soledad
2018-06-07
A novel, simple, easy and cheap sample treatment strategy based on salting-out assisted liquid-liquid extraction (SALLE) for ochratoxin A (OTA) ultra-trace analysis in beer samples using ultra-high performance liquid chromatography-tandem mass spectrometry determination was developed. The factors involved in the efficiency of pretreatment were studied employing factorial design in the screening phase, and the optimal conditions of the significant variables on the analytical response were evaluated using a central composite face-centred design (CCF). Consequently, the amount of salt ((NH4)2SO4), together with the volumes of sample, hydrophilic (acetone) and nonpolar (toluene) solvents, and the times of vortexing and centrifugation were optimized. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 0.02 µg/L and 0.08 µg/L, respectively. OTA extraction recovery by SALLE was approximately 90% (0.2 µg/L). Furthermore, the methodology was in agreement with EU Directive requirements and was successfully applied for the analysis of beer samples.
Bowler, Michael G; Bowler, Matthew W
2014-01-01
The advent of micro-focused X-ray beams has led to the development of a number of advanced methods of sample evaluation and data collection. In particular, multiple-position data-collection and helical oscillation strategies are now becoming commonplace in order to alleviate the problems associated with radiation damage. However, intra-crystal and inter-crystal variation means that it is not always obvious on which crystals or on which region or regions of a crystal these protocols should be performed. For the automation of this process for large-scale screening, and to provide an indication of the best strategy for data collection, a metric of crystal variability could be useful. Here, measures of the intrinsic variability within protein crystals are presented and their implications for optimal data-collection strategies are discussed.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
Adineh, Vahid R; Marceau, Ross K W; Chen, Yu; Si, Kae J; Velkov, Tony; Cheng, Wenlong; Li, Jian; Fu, Jing
2017-10-01
We present a novel approach for the analysis of low-conductivity and insulating materials with conventional pulsed-voltage atom probe tomography (APT), by incorporating an ultrathin metallic coating on focused ion beam prepared needle-shaped specimens. Finite element electrostatic simulations of coated atom probe specimens were performed, which suggest remarkable improvement in uniform voltage distribution and subsequent field evaporation of the insulated samples with a metallic coating of approximately 10 nm thickness. Using a design of experiments technique, an experimental investigation was performed to study physical vapor deposition coating of needle specimens with end tip radii less than 100 nm. The final geometries of the coated APT specimens were characterized with high-resolution scanning electron microscopy and transmission electron microscopy, and an empirical model was proposed to determine the optimal coating thickness for a given specimen size. The optimal coating strategy was applied to APT specimens of resin-embedded Au nanospheres. Results demonstrate that the optimal coating strategy allows unique pulsed-voltage atom probe analysis and 3D imaging of biological and insulated samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.
2017-03-01
Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. Then, an ex vivo study was conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area based on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and be extensively validated.
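Tissue damage in such simulations is commonly modeled with the cumulative equivalent minutes at 43 degC (CEM43) thermal dose. The sketch below applies the standard formula to a made-up temperature history; the 240 equivalent-minute lesion threshold is the commonly used value and not necessarily the one adopted in this study.

```python
# Standard CEM43 thermal dose: R = 0.5 above 43 degC, 0.25 below.
import numpy as np

def cem43(temps_c, dt_s):
    R = np.where(temps_c >= 43.0, 0.5, 0.25)
    # equivalent minutes at 43 degC accumulated over each time step
    return np.sum(R ** (43.0 - temps_c) * dt_s / 60.0)

t = np.linspace(0, 8, 801)                             # 8 s pulse, 10 ms steps
temps = 37.0 + 23.0 * np.exp(-((t - 4.0) / 2.0) ** 2)  # toy heating/cooling curve
dose = cem43(temps, dt_s=t[1] - t[0])
print(f"thermal dose = {dose:.0f} CEM43; lesion formed: {dose >= 240}")
```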
Multi-Objective Design Of Optimal Greenhouse Gas Observation Networks
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Bergmann, D. J.; Cameron-Smith, P. J.; Gard, E.; Guilderson, T. P.; Rotman, D.; Stolaroff, J. K.
2010-12-01
One of the primary scientific functions of a Greenhouse Gas Information System (GHGIS) is to infer GHG source emission rates and their uncertainties by combining measurements from an observational network with atmospheric transport modeling. Certain features of the observational networks that serve as inputs to a GHGIS (for example, sampling location and frequency) can greatly impact the accuracy of the retrieved GHG emissions. Observation System Simulation Experiments (OSSEs) provide a framework to characterize emission uncertainties associated with a given network configuration. By minimizing these uncertainties, OSSEs can be used to determine optimal sampling strategies. Designing a real-world GHGIS observing network, however, will involve multiple, conflicting objectives; there will be trade-offs between sampling density, coverage and measurement costs. To address these issues, we have added multi-objective optimization capabilities to OSSEs. We demonstrate these capabilities by quantifying the trade-offs between retrieval error and measurement costs for a prototype GHGIS, and deriving GHG observing networks that are Pareto optimal. [LLNL-ABS-452333: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.]
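Extracting the Pareto-optimal networks from a set of OSSE evaluations is straightforward; the sketch below sweeps candidate designs by cost and keeps those that improve retrieval error, with all numbers invented.

```python
# Pareto front over (measurement cost, retrieval error) for candidate networks.
def pareto_front(designs):
    front = []
    for d in sorted(designs, key=lambda d: (d["cost"], d["error"])):
        # keep a design only if no cheaper design already achieves lower error
        if not front or d["error"] < front[-1]["error"]:
            front.append(d)
    return front

designs = [
    {"name": "dense",    "cost": 9.0, "error": 0.05},
    {"name": "moderate", "cost": 4.0, "error": 0.11},
    {"name": "sparse",   "cost": 1.5, "error": 0.30},
    {"name": "bad",      "cost": 5.0, "error": 0.28},   # dominated by "moderate"
]
print([d["name"] for d in pareto_front(designs)])       # sparse, moderate, dense
```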
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
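The maxi-min structure of the design problem can be illustrated with a toy model, as sketched below; a simple visibility-based detectability stands in for the reconstruction-based d' computation, and SciPy's differential_evolution replaces the CMA-ES optimizer used in the paper.

```python
# Maxi-min fluence design: maximize the minimum detectability over locations.
import numpy as np
from scipy.optimize import differential_evolution

views = np.linspace(0, np.pi, 60)
locations = np.linspace(-1, 1, 15)                 # sample locations in the volume
atten = 1.0 + 0.8 * np.cos(views) ** 2             # toy view-dependent attenuation

def fluence(coeffs):
    # FFM parameterized by Gaussian basis functions over view angle
    centers = np.linspace(0, np.pi, len(coeffs))
    return sum(c * np.exp(-((views - m) / 0.4) ** 2)
               for c, m in zip(coeffs, centers)) + 0.1

def min_detectability(coeffs):
    f = fluence(coeffs)
    # toy d': each location is seen best by some views, scaled by transmitted fluence
    vis = np.exp(-np.abs(locations[:, None] - np.cos(views)[None, :]))
    d = np.sqrt(vis @ (f / atten))
    return d.min()

res = differential_evolution(lambda c: -min_detectability(c),
                             bounds=[(0.0, 2.0)] * 5, seed=0, maxiter=60)
print("optimized minimum d':", round(-res.fun, 3))
```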
Grodowska, Katarzyna; Parczewski, Andrzej
2013-01-01
The purpose of the present work was to find the optimum conditions for headspace gas chromatography (HS-GC) determination of residual solvents which usually appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination. Group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were taken into consideration in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion and was considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in various ways simultaneously. The optimum HS conditions appeared different for the two groups of solvents tested, which shows that the influence of the experimental conditions (factors) depends on analyte properties. The optimization resulted in a significant signal increase (from seven to fifteen times).
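The sequential simplex search can be emulated with a standard Nelder-Mead optimizer, as in the sketch below; the response surface standing in for the measured relative GC peak area is invented, and SciPy's implementation replaces the paper's variable-size simplex.

```python
# Simplex optimization of the four headspace factors (toy response surface).
import numpy as np
from scipy.optimize import minimize

def rel_peak_area(x):
    vol_ml, t_min, temp_c, nacl_gl = x
    # toy response: each factor has an optimum; interactions are ignored
    return (np.exp(-((vol_ml - 2.0) / 1.5) ** 2)
            * np.exp(-((t_min - 30.0) / 15.0) ** 2)
            * np.exp(-((temp_c - 85.0) / 20.0) ** 2)
            * (1.0 + 0.2 * np.tanh((nacl_gl - 100.0) / 50.0)))

x0 = [1.0, 15.0, 60.0, 50.0]   # initial sample volume, time, temperature, NaCl
res = minimize(lambda x: -rel_peak_area(x), x0, method="Nelder-Mead")
print("optimum (mL, min, degC, g/L):", np.round(res.x, 1))
```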
The relation between cognitive and metacognitive strategic processing during a science simulation.
Dinsmore, Daniel L; Zoellner, Brian P
2018-03-01
This investigation was designed to uncover the relations between students' cognitive and metacognitive strategies used during a complex climate simulation. While cognitive strategy use during science inquiry has been studied, the factors related to this strategy use, such as concurrent metacognition, prior knowledge, and prior interest, have not been investigated in a multidimensional fashion. This study addressed current issues in strategy research by examining not only how metacognitive, surface-level, and deep-level strategies influence performance, but also how these strategies related to each other during a contextually relevant science simulation. The sample for this study consisted of 70 undergraduates from a mid-sized Southeastern university in the United States. These participants were recruited from both physical and life science (e.g., biology) and education majors to obtain a sample with variance in terms of their prior knowledge, interest, and strategy use. Participants completed measures of prior knowledge and interest about global climate change. Then, they were asked to engage in an online climate simulator for up to 30 min while thinking aloud. Finally, participants were asked to answer three outcome questions about global climate change. Results indicated a poor fit for the statistical model of the frequency and level of processing predicting performance. However, a statistical model that independently examined the influence of metacognitive monitoring and control of cognitive strategies showed a very strong relation between the metacognitive and cognitive strategies. Finally, smallest space analysis results provided evidence that strategy use may be better captured in a multidimensional fashion, particularly with attention paid towards the combination of strategies employed. Conclusions drawn from the evidence point to the need for more dynamic, multidimensional models of strategic processing that account for the patterns of optimal and non-optimal strategy use. Additionally, analyses that can capture these complex patterns need to be further explored. © 2017 The British Psychological Society.
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and describe precision and resolving power in biochemical imaging thanks to the analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon-budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
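The kind of Fisher-information calculation underlying this analysis is shown below for the simplest case, estimating a single fluorescence lifetime from time-gated photon counts; the gate layout and photon budget are illustrative.

```python
# Fisher information and Cramer-Rao bound for lifetime estimation from
# time-binned photon counts of a mono-exponential decay.
import numpy as np

def bin_probs(tau, edges):
    # probability that a detected photon falls in each time gate
    p = np.exp(-edges[:-1] / tau) - np.exp(-edges[1:] / tau)
    return p / p.sum()                  # renormalize over the detection window

def fisher_info(tau, edges, n_photons, dtau=1e-4):
    p = bin_probs(tau, edges)
    dp = (bin_probs(tau + dtau, edges) - bin_probs(tau - dtau, edges)) / (2 * dtau)
    return n_photons * np.sum(dp ** 2 / p)

edges = np.linspace(0.0, 10.0, 17)      # 16 time gates over a 10 ns window
tau, N = 2.5, 10_000                    # lifetime (ns) and photon budget
F = fisher_info(tau, edges, N)
print(f"Cramer-Rao bound on lifetime precision: {1 / np.sqrt(F):.4f} ns")
```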
2016-09-01
...modifications to both genetic algorithms and evolution strategies to achieve these goals. The results of this research offer a promising new set of modified genetic algorithms and evolution strategies for optimal control and constrained trajectory optimization problems.
NASA Astrophysics Data System (ADS)
Frolov, Sergey; Garau, Bartolame; Bellingham, James
2014-08-01
Regular grid ("lawnmower") survey is a classical strategy for synoptic sampling of the ocean. Is it possible to achieve a more effective use of available resources if one takes into account a priori knowledge about variability in magnitudes of uncertainty and decorrelation scales? In this article, we develop and compare the performance of several path-planning algorithms: optimized "lawnmower," a graph-search algorithm (A*), and a fully nonlinear genetic algorithm. We use the machinery of the best linear unbiased estimator (BLUE) to quantify the ability of a vehicle fleet to synoptically map distribution of phytoplankton off the central California coast. We used satellite and in situ data to specify covariance information required by the BLUE estimator. Computational experiments showed that two types of sampling strategies are possible: a suboptimal space-filling design (produced by the "lawnmower" and the A* algorithms) and an optimal uncertainty-aware design (produced by the genetic algorithm). Unlike the space-filling designs that attempted to cover the entire survey area, the optimal design focused on revisiting areas of high uncertainty. Results of the multivehicle experiments showed that fleet performance predictors, such as cumulative speed or the weight of the fleet, predicted the performance of a homogeneous fleet well; however, these were poor predictors for comparing the performance of different platforms.
New Mathematical Strategy Using Branch and Bound Method
NASA Astrophysics Data System (ADS)
Tarray, Tanveer Ahmad; Bhat, Muzafar Rasool
In this paper, the problem of optimal allocation in stratified random sampling is considered in the presence of nonresponse. The problem is formulated as a nonlinear programming problem (NLPP) and solved using the branch and bound method. The results are also obtained using LINGO.
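A minimal branch and bound for integer allocation is sketched below; it minimizes the usual stratified variance sum(W_h^2 S_h^2 / n_h) under a total sample-size budget, with invented stratum weights and standard deviations and a simple optimistic bound (the nonresponse terms of the paper's NLPP are omitted).

```python
# Branch and bound over integer per-stratum sample sizes.
W = [0.4, 0.35, 0.25]          # stratum weights (hypothetical)
S = [6.0, 10.0, 18.0]          # stratum standard deviations (hypothetical)
LO, HI, BUDGET = 2, 40, 60     # per-stratum bounds and total sample size

best = {"val": float("inf"), "alloc": None}

def variance(alloc):
    return sum(w * w * s * s / n for w, s, n in zip(W, S, alloc))

def bound(partial, remaining_budget):
    # optimistic completion: every remaining stratum sampled as heavily as
    # possible (ignores that they share the budget, so it never overestimates)
    trial = partial + [min(HI, remaining_budget)] * (len(W) - len(partial))
    return variance(trial)

def search(partial, remaining_budget):
    h = len(partial)
    if h == len(W):
        v = variance(partial)
        if v < best["val"]:
            best["val"], best["alloc"] = v, list(partial)
        return
    if bound(partial, remaining_budget) >= best["val"]:
        return                                        # prune this branch
    reserve = LO * (len(W) - h - 1)                   # keep room for later strata
    for n in range(min(HI, remaining_budget - reserve), LO - 1, -1):
        search(partial + [n], remaining_budget - n)

search([], BUDGET)
print("optimal allocation:", best["alloc"], "variance:", round(best["val"], 4))
```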
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
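The standard-curve arithmetic is simple to state in code: fit Ct against log10(copies) for the standards, then invert the line for unknowns. The Ct values below are fabricated for illustration.

```python
# Absolute quantification from a qPCR standard curve.
import numpy as np

std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct = np.array([14.8, 18.2, 21.5, 24.9, 28.3])     # measured Ct of standards

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0               # ~1.0 means 100% efficient

def quantify(ct):
    return 10 ** ((ct - intercept) / slope)           # copies in an unknown sample

print(f"slope={slope:.2f}, amplification efficiency={efficiency:.1%}")
print(f"unknown at Ct 22.7 ~ {quantify(22.7):.2e} copies")
```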
Li, Sheng; Yao, Xinhua; Fu, Jianzhong
2014-01-01
Thermoelectric energy harvesting is emerging as a promising alternative energy source to drive wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the problem of low and fluctuant ambient temperature differences in spindle units limits the application of thermoelectric generation to drive a wireless sensor. This study is devoted to presenting a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different speeds of the spindle. The charging/discharging time of capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform is set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time will increase the stability of the wireless sensor. The experiments also prove that utilizing the optimal time can make the power management system work more effectively compared with other systems using the same sample cycle. PMID:25033189
Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian
2016-03-01
A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to clearly complete this task in an efficient and reproducible way. In this work we proposed a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated, and an optimized separation condition for liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions of extraction using methanol and separation over 30 min between 5 and 100% water/methanol (1:1 v/v) with 0.1% of acetic acid, 20 μL sample volume, 3.0 mL/min flow rate and 25°C column temperature led to 107 chromatographic peaks. After the optimization strategy using i-propanol/chloroform (1:1 v/v) for extraction, linear gradient elution over 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% of acetic acid), 30 μL sample volume, 2.0 mL/min flow rate, and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more than with the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhou, Guisheng; Wang, Mengyue; Li, Yang; Peng, Ying; Li, Xiaobo
2015-08-01
In the present study, a new strategy based on chemical analysis and chemometrics methods was proposed for the comprehensive analysis and profiling of underivatized free amino acids (FAAs) and small peptides among various Luo-Han-Guo (LHG) samples. Firstly, the ultrasound-assisted extraction (UAE) parameters were optimized using Plackett-Burman (PB) screening and Box-Behnken designs (BBD), and the following optimal UAE conditions were obtained: ultrasound power of 280 W, extraction time of 43 min, and liquid-to-solid ratio of 302 mL/g. Secondly, a rapid and sensitive analytical method was developed for the simultaneous quantification of 24 FAAs and 3 active small peptides in LHG at trace levels using hydrophilic interaction ultra-high performance liquid chromatography coupled with triple-quadrupole linear ion-trap tandem mass spectrometry (HILIC-UHPLC-QTRAP®/MS2). The analytical method was validated by matrix effects, linearity, LODs, LOQs, precision, repeatability, stability, and recovery. Thirdly, the proposed optimal UAE conditions and analytical methods were applied to the measurement of LHG samples. It was shown that LHG was rich in essential amino acids, which are beneficial nutrients for human health. Finally, based on the contents of the 27 analytes, the chemometrics methods of unsupervised principal component analysis (PCA) and supervised counter-propagation artificial neural network (CP-ANN) were applied to differentiate and classify the 40 batches of LHG samples from different cultivated forms, regions, and varieties. As a result, these samples were mainly clustered into three clusters, which illustrated the cultivation disparity among the samples. In summary, the presented strategy has potential for the investigation of edible plants and agricultural products containing FAAs and small peptides.
Slaughter, Susan E; Bampton, Erin; Erin, Daniel F; Ickert, Carla; Jones, C Allyson; Estabrooks, Carole A
2017-06-01
Innovative approaches are required to facilitate the adoption and sustainability of evidence-based care practices. We propose a novel implementation strategy, a peer reminder role, which involves offering a brief formal reminder to peers during structured unit meetings. This study aims to (a) identify healthcare aide (HCA) perceptions of a peer reminder role for HCAs, and (b) develop a conceptual framework for the role based on these perceptions. In 2013, a qualitative focus group study was conducted in five purposively sampled residential care facilities in western Canada. A convenience sample of 24 HCAs agreed to participate in five focus groups. Concurrent with data collection, two researchers coded the transcripts and identified themes by consensus. They jointly determined when saturation was achieved and took steps to optimize the trustworthiness of the findings. Five HCAs from the original focus groups commented on the resulting conceptual framework. HCAs were cautious about accepting a role that might alienate them from their co-workers. They emphasized feeling comfortable with the peer reminder role and identified circumstances that would optimize their comfort including: effective implementation strategies, perceptions of the role, role credibility and a supportive context. These intersecting themes formed a peer reminder conceptual framework. We identified HCAs' perspectives of a new peer reminder role designed specifically for them. Based on their perceptions, a conceptual framework was developed to guide the implementation of a peer reminder role for HCAs. This role may be a strategic implementation strategy to optimize the sustainability of new practices in residential care settings, and the related framework could offer guidance on how to implement this role. © 2017 Sigma Theta Tau International.
Dong, Min; McGann, Patrick T; Mizuno, Tomoyuki; Ware, Russell E; Vinks, Alexander A
2016-04-01
Hydroxyurea has emerged as the primary disease-modifying therapy for patients with sickle cell anaemia (SCA). The laboratory and clinical benefits of hydroxyurea are optimal at maximum tolerated dose (MTD), but the current empirical dose escalation process often takes up to 12 months. The purpose of this study was to develop a pharmacokinetic-guided dosing strategy to reduce the time required to reach hydroxyurea MTD in children with SCA. Pharmacokinetic (PK) data from the HUSTLE trial (NCT00305175) were used to develop a population PK model using non-linear mixed effects modelling (NONMEM 7.2). A D-optimal sampling strategy was developed to estimate individual PK and hydroxyurea exposure (area under the concentration-time curve (AUC)). The initial AUC target was derived from HUSTLE clinical data and defined as the mean AUC at MTD. PK profiles were best described by a one-compartment model with Michaelis-Menten elimination and transit absorption. Body weight and cystatin C were identified as significant predictors of hydroxyurea clearance. The following clinically feasible sampling times are included in a new prospective protocol: pre-dose (baseline), 15-20 min, 50-60 min and 3 h after an initial 20 mg kg(-1) oral dose. The mean target AUC(0,∞) for initial dose titration was 115 mg l(-1) h. We developed a PK model-based individualized dosing strategy for the prospective Therapeutic Response Evaluation and Adherence Trial (TREAT, ClinicalTrials.gov NCT02286154). This approach has the potential to optimize the dose titration of hydroxyurea therapy for children with SCA, such that the clinical benefits at MTD are achieved more quickly. © 2015 The British Pharmacological Society.
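A minimal Python sketch of the kind of model described above - one compartment with Michaelis-Menten elimination fed by a transit absorption step - evaluated at the protocol's sparse sampling times. This is not the paper's NONMEM model; all parameter values (VMAX, KM, V, KTR, the dose) are hypothetical placeholders.

```python
# Illustrative PK sketch: one-compartment model with Michaelis-Menten
# elimination and a single transit absorption compartment. All parameter
# values are hypothetical placeholders, not estimates from HUSTLE.
import numpy as np
from scipy.integrate import solve_ivp

VMAX, KM = 25.0, 15.0    # mg/h, mg/L  -- assumed elimination parameters
V, KTR = 20.0, 3.0       # L, 1/h      -- assumed volume and transit rate
DOSE = 20.0 * 25.0       # 20 mg/kg for a hypothetical 25 kg child

def rhs(t, y):
    a_gut, a_central = y
    c = a_central / V
    dgut = -KTR * a_gut                        # transit absorption
    dcen = KTR * a_gut - VMAX * c / (KM + c)   # Michaelis-Menten elimination
    return [dgut, dcen]

# Sparse sampling times from the protocol: pre-dose, ~15-20 min, ~50-60 min, 3 h
t_sample = np.array([0.0, 0.3, 0.9, 3.0])
sol = solve_ivp(rhs, (0, 24), [DOSE, 0.0], dense_output=True, rtol=1e-8)
conc = sol.sol(t_sample)[1] / V

# Trapezoidal AUC over the sampled window; a full AUC(0,inf) would be
# extrapolated from the fitted individual parameters instead.
auc = np.trapz(conc, t_sample)
print(f"sampled concentrations (mg/L): {np.round(conc, 2)}, AUC 0-3h: {auc:.1f} mg/L*h")
```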
A Bayesian Approach to Interactive Retrieval
ERIC Educational Resources Information Center
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Sample treatments prior to capillary electrophoresis-mass spectrometry.
Hernández-Borges, Javier; Borges-Miquel, Teresa M; Rodríguez-Delgado, Miguel Angel; Cifuentes, Alejandro
2007-06-15
Sample preparation is a crucial part of chemical analysis and in most cases can become the bottleneck of the whole analytical process. Its adequacy is a key factor in determining the success of the analysis, and, therefore, careful selection and optimization of the parameters controlling sample treatment should be carried out. This work reviews the different strategies that have been developed for sample preparation prior to capillary electrophoresis-mass spectrometry (CE-MS). Specifically, it presents an exhaustive and critical review of the different sample treatments used together with on-line CE-MS, covering works published from January 2000 to July 2006.
Tan, Zhijing; Yin, Haidi; Nie, Song; Lin, Zhenxin; Zhu, Jianhui; Ruffin, Mack T; Anderson, Michelle A; Simeone, Diane M; Lubman, David M
2015-04-03
Glycosylation has significant effects on protein function and cell metastasis, which are important in cancer progression. It is of great interest to identify site-specific glycosylation in search of potential cancer biomarkers. However, the abundance of glycopeptides is low compared to that of nonglycopeptides after trypsin digestion of serum samples, and the mass spectrometric signals of glycopeptides are often masked by coeluting nonglycopeptides due to low ionization efficiency. Selective enrichment of glycopeptides from complex serum samples is essential for mass spectrometry (MS)-based analysis. Herein, a strategy has been optimized using LCA enrichment to improve the identification of core-fucosylation (CF) sites in serum of pancreatic cancer patients. The optimized strategy was then applied to analyze CF glycopeptide sites in 13 sets of serum samples from pancreatic cancer, chronic pancreatitis, healthy controls, and a standard reference. In total, 630 core-fucosylation sites were identified from 322 CF proteins in pancreatic cancer patient serum using an Orbitrap Elite mass spectrometer. Further data analysis revealed that 8 CF peptides exhibited a significant difference between pancreatic cancer and other controls, which may be potential diagnostic biomarkers for pancreatic cancer.
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning, but effects such as insufficient resolution, poor correction and terrain influence reduce extraction accuracy. Data collected by multiple sensors, such as light detection and ranging (LiDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A better setting strategy for the SSAE network structure is given, along with an approach for setting the number and proportion of training samples to train the SSAE more effectively. The optical data and DSM were combined as input to the optimized SSAE; after training with the optimized samples, the resulting network structure extracts buildings with high accuracy and good robustness.
NASA Astrophysics Data System (ADS)
Sturdevant-Rees, P. L.; Long, S. C.; Barten, P. K.
2002-05-01
A forty-month investigation to collect microbial and water-quality measurements during storm events under a variety of meteorological and land-use conditions is in its initial stages. Intense sampling during storm event periods will be used to optimize sampling and analysis strategies for accurate determination of constituent loads. Of particular interest is identification of meteorological and hydrologic conditions under which sampling and analysis of surface waters for traditional microbial organisms, emerging microbial organisms and non-bacterial pathogens are critical to ensure the integrity of surface-water drinking supplies. This work is particular to the Quabbin-Ware-Wachusett reservoir system in Massachusetts, which provides unfiltered drinking water to 2.5 million people in Boston and surrounding communities. Sampling and analysis strategies will be optimized in terms of number of samples over the hydrograph, timing of sample collection (including sample initiation), constituents measured, volumes analyzed, and monetary and personnel costs. Initial water-quality analyses include pH, temperature, turbidity, conductivity, total suspended solids, total phosphorus, total Kjeldahl-nitrogen, ammonia nitrogen, and total and fecal coliforms. Giardia cysts and Cryptosporidium oocysts will also be measured at all sample sites. Sorbitol-fermenting Bifidobacteria, Rhodococcus coprophilus, Clostridium perfringens spores, and Somatic and F-specific coliphages are measured at select sites as potential alternative source-specific indicator organisms. It is anticipated that the final database will consist of transport data for the above parameters during twenty-four distinct storm-events in addition to monthly baseline data. Results and analyses for the first monitored storm-event will be presented.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2013-01-01
Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for their better utilisation. The aim was to develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially a Plackett-Burman design matrix was applied to identify the most significant extraction variables amongst microwave power, irradiation time, particle size, solvent-to-sample ratio, varying solvent strength and pre-leaching time. Among the six variables tested, microwave power, irradiation time and solvent-to-sample ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were microwave power of 65.67% of 700 W, extraction time of 4.27 min and solvent-to-sample ratio of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM-predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to fit the experimental data well. MAE was found to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption, when compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
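The RSM step common to this and similar studies reduces to fitting a second-order polynomial to designed experiments and locating its optimum within the design region. The Python sketch below does this for synthetic data; the three coded factors, the randomly generated stand-in for the design matrix, and all coefficients are illustrative, not values from the study.

```python
# Sketch of the response-surface step: fit a Box-Behnken-style second-order
# polynomial and locate the optimum inside the coded design region.
# The design points and response surface are synthetic placeholders.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 3))   # coded power, time, solvent ratio
y = (18 - 2*(X[:, 0]-0.3)**2 - (X[:, 1]-0.1)**2
     - 1.5*(X[:, 2]+0.2)**2 + 0.1*rng.standard_normal(15))  # synthetic yield

def design_matrix(X):
    # intercept, linear, two-way interaction, and quadratic terms
    cols = ([np.ones(len(X))] + [X[:, i] for i in range(3)]
            + [X[:, i]*X[:, j] for i, j in combinations(range(3), 2)]
            + [X[:, i]**2 for i in range(3)])
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def pred(x):
    return float(design_matrix(x.reshape(1, -1)) @ beta)

# maximize predicted yield while staying inside the coded region [-1, 1]^3
opt = minimize(lambda x: -pred(x), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("coded optimum:", np.round(opt.x, 2), "predicted yield:", round(-opt.fun, 2))
```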
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den
Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
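As a sketch of envelope matched sampling, the Python snippet below draws a nonuniform schedule from the full Nyquist grid with selection probability proportional to the decaying signal envelope exp(-t/T2); the spectral width, grid size, sample budget, and T2 are illustrative values, not parameters from the study.

```python
# Envelope matched (exponentially weighted random) NUS schedule: sample the
# Nyquist grid with density proportional to the signal envelope exp(-t/T2),
# instead of sampling every grid point uniformly.
import numpy as np

rng = np.random.default_rng(1)
SW = 5000.0                                # spectral width (Hz) -> Nyquist dwell
dwell = 1.0 / SW
n_grid, n_samples, T2 = 1024, 128, 0.05    # full grid, NUS budget, seconds

t_grid = np.arange(n_grid) * dwell
weights = np.exp(-t_grid / T2)             # decaying signal envelope
weights /= weights.sum()

# choose 128 of 1024 grid points without replacement, matched to the envelope
schedule = np.sort(rng.choice(n_grid, size=n_samples, replace=False, p=weights))
print("first sampled grid indices:", schedule[:10])
```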
Optimization of Planet Finder Observing Strategy
NASA Astrophysics Data System (ADS)
Sinukoff, E.
2014-03-01
We evaluate radial velocity observing strategies to be considered for future planet-hunting surveys with the Automated Planet Finder, a new 2.4-m telescope at Lick Observatory. Observing strategies can be optimized to mitigate stellar noise, which can mask and imitate the weak Doppler signals of low-mass planets. We estimate and compare the sensitivities of 5 different observing strategies to planets around G2-M2 dwarfs, constructing RV noise models for each stellar spectral type that account for acoustic, granulation, and magnetic activity modes. The strategies differ in exposure time, nightly and monthly cadence, and number of years. Synthetic RV time-series are produced by injecting a planet signal onto the stellar noise, sampled according to each observing strategy. For each star and each observing strategy, thousands of planet injection-recovery trials are conducted to determine the detection efficiency as a function of orbital period, minimum mass, and eccentricity. We find that 4-year observing strategies of 10 nights per month are sensitive to planets ~25-40% lower in mass than the corresponding 1-year strategies of 30 nights per month. Three 5-minute exposures spaced evenly throughout each night provide a 10% gain in sensitivity over the corresponding single 15-minute exposure strategies. All strategies are sensitive to planets of lowest mass around the modeled K7 dwarf. This study indicates that APF surveys adopting the 4-year strategies should detect Earth-mass planets on < 10-day orbits around quiet late-K dwarfs as well as > 1.6 Earth-mass planets in their habitable zones.
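A single injection-recovery trial of the kind described can be sketched in a few lines: inject a sinusoidal Doppler signal into noise sampled on an observing schedule and ask whether a periodogram peak recovers the injected period. The sketch below uses plain white noise and an assumed 1% period-match criterion; the study's correlated activity and granulation noise models would replace the noise term.

```python
# One injection-recovery trial: inject a planet signal into noisy RVs sampled
# on a ~10-nights-per-month, 4-year schedule, then test period recovery with
# a Lomb-Scargle periodogram. White noise stands in for the stellar noise model.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
nights = np.arange(0, 4 * 365, 1.0)
t = np.sort(rng.choice(nights, size=480, replace=False))   # observing epochs (d)
P, K = 9.3, 0.5                      # injected period (d) and semi-amplitude (m/s)
rv = K * np.sin(2 * np.pi * t / P) + 1.0 * rng.standard_normal(t.size)

periods = np.linspace(2, 50, 5000)
power = lombscargle(t, rv - rv.mean(), 2 * np.pi / periods, normalize=True)
best = periods[np.argmax(power)]
recovered = abs(best - P) / P < 0.01  # recovery criterion: within 1% of truth
print(f"best period {best:.2f} d, recovered: {recovered}")
```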
Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas
2017-03-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been made in recent years. Nevertheless, sample preparation is also of major importance, and the optimal sample preparation strategy has still not been defined. While conventional histology has a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkali metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract: Sample preparation strategies for biological tissues are compared with regard to elemental distributions and average trace element concentrations.
Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara
2017-03-01
Furan, a possibly carcinogenic compound to humans, and furfural, a naturally occurring volatile contributing to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by closely related reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies while preserving the sensory properties of food. We have developed a unique, reliable and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS trap extraction has been optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria and multiple response optimization was performed based on a desirability function, yielding the following conditions: thermostatting temperature, 65°C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample amount ratio (dry basis), 16; and total amount (water + sample, dry basis), 10 g. The performance of the optimized method was also assessed: repeatability (RSD: ≤3.3% for furan and ≤2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng g⁻¹ dry basis for furan and 10.2 ng g⁻¹ dry basis for furfural) and LOQ (0.99 ng g⁻¹ dry basis for furan and 41.1 ng g⁻¹ dry basis for furfural). A matrix effect was observed mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and levels of analytes. Copyright © 2016. Published by Elsevier B.V.
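The desirability-function step can be sketched as follows: each response is mapped onto [0, 1] with a Derringer-Suich-type transform and the individual desirabilities are combined by geometric mean, so the candidate setting maximizing the overall D is selected. The response values and target ranges below are illustrative, not the study's.

```python
# Multi-response optimization with a Derringer-type desirability function:
# map each response to [0, 1], combine by geometric mean, pick the setting
# with the largest overall desirability D. Values are placeholders.
import numpy as np

def d_maximize(y, low, high, s=1.0):
    """Desirability for a response to be maximized (Derringer-Suich form)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def overall_desirability(responses, bounds):
    ds = [d_maximize(y, lo, hi) for y, (lo, hi) in zip(responses, bounds)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# e.g. peak areas for furan and furfural at one candidate HS-trap setting
furan_area, furfural_area = 8.2e4, 3.1e5
D = overall_desirability([furan_area, furfural_area],
                         [(2e4, 1e5), (1e5, 4e5)])
print(f"overall desirability D = {D:.2f}")  # compare D across candidate settings
```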
Design and implementation of real-time wireless projection system based on ARM embedded system
NASA Astrophysics Data System (ADS)
Long, Zhaohua; Tang, Hao; Huang, Junhua
2018-04-01
Addressing the shortcomings of existing real-time screen-sharing systems, a real-time wireless projection system is proposed in this paper. Based on the proposed system, a weight-based frame-deletion strategy combining the sampling period and data variation is proposed. Implementing the system on the hardware platform shows that it achieves good results. The weight-based strategy can improve service quality, reduce delay and optimize the real-time customer service system [1].
Remmelink, M; Sokolow, Y; Leduc, D
2015-04-01
Histopathology is key to the diagnosis and staging of lung cancer. This analysis requires tissue sampling from primary and/or metastatic lesions. The choice of sampling technique is intended to optimize diagnostic yield while avoiding unnecessarily invasive procedures. Recent developments in targeted therapy require increasingly precise histological and molecular characterization of the tumor. Therefore, pathologists must be economical with tissue samples to ensure that they have the opportunity to perform all the analyses required. More than ever, good communication between clinician, endoscopist or surgeon, and pathologist is essential. It ensures that all participants in the process of lung cancer diagnosis collaborate so that the appropriate number and type of biopsies are performed and the tissue samples are handled appropriately. This will allow all the necessary analyses to be performed, leading to a more precise characterization of the tumor and thus the optimal treatment for patients with lung cancer. Copyright © 2015 SPLF. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.
2017-10-01
A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy combining feature extraction with a customized version of output space filling (OSF) adaptive sampling is adopted to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
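A hedged sketch of the offline/online split using scikit-learn: PLS compresses the raw ECT signal into a few latent features, and an SVR trained on those features returns the groove parameter in quasi real time. The synthetic signals and the single groove-depth output are stand-ins for the paper's simulated training set and parameterization.

```python
# Offline: PLS feature extraction + SVR training on simulated ECT signals.
# Online: project a measured signal through PLS and predict with SVR.
# Training data here are synthetic stand-ins for simulated ECT responses.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(3)
depth = rng.uniform(0.1, 2.0, size=200)             # groove depth (mm), target
signals = (np.outer(depth, np.linspace(1, 0, 64))   # fake 64-point ECT signals
           + 0.05 * rng.standard_normal((200, 64)))

pls = PLSRegression(n_components=5).fit(signals, depth)   # offline: features
svr = SVR(C=10.0, epsilon=0.01).fit(pls.transform(signals), depth)

# online phase: quasi real-time prediction for new measurements
new_signals = signals[:3]
print("predicted depths:", np.round(svr.predict(pls.transform(new_signals)), 2))
```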
A strategy to optimize the thermoelectric performance in a spark plasma sintering process
Chiu, Wan-Ting; Chen, Cheng-Lung; Chen, Yang-Yuan
2016-01-01
Spark plasma sintering (SPS) is currently widely applied to existing alloys as a means of further enhancing the alloys' figure of merit. However, the determination of the optimal sintering condition is challenging in the SPS process. This report demonstrates a systematic way to independently optimize the Seebeck coefficient S and the ratio of electrical to thermal conductivity (σ/κ), and thus achieve the maximum figure of merit zT = S²(σ/κ)T. Sb2-xInxTe3 (x = 0-0.2) alloys were chosen as examples to validate the method. Although high sintering temperature and pressure are helpful in enhancing the compactness and electrical conductivity of pressed samples, the resulting deterioration of the Seebeck coefficient and increase in thermal conductivity eventually offset the benefit. We found that the optimal sintering temperature coincides with the temperature at which the maximum Seebeck coefficient begins to degrade, whereas the optimal sintering pressure coincides with the pressure at which the σ/κ ratio reaches a maximum. Based on this principle, the optimized sintering conditions were determined, and the zT of Sb1.9In0.1Te3 is raised to 0.92 at 600 K, an approximately 84% enhancement. This work develops a facile strategy for selecting the optimal SPS sintering condition to further enhance the zT of bulk specimens. PMID:26975209
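For concreteness, the figure of merit quoted above can be evaluated directly; the property values below are illustrative round numbers, not measurements from the Sb2-xInxTe3 samples.

```python
# Worked example of the thermoelectric figure of merit zT = S^2 (sigma/kappa) T.
# Property values are illustrative placeholders.
S = 200e-6      # Seebeck coefficient, V/K
sigma = 8.0e4   # electrical conductivity, S/m
kappa = 1.2     # thermal conductivity, W/(m K)
T = 600.0       # temperature, K

zT = S**2 * (sigma / kappa) * T
print(f"zT = {zT:.2f}")   # ~1.6 with these placeholder values
```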
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
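The Cramér-Rao argument can be reproduced numerically: for white Gaussian noise the Fisher information for J is the sum of squared sensitivities (ds/dJ)² over the sampling times, so schedules that concentrate points where |ds/dJ| is large tighten the bound. The sketch below compares a uniform schedule with one that puts every sample at the first node t = 1/J, treating J as the only unknown; the amplitude, decay, and noise values are illustrative.

```python
# Cramér-Rao lower bound for J in an anti-phase modulation
# s(t) = A sin(pi J t) exp(-t/T2) with white Gaussian noise.
# Single-parameter bound: A and T2 are treated as known; a joint bound
# would use the full Fisher information matrix instead.
import numpy as np

A, J, T2, sigma = 1.0, 10.0, 0.3, 0.1   # a.u., Hz, s, noise s.d. (assumed)

def crlb_J(t):
    dsdJ = A * np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)
    fisher = np.sum(dsdJ**2) / sigma**2
    return 1.0 / np.sqrt(fisher)        # lower bound on the s.d. of J-hat

t_linear = np.linspace(0, 0.2, 32)      # uniform (linear) schedule
t_node = np.full(32, 1.0 / J)           # all samples at the first node
print(f"CRLB uniform: {crlb_J(t_linear):.4f} Hz, at t=1/J: {crlb_J(t_node):.4f} Hz")
```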
Motion Correction in PROPELLER and Turboprop-MRI
Tamhane, Ashish A.; Arfanakis, Konstantinos
2009-01-01
PROPELLER and Turboprop-MRI are characterized by greatly reduced sensitivity to motion compared to their predecessors, fast spin-echo and gradient and spin-echo, respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how the various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop-MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and to determine acquisition strategies for optimal motion correction. It was demonstrated that blades with more lines allow more accurate estimation of motion than blades with fewer lines, and that Turboprop-MRI is less sensitive to motion than PROPELLER. Furthermore, the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop-MRI. PMID:19365858
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is also reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
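The progressive sampling strategy sketched above - start coarse, optimize, double the angular resolution, warm-start from the previous round - can be outlined as follows; optimize_fluence() is a hypothetical stand-in for the fluence-map optimization and leaf-sequencing step.

```python
# Outline of the progressive beam-sampling loop: start from a few equally
# spaced beams and repeatedly double the angular resolution, warm-starting
# each round from the previous fluence maps.
def optimize_fluence(angles, warm_start=None):
    # hypothetical placeholder: would solve the fluence-map optimization and
    # leaf sequencing for these beams, initialized from warm_start if given
    return {a: 1.0 for a in angles}

def progressive_vmat(total_beams=32, initial=4):
    n, angles, fluence = initial, [], None
    while len(angles) < total_beams:
        angles = [i * 360.0 / n for i in range(n)]   # equally spaced beams
        fluence = optimize_fluence(angles, warm_start=fluence)
        n *= 2                                       # double the resolution
    return angles, fluence

angles, fluence = progressive_vmat()
print(len(angles), "beams at final resolution")
```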
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets, on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the structure of the search space, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well-modeled. Benchmarking new algorithms is compulsory, given the lack of prior proof of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on future machine learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is investigated.
Bergmann, Sarah; Keitel-Korndörfer, Anja; Herfurth-Majstorovic, Katharina; Wendt, Verena; Klein, Annette M; von Klitzing, Kai; Grube, Matthias
2017-02-01
Recruitment of participants with obesity is a real challenge. To reduce time and costs in similar projects, we investigated the various recruiting strategies used in a longitudinal family study with respect to their enrolment yield and cost effectiveness. The results may help other research groups optimize their recruitment strategies. We applied different recruitment strategies to acquire families with children aged 6 to 47 months and at least one parent with obesity (risk group) or two parents of normal weight (control group) for a longitudinal non-interventional study. Based on four main strategies - via media, kindergartens, health professionals and the community - we examined 15 different subcategories of strategies. Based on enrolment yield and relative costs (e.g., material expenses, staff time), we analyzed the effectiveness of each recruitment strategy. Following the different recruitment approaches, 685 families contacted us; 26% (n = 178) of these met the inclusion criteria. Of the four main strategies, the community-focused strategy was the most successful (accounting for 36.5% of the sample), followed by contacts with kindergartens (28.1% of the sample). Of the subcategories, two strategies were outstanding: posters (community-focused strategies), and recruitment via kindergartens using phone contacts rather than emailing. Only a small number of participants were recruited via announcements in newspapers (a lower-cost strategy) or via advertisements on public transport and face-to-face recruitment at various places (higher-cost strategies). The results revealed that only a combination of different active and passive methods and approaches led to a sufficient sample size. In this study, recruitment via posters and contacting kindergartens by phone produced the highest numbers of participants (high enrolment yield) at moderate costs.
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S
2013-07-01
Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Systematical Optimization of Reverse-phase Chromatography for Shotgun Proteomics
Xu, Ping; Duong, Duc M.; Peng, Junmin
2009-01-01
We report the optimization of a common LC/MS/MS platform to maximize the number of proteins identified from a complex biological sample. The platform uses digested yeast lysate on a 75 μm internal diameter × 12 cm reverse-phase column that is combined with an LTQ-Orbitrap mass spectrometer. We first generated a yeast peptide mix that was quantified by multiple methods including the strategy of stable isotope labeling with amino acids in cell culture (SILAC). The peptide mix was analyzed on a highly reproducible, automated nanoLC/MS/MS system with systematic adjustment of loading amount, flow rate, and elution gradient range and length. Interestingly, the column was found to be almost saturated by loading ~1 μg of the sample. Whereas the optimal flow rate (~0.2 μl/min) and elution buffer range (13-32% acetonitrile) appeared to be independent of the loading amount, the best gradient length varied according to the amount of sample: 160 min for 1 μg of the peptide mix, but 40 min for 10 ng of the same sample. The effect of these parameters on eluted peptide peak width is evaluated. After full optimization, 1,012 proteins (clustered in 806 groups) with an estimated protein false discovery rate of ~3% were identified in 1 μg of yeast lysate in a single 160-min LC/MS/MS run. PMID:19566079
NASA Astrophysics Data System (ADS)
Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei
2017-04-01
The stock price fluctuation is studied in this paper from an intrinsic time perspective. Events, directional changes (DC) and overshoots, are taken as the time scale of the price series. The statistical properties of this directional change law and the corresponding parameter estimation are tested on the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
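Directional-change event detection is compact enough to sketch directly: a DC event is confirmed when the price reverses by a threshold theta from the running extreme, and the move beyond the confirmation point is the overshoot. The threshold value and the simulated price series below are illustrative, not the paper's calibration.

```python
# Directional-change (DC) event detection in intrinsic time: confirm a DC
# when price reverses by theta from the last extreme. Threshold and the
# simulated price path are illustrative placeholders.
import numpy as np

def dc_events(prices, theta=0.02):         # assumed 2% reversal threshold
    events, mode = [], "up"                # start by tracking an uptrend
    ext = prices[0]                        # running extreme of current trend
    for i, p in enumerate(prices):
        if mode == "up":
            ext = max(ext, p)
            if p <= ext * (1 - theta):     # downturn DC confirmed
                events.append(("down", i))
                mode, ext = "down", p
        else:
            ext = min(ext, p)
            if p >= ext * (1 + theta):     # upturn DC confirmed
                events.append(("up", i))
                mode, ext = "up", p
    return events

rng = np.random.default_rng(4)
prices = 100 * np.exp(np.cumsum(0.005 * rng.standard_normal(2000)))
print("number of DC events:", len(dc_events(prices)))
```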
Case-based fracture image retrieval.
Zhou, Xin; Stern, Richard; Müller, Henning
2012-05-01
Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered as relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered as important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. A first comparison on the selection of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters in clustering can also improve the retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
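One plausible reading of the per-case variance strategy, sketched below: pool the descriptors of each grid block, rank blocks by descriptor variance, and keep the most variable half as the salient regions used for matching. The random arrays stand in for real SIFT descriptors, and the exact variance definition is an assumption on our part.

```python
# Variance-based salient region selection on a dense 40x40 grid: rank the
# 1,600 blocks by the variance of their descriptors and keep the top half
# (~800 regions). Random data stand in for SIFT descriptors of one case.
import numpy as np

rng = np.random.default_rng(5)
n_blocks, d = 1600, 128                       # 40x40 grid, SIFT dimensionality
descriptors = rng.standard_normal((n_blocks, d))

block_var = descriptors.var(axis=1)           # saliency score per block
keep = np.argsort(block_var)[-n_blocks // 2:] # keep the ~800 most variable blocks
salient = descriptors[keep]
print(salient.shape)                          # (800, 128) regions used for matching
```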
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
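A minimal sketch of the cascade idea: run several optimizers in a fixed sequence, warm-starting each from the previous result after a small pseudorandom perturbation of the design variables. Here scipy's Rosenbrock function stands in for a difficult design problem; the method sequence and jitter size are illustrative choices, not the paper's.

```python
# Cascade optimization: chain several optimizers, perturbing the design
# variables pseudorandomly between stages. Rosenbrock stands in for a
# difficult aircraft/engine design problem.
import numpy as np
from scipy.optimize import minimize, rosen

def cascade(x0, methods=("Nelder-Mead", "Powell", "BFGS"), jitter=0.05, seed=6):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for method in methods:
        x = minimize(rosen, x, method=method).x
        x = x * (1 + jitter * rng.standard_normal(x.size))  # perturb between stages
    return minimize(rosen, x, method="BFGS").x               # final clean solve

print(np.round(cascade([-1.2, 1.0, 0.8, 2.0]), 3))           # -> near [1, 1, 1, 1]
```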
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually - an important and growing pathway for the introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing the cost of accepted defective units. We derive a formula for the expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
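A simplified, binomial version of the expected-slippage calculation, plus a greedy budget allocation across heterogeneous lots, is sketched below. It assumes a lot is rejected if any sampled unit is found infested and that detections are independent; the lot data are invented, and the paper's exact formula and optimization may differ.

```python
# Simplified expected slippage: a lot is accepted only if no sampled unit is
# detected as infested; accepted lots pass their remaining infested units.
def expected_slippage(N, n, p, d):
    p_accept = (1 - p * d) ** n          # no detection among n sampled units
    return p_accept * (N - n) * p        # infested units among accepted remainder

def allocate(lots, budget):
    """Greedy: give each next sample to the lot with the largest marginal
    reduction in expected slippage."""
    alloc = {name: 0 for name in lots}
    def marginal(name):
        N, p, d = lots[name]
        n = alloc[name]
        return expected_slippage(N, n, p, d) - expected_slippage(N, n + 1, p, d)
    for _ in range(budget):
        best = max(lots, key=marginal)
        alloc[best] += 1
    return alloc

# three invented lots: (size, infestation rate, detection rate)
lots = {"A": (5000, 0.02, 0.8), "B": (800, 0.10, 0.8), "C": (200, 0.01, 0.8)}
print(allocate(lots, budget=60))   # large, high-risk lots draw most samples
```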
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domain and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee that the desired accuracy is achieved. The numerical results for several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by National Natural Science Foundation of China grants No. 41030746 and 41172206.
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for the environmental control system (ECS) of a manned spacecraft to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS under an insufficient power supply. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Adjustable key variables are chosen as the optimization variables, which ultimately represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization is conducted at four different carbon dioxide partial pressure control levels. The results show that the Pareto-optimal frontiers obtained from this multi-objective optimization represent the trade-off between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations in which the power supply suddenly becomes insufficient.
Strategies influence neural activity for feedback learning across child and adolescent development.
Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J
2014-09-01
Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes the pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches to analyzing developmental data, in which age groups are assumed to be homogeneous in strategy use. In this study, we used latent variable models to investigate whether underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between 8 and 25 years of age, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; in particular, the most optimally performing group showed more activity in DLPFC, SPC and pre-SMA/ACC than the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate of age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.
2014-01-01
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
Single step optimization of manipulator maneuvers with variable structure control
NASA Technical Reports Server (NTRS)
Chen, N.; Dwyer, T. A. W., III
1987-01-01
One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
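A minimal sketch of the spatial simulated annealing loop described above, using the MSSD criterion and the cooling behaviour the abstract mentions (linearly shrinking perturbation distance, exponentially decaying acceptance probability). The grid size, schedules and constants are illustrative assumptions rather than spsann's defaults, and the sketch is in Python rather than R.

```python
import numpy as np

rng = np.random.default_rng(0)

def mssd(sample, grid):
    # mean squared distance from every grid node to its nearest sample point
    d2 = ((grid[:, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

grid = np.array([(x, y) for x in range(20) for y in range(20)], float)
sample = grid[rng.choice(len(grid), 10, replace=False)].copy()
energy = mssd(sample, grid)

n_iter = 2000
for it in range(n_iter):
    max_shift = 10.0 * (1.0 - it / n_iter)   # perturbation distance shrinks linearly
    temp = np.exp(-5.0 * it / n_iter)        # acceptance probability decays exponentially
    cand = sample.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.uniform(-max_shift, max_shift, 2), 0, 19)
    e_new = mssd(cand, grid)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / max(temp, 1e-12)):
        sample, energy = cand, e_new         # accept the perturbed pattern
print(f"final MSSD: {energy:.3f}")
```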
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework based on modular design for optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions through the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the searched space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of the EMOTS, we consider a synthetic groundwater remediation example. The problem formulation consists of two objective functions with continuous decision variables of pumping rates while meeting water quality requirements. In particular, sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, the EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. With both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in the performance metrics of optimality and diversity of nondominated frontiers, with desirable stability and robustness. © 2017, National Ground Water Association.
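The neighborhood move is the piece most easily shown in code. Below is a hedged Python sketch of generating neighbour solutions by Latin hypercube sampling in a bounded box around a seed solution, as EMOTS does around each seed; the bounds, radius and sample count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def lhs_neighbors(seed, radius, lower, upper, n):
    """n LHS points in the box [seed - radius, seed + radius], clipped to bounds."""
    dim = len(seed)
    # one shuffled stratum index per variable per point, jittered within strata
    strata = rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
    u = (strata + rng.random((n, dim))) / n
    box_lo = np.maximum(seed - radius, lower)
    box_hi = np.minimum(seed + radius, upper)
    return box_lo + u * (box_hi - box_lo)

seed = np.array([5.0, 2.0])    # e.g. two pumping rates (hypothetical values)
nbrs = lhs_neighbors(seed, radius=1.0, lower=np.zeros(2), upper=np.full(2, 10.0), n=8)
print(nbrs.shape)              # (8, 2): one LHS stratum per variable per row
```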
Using Genotype Abundance to Improve Phylogenetic Inference
Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A
2018-01-01
Abstract Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671
Perales-Sánchez, Janitzio X K; Reyes-Moreno, Cuauhtémoc; Gómez-Favela, Mario A; Milán-Carrillo, Jorge; Cuevas-Rodríguez, Edith O; Valdez-Ortiz, Angel; Gutiérrez-Dorado, Roberto
2014-09-01
The aim of this study was to optimize the germination conditions of amaranth seeds that would maximize the antioxidant activity (AoxA) and the total phenolic (TPC) and flavonoid (TFC) contents. To optimize the germination bioprocess, response surface methodology was applied over three response variables (AoxA, TPC, TFC). A central composite rotatable experimental design with two factors [germination temperature (GT), 20-45 ºC; germination time (Gt), 14-120 h] at five levels was used; 13 treatments were generated. The amaranth seeds were soaked in distilled water (25 °C/6 h) before germination. The sprouts from each treatment were dried (50 °C/8 h), cooled, and ground to obtain germinated amaranth flours (GAF). The best combination of germination bioprocess variables for producing optimized GAF with the highest AoxA [21.56 mmol trolox equivalent (TE)/100 g sample, dw], TPC [247.63 mg gallic acid equivalent (GAE)/100 g sample, dw], and TFC [81.39 mg catechin equivalent (CAE)/100 g sample, dw] was GT = 30 ºC/Gt = 78 h. The germination bioprocess increased AoxA, TPC, and TFC by 300-470%, 829%, and 213%, respectively. Germination is an effective strategy to increase the TPC and TFC of amaranth seeds, enhancing functionality with improved antioxidant activity.
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Pharmacogenomics in neurology: current state and future steps.
Chan, Andrew; Pirmohamed, Munir; Comabella, Manuel
2011-11-01
In neurology, as in any other clinical specialty, there is a need to develop treatment strategies that allow stratification of therapies to optimize efficacy and minimize toxicity. Pharmacogenomics is one such method for therapy optimization: it aims to elucidate the relationship between human genome sequence variation and differential drug responses. Approaches have focused on candidate genes related to absorption, distribution, metabolism, and elimination (ADME) (pharmacokinetic pathways) and on potential drug targets (pharmacodynamic pathways). To date, however, only a few genetic variants have been incorporated into clinical algorithms. Unfortunately, a large number of studies have produced contradictory results owing to a number of deficiencies, including small sample sizes and inadequate phenotyping and genotyping strategies. Thus, there still exists an urgent need to establish biomarkers that could help select patients with an optimal benefit-to-risk relationship. Here we review recent advances, and limitations, in pharmacogenomics for agents used in neuroimmunology, neurodegenerative diseases, ischemic stroke, epilepsy, and primary headaches. Further work is still required in all of these areas and needs to progress on several fronts, including better standardized phenotyping, appropriate sample sizes through multicenter collaborations, and judicious use of new technological advances such as genome-wide approaches, next-generation sequencing, and systems biology. In time, this is likely to lead to improvements in the benefit-harm balance of neurological therapies, cost efficiency, and identification of new drugs. Copyright © 2011 American Neurological Association.
Mars entry-to-landing trajectory optimization and closed loop guidance
NASA Technical Reports Server (NTRS)
Ilgen, Marc R.; Manning, Raymund A.; Cruz, Manuel I.
1991-01-01
The guidance strategy of the Mars Rover Sample Return mission is presented in detail. Aeromaneuver versus aerobrake trades are examined, and an aerobrake analysis is presented which takes into account targeting, guidance, flight control, trajectory profile, and delivery accuracy. An aeromaneuver analysis is given which includes the entry corridor, maneuver footprint, guidance, preentry phase, constant drag phase, equilibrium glide phase, variable drag phase, influence of trajectory profile on the entry flight loads, parachute deployment conditions and strategies, and landing accuracy. The Mars terminal descent phase is analyzed.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
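As a hedged illustration of the pre-sampling simulation idea, the Python sketch below draws a clumped "census" of per-tree counts (negative binomial, with invented parameters) and simulates random sampling to preset sizes to see how quickly sample means converge on the true mean density; real use would substitute warm-up field counts for the simulated census.

```python
import numpy as np

rng = np.random.default_rng(11)
census = rng.negative_binomial(n=2, p=0.15, size=400)   # clumped per-tree gall counts
true_mean = census.mean()

for n in (10, 25, 40, 80):
    means = [rng.choice(census, n, replace=False).mean() for _ in range(1000)]
    rel_se = np.std(means) / true_mean                  # spread of the estimator
    print(f"n={n:3d}: mean of sample means {np.mean(means):5.2f} "
          f"(true {true_mean:.2f}), relative SE {rel_se:.2%}")
```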
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, including mutation, crossover and selection, similar to a genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, including DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for parameter estimation, providing good accuracy at less computational cost. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm, based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
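For concreteness, here is a minimal Python sketch of DE with the DE/best/1 mutation and binomial crossover that the study found most effective (strategy 1). The objective is a stand-in sphere function; an SP or VES misfit would replace it, and the population size and control parameters F and CR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(x):                 # stand-in objective; replace with an SP/VES misfit
    return (x ** 2).sum()

dim, npop, F, CR = 4, 20, 0.7, 0.9
pop = rng.uniform(-5, 5, (npop, dim))
fit = np.array([misfit(p) for p in pop])

for gen in range(200):
    best = pop[fit.argmin()]
    for i in range(npop):
        r1, r2 = rng.choice([j for j in range(npop) if j != i], 2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])    # DE/best/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True            # guarantee at least one swapped gene
        trial = np.where(cross, mutant, pop[i])    # binomial crossover
        f_trial = misfit(trial)
        if f_trial <= fit[i]:                      # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial
print(f"best misfit: {fit.min():.2e}")
```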
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194
Quantifying Antimicrobial Resistance at Veal Calf Farms
Bosman, Angela B.; Wagenaar, Jaap; Stegeman, Arjan; Vernooij, Hans; Mevius, Dik
2012-01-01
This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From each individual sample and one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for their resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample another 10 selected E. coli isolates were tested for their resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of testing an isolate resistant between both test methods (replica plating vs. broth microdilution) and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed similar odds of E. coli isolates tested resistant compared to broth microdilution, except for ciprofloxacin (OR 0.29, p≤0.05). Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that within each antimicrobial the various compositions of a pooled sample provided consistent estimates for the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which 90 isolates are tested for their susceptibility by replica plating. PMID:22970313
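A hedged sketch of the bootstrap step: resample isolates from one simulated pooled sample of 90 and read off the precision of the prevalence estimate. The true prevalence and replicate count are invented inputs, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
true_prev, n_isolates, n_boot = 0.35, 90, 2000

# one simulated pooled faecal sample: 90 isolates, each resistant or not
isolates = rng.random(n_isolates) < true_prev
boot = [rng.choice(isolates, n_isolates, replace=True).mean() for _ in range(n_boot)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"prevalence {isolates.mean():.2f}, 95% bootstrap interval ({lo:.2f}, {hi:.2f})")
```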
Interrelations of stress, optimism and control in older people's psychological adjustment.
Bretherton, Susan Jane; McLean, Louise Anne
2015-06-01
To investigate the influence of perceived stress, optimism and perceived control of internal states on the psychological adjustment of older adults. The sample consisted of 212 older adults, aged between 58 and 103 (M = 80.42 years, SD = 7.31 years), living primarily in retirement villages in Melbourne, Victoria. Participants completed the Perceived Stress Scale, Life Orientation Test-Revised, Perceived Control of Internal States Scale and the World Health Organisation Quality of Life-Bref. Optimism significantly mediated the relationship between older people's perceived stress and psychological health, and perceived control of internal states mediated the relationships among stress, optimism and psychological health. The variables explained 49% of the variance in older people's psychological adjustment. It is suggested that strategies to improve optimism and perceived control may improve the psychological adjustment of older people struggling to adapt to life's stressors. © 2014 ACOTA.
Field observations and management strategy for hot spring wastewater in Wulai area, Taiwan.
Lin, J Y; Chen, C F; Lei, F R; Hsieh, C D
2010-01-01
Hot springs are important centers for recreation and tourism. However, the pollution that may potentially be caused by hot spring wastewater has rarely been discussed. More than half of Taiwan's hot springs are located in areas where the water quality of water bodies is to be protected, and untreated wastewater could pollute the receiving water bodies. In this study, we investigate hot spring wastewater in the Wulai area, one of Taiwan's famous hot spring resorts. Used water from five hot spring hotels was sampled and ten sampling events were carried out to evaluate the changes in the quality of used water in different seasons, at different periods of the week, and from different types of hotels. The concentrations of different pollutants in hot spring wastewater were found to exhibit wide variations, as follows: COD, 10-250 mg/L; SS, N.D.-93 mg/L; NH(3)-N, 0.01-1.93 mg/L; TP, 0.01-0.45 mg/L; and E. coli, 10-27,500 CFU/100 mL. The quality of hot spring wastewater depends on the operation of public pools, because this affects the frequency of supplementary fresh water and the outflow volume. Two management strategies, namely, onsite treatment systems and individually packaged treatment equipment, are considered, and a multi-objective optimization model is used to determine the optimal strategy.
Comparison of different incubation conditions for microbiological environmental monitoring.
Gordon, Oliver; Berchtold, Manfred; Staerk, Alexandra; Roesti, David
2014-01-01
Environmental monitoring represents an integral part of the microbiological quality control system of a pharmaceutical manufacturing operation. However, guidance documents differ regarding recommendation of a procedure, particularly regarding incubation time, incubation temperature, or nutrient media. Because of these discrepancies, many manufacturers decide on a particular environmental monitoring sample incubation strategy and support this decision with validation data. Such validations are typically laboratory-based in vitro studies, meaning that these are based on comparing incubation conditions and nutrient media through use of cultured microorganisms. An informal survey of the results of these in vitro studies performed at Novartis or European manufacturing sites of different pharmaceutical companies highlighted that no consensus regarding the optimal incubation conditions for microbial recovery existed. To address this question differently, we collected a significant amount of samples directly from air, inanimate surfaces, and personnel in pharmaceutical production and packaging rooms during manufacturing operation (in situ study). Samples were incubated under different conditions suggested in regulatory guidelines, and recovery of total aerobic microorganisms as well as moulds was assessed. We found the highest recovery of total aerobic count from areas with personnel flow using a general microbiological growth medium incubated at 30-35 °C. The highest recovery of moulds was obtained with mycological medium incubated at 20-25 °C. Single-plate strategies (two-temperature incubation or an intermediate incubation temperature of 25-30 °C) also yielded reasonable recovery of total aerobic count and moulds. However, recovery of moulds was found to be highly inefficient at 30-35 °C compared to lower incubation temperatures. This deficiency could not be rectified by subsequent incubation at 20-25 °C. A laboratory-based in vitro study performed in parallel was inconclusive. We consider our results potentially transferable to other pharmaceutical manufacturing sites in moderate climate zones and believe that these should represent a valuable reference for definition of the incubation strategy of microbiological environmental monitoring samples. Microbiological environmental monitoring confirms that pharmaceutical cleanrooms are in an appropriate hygienic condition for manufacturing of drug products. Guidance documents from different health authorities or expert groups differ regarding recommendation of the applied incubation time, incubation temperature, or nutrient media. Therefore, many pharmaceutical manufacturers perform studies that aim to identify the optimal incubation setup for environmental monitoring samples. An informal survey of the results of such studies, which had been performed at Novartis or European manufacturing sites of different pharmaceutical companies, highlighted no consensus regarding the optimal incubation conditions for microbial recovery. All these studies had been conducted in the laboratory using selections of cultured microbial strains. We tried to resolve this disagreement by collecting a significant amount of real environmental monitoring samples directly from the environment in pharmaceutical production and packaging rooms during manufacturing operation. These samples were then incubated under different conditions suggested in the regulatory guidelines.
We believe that the results of our study are more meaningful than laboratory-based experiments because we used environmental samples with microorganisms directly isolated from the manufacturing area. Therefore, we believe that our results should represent a valuable reference for definition of the incubation strategy of microbiological environmental monitoring samples. © PDA, Inc. 2014.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
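As a rough illustration of optimizing a calibration set against a relationship-matrix criterion, the sketch below runs a random-exchange search over candidate sets. The criterion is a crude reliability proxy, not the paper's exact CDmean contrast formula, and the relationship matrix is simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, size = 60, 1.0, 15
G = rng.normal(size=(n, 200))
A = G @ G.T / 200                              # toy additive relationship matrix

def proxy_reliability(S):
    # crude stand-in criterion: mean of a_iS (A_SS + lam*I)^-1 a_Si over all candidates i
    M = np.linalg.inv(A[np.ix_(S, S)] + lam * np.eye(len(S)))
    a = A[:, S]
    return np.einsum('ij,jk,ik->i', a, M, a).mean()

S = list(rng.choice(n, size, replace=False))
base = proxy_reliability(S)
for _ in range(500):                           # random-exchange search
    i, j = rng.choice(S), int(rng.integers(n))
    if j in S:
        continue
    cand = [k for k in S if k != i] + [j]
    val = proxy_reliability(cand)
    if val > base:                             # keep the swap only if the criterion improves
        S, base = cand, val
print(sorted(S), round(base, 4))
```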
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
A novel strategy for isolation and determination of sugars and sugar alcohols from conifers.
Sarvin, B A; Seregin, A P; Shpigun, O A; Rodin, I A; Stavrianidi, A N
2018-06-02
The ultrasound-assisted extraction method for isolation of 17 sugars and sugar alcohols from conifers, with a subsequent hydrophilic interaction liquid chromatography-tandem mass spectrometry method for their determination, is proposed. The optimization of extraction parameters was carried out using a Taguchi L9 (3^4) orthogonal array experimental design for the following parameters: methanol concentration in the extraction solution, extraction time, type of plant sample, and extraction temperature. The optimal ultrasound-assisted extraction conditions were: MeOH concentration, 30% (water, 70%); extraction time, 30 min; type of plant sample, II (ground leaves 2-4 mm long); extraction temperature, 60 °C. Pure water and acetonitrile were used as eluents in gradient elution mode to separate the analytes. Direct determination of multiple sugars and sugar alcohols was carried out using a mass spectrometric detector operated in multiple reaction monitoring mode, providing detection limits in the range between 0.1 and 20 ng/mL and good analytical characteristics of the method without derivatization. The developed approach was validated by a multiple successive extraction method applied to test its performance on a series of 10 samples, i.e. 2 samples per each of 5 genera: Abies, Larix, Picea, Pinus (Pinaceae) and Juniperus (Cupressaceae), widely distributed in the boreal conifer forests of Eurasia. The novel strategy can be used for profiling of sugars and sugar alcohols in a wide range of plant species. Copyright © 2018. Published by Elsevier B.V.
Tamhane, Ashish A; Arfanakis, Konstantinos
2009-07-01
Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Population-based validation of a German version of the Brief Resilience Scale
Wenzel, Mario; Stieglitz, Rolf-Dieter; Kunzler, Angela; Bagusat, Christiana; Helmreich, Isabella; Gerlicher, Anna; Kampa, Miriam; Kubiak, Thomas; Kalisch, Raffael; Lieb, Klaus; Tüscher, Oliver
2018-01-01
Smith and colleagues developed the Brief Resilience Scale (BRS) to assess the individual ability to recover from stress despite significant adversity. This study aimed to validate the German version of the BRS. We used data from a population-based (sample 1: n = 1,481) and a representative (sample 2: n = 1,128) sample of participants from the German general population (age ≥ 18) to assess reliability and validity. Confirmatory factor analyses (CFA) were conducted to compare one- and two-factorial models from previous studies with a method-factor model which specifically accounts for the wording of the items. Reliability was analyzed. Convergent validity was measured by correlating BRS scores with mental health measures, coping, social support, and optimism. Reliability was good (α = .85, ω = .85 for both samples). The method-factor model showed excellent model fit (sample 1: χ2/df = 7.544; RMSEA = .07; CFI = .99; SRMR = .02; sample 2: χ2/df = 1.166; RMSEA = .01; CFI = 1.00; SRMR = .01), which was significantly better than the one-factor model (Δχ2(4) = 172.71, p < .001) or the two-factor model (Δχ2(3) = 31.16, p < .001). The BRS was positively correlated with well-being, social support, optimism, and the coping strategies active coping, positive reframing, acceptance, and humor. It was negatively correlated with somatic symptoms, anxiety and insomnia, social dysfunction, depression, and the coping strategies religion, denial, venting, substance use, and self-blame. To conclude, our results provide evidence for the reliability and validity of the German adaptation of the BRS as well as the unidimensional structure of the scale once method effects are accounted for. PMID:29438435
Ji, Julie L; Holmes, Emily A; Blackwell, Simon E
2017-01-01
Optimism is associated with positive outcomes across many health domains, from cardiovascular disease to depression. However, we know little about cognitive processes underlying optimism in psychopathology. The present study tested whether the ability to vividly imagine positive events in one's future was associated with dispositional optimism in a sample of depressed adults. Cross-sectional and longitudinal analyses were conducted, using baseline (all participants, N=150) and follow-up data (participants in the control condition only, N=63) from a clinical trial (Blackwell et al., 2015). Vividness of positive prospective imagery, assessed on a laboratory-administered task at baseline, was significantly associated with both current optimism levels at baseline and future (seven months later) optimism levels, including when controlling for potential confounds. Even when depressed, those individuals able to envision a brighter future were more optimistic, and regained optimism more quickly over time, than those less able to do so at baseline. Strategies to increase the vividness of positive prospective imagery may aid development of mental health interventions to boost optimism. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Simple automatic strategy for background drift correction in chromatographic data analysis.
Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin
2016-06-03
Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
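A minimal Python sketch of the described routine, under stated assumptions: local minima are collected as baseline knots, knots that sit on peaks are iteratively discarded as outliers against a smoothed reference, and the survivors are linearly interpolated across the chromatogram. The window and threshold values are illustrative, not the paper's.

```python
import numpy as np

def baseline(y, window=5, k=2.0, max_iter=50):
    x = np.arange(len(y))
    is_min = np.r_[False, (y[1:-1] <= y[:-2]) & (y[1:-1] <= y[2:]), False]
    bx, by = x[is_min], y[is_min]          # local minima as candidate baseline knots
    for _ in range(max_iter):
        w = min(window, len(by))
        smooth = np.convolve(by, np.ones(w) / w, mode='same')
        resid = by - smooth
        if resid.std() == 0:
            break
        keep = resid < k * resid.std()     # minima far above the trend sit on peaks
        if keep.all():
            break
        bx, by = bx[keep], by[keep]
    return np.interp(x, bx, by)            # expand baseline to the full chromatogram

# toy chromatogram: drifting background plus two Gaussian peaks and noise
x = np.arange(500)
drift = 0.02 * x + 5 * np.sin(x / 80.0)
peaks = 40 * np.exp(-(x - 150) ** 2 / 40) + 25 * np.exp(-(x - 350) ** 2 / 90)
y = drift + peaks + np.random.default_rng(5).normal(0, 0.3, x.size)
corrected = y - baseline(y)
```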
Yuan, Wenjia; Shen, Weidong; Zhang, Yueguang; Liu, Xu
2014-05-05
A dielectric multilayer beam splitter with differential phase shift on transmission and reflection for a division-of-amplitude photopolarimeter (DOAP) was presented for the first time to our knowledge. The optimal parameters for the beam splitter are Tp = 78.9%, Ts = 21.1% and Δr - Δt = π/2 at 532 nm at an angle of incidence of 45°. A multilayer anti-reflection coating with low phase shift was applied to reduce the backside reflection. Different design strategies that can achieve all optimal targets at the wavelength were tested. Two design methods were presented to optimize the differential phase shift. The samples were prepared by ion beam sputtering (IBS). The experimental results show good agreement with those of the design. The ellipsometric parameters of the samples were measured in reflection as (ψr, Δr) = (26.5°, 135.1°) and (28.2°, 133.5°), as well as in transmission as (ψt, Δt) = (62.5°, 46.1°) and (63.5°, 46°) at 532.6 nm. The normalized determinant of the instrument matrix used to evaluate the performance of the samples is 0.998 and 0.991, respectively, at 532.6 nm.
Sparse and optimal acquisition design for diffusion MRI and beyond
Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth
2012-01-01
Purpose: Diffusion magnetic resonance imaging (MRI) in combination with functional MRI promises a whole new vista for scientists to investigate noninvasively the structural and functional connectivity of the human brain—the human connectome, which had heretofore been out of reach. As with other imaging modalities, diffusion MRI data are inherently noisy and its acquisition time-consuming. Further, a faithful representation of the human connectome that can serve as a predictive model requires a robust and accurate data-analytic pipeline. The focus of this paper is on one of the key segments of this pipeline—in particular, the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. Methods: The authors propose a novel optimality criterion for sparse multiple-shell acquisition and quasi-multiple-shell designs in diffusion MRI and a novel and effective semistochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. The goal of the optimality criteria is threefold: first, to maximize uniformity of the diffusion measurements in each shell, which is equivalent to maximal incoherence in angular measurements; second, to maximize coverage of the diffusion measurements around each radial line to achieve maximal incoherence in radial measurements for multiple-shell acquisition; and finally, to ensure maximum uniformity of diffusion measurement directions in the limiting case when all the shells are coincidental as in the case of a single-shell acquisition. The approach taken in evaluating the stability of various acquisition designs is based on the condition number and the A-optimal measure of the design matrix. Results: Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general—e.g., on the order of 10^232 for a set of 144 diffusion gradient directions, the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to that of 3D radial MRI and of diffusion spectrum imaging (DSI). Conclusions: A novel optimality criterion for sparse multiple-shell acquisition and quasi-multiple-shell designs in diffusion MRI and an effective search strategy for finding the best configuration have been developed. The results are very promising, interesting, and practical for diffusion MRI acquisitions. PMID:22559620
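The two stability measures are easy to state in code. The sketch below computes the condition number of a design matrix and the A-optimal measure trace((X'X)^-1); the matrix itself is a random stand-in, not an actual diffusion-encoding design.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(144, 15))            # stand-in design: 144 measurements, 15 model terms

cond = np.linalg.cond(X)                  # ratio of largest to smallest singular value
a_opt = np.trace(np.linalg.inv(X.T @ X))  # A-optimal measure: total variance of the estimates
print(f"condition number {cond:.2f}, A-optimal measure {a_opt:.4f}")
```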
OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES
An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
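The optimality system itself requires a forward-backward sweep, but the forward model is compact. Below is a toy Euler simulation of a controlled heterogeneous node-based SIRS model with a constant immunization rate standing in for the time-varying optimal control; the network and rate constants are invented.

```python
import numpy as np

rng = np.random.default_rng(12)
N, dt, steps = 30, 0.05, 400
A = (rng.random((N, N)) < 0.1).astype(float)   # random propagation network
np.fill_diagonal(A, 0)
beta, gamma, delta = 0.4, 0.3, 0.1             # infection / cure / immunity-loss rates
u = np.full(N, 0.2)                            # constant per-node immunization control

S, I, R = np.full(N, 0.9), np.full(N, 0.1), np.zeros(N)
for _ in range(steps):                         # explicit Euler integration
    force = beta * S * (A @ I)                 # per-node infection pressure from neighbors
    dS = delta * R - force - u * S
    dI = force - gamma * I
    dR = gamma * I + u * S - delta * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
print(f"final mean infection level: {I.mean():.4f}")
```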
Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian
2017-09-01
Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture of when which strategy is optimal and why. Extending earlier work by others and ourselves, we present a mathematical model capturing treatment strategies using two drugs, i.e. the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions determining success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. By using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas with low biological plausibility, as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and by a cost of double resistance considerably smaller than the sum of the costs of single resistance.
Hammami, Naïma; Mertens, Karl; Overholser, Rosanna; Goetghebeur, Els; Catry, Boudewijn; Lambert, Marie-Laurence
2016-05-01
Surveillance of central-line-associated bloodstream infections requires the labor-intensive counting of central-line days (CLDs). This workload could be reduced by sampling. Our objective was to evaluate the accuracy of various sampling strategies in the estimation of CLDs in intensive care units (ICUs) and to establish a set of rules to identify optimal sampling strategies depending on ICU characteristics. We analyzed existing data collected according to the European protocol for patient-based surveillance of ICU-acquired infections in Belgium between 2004 and 2012. CLD data were reported by 56 ICUs in 39 hospitals during 364 trimesters. We compared estimated CLD data obtained from weekly and monthly sampling schemes with the observed exhaustive CLD data over the trimester by assessing the CLD percentage error (i.e., (observed CLDs - estimated CLDs)/observed CLDs). We identified predictors of improved accuracy using linear mixed models. When sampling once per week or 3 times per month, 80% of ICU trimesters had a CLD percentage error within 10%. When sampling twice per week, this was >90% of ICU trimesters. Sampling on Tuesdays provided the best estimations. In the linear mixed model, the observed CLD count was the best predictor of a smaller percentage error. The following sampling strategies provided an estimate within 10% of the actual CLD for 97% of the ICU trimesters with 90% confidence: 3 times per month in an ICU with >650 CLDs per trimester, or each Tuesday in an ICU with >480 CLDs per trimester. Sampling of CLDs provides an acceptable alternative to daily collection of CLD data.
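A small sketch of the accuracy evaluation: estimate trimester CLDs from Tuesday-only counts and compute the CLD percentage error against the exhaustive count. The daily counts below are simulated, not surveillance data.

```python
import numpy as np

rng = np.random.default_rng(7)
days = 91                                    # one trimester
daily_cld = rng.poisson(8, days)             # simulated daily central-line days
observed = daily_cld.sum()                   # exhaustive count

tuesdays = np.arange(1, days, 7)             # sample one fixed weekday
estimated = daily_cld[tuesdays].mean() * days
pct_error = 100 * (observed - estimated) / observed
print(f"observed {observed}, estimated {estimated:.0f}, error {pct_error:+.1f}%")
```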
More efficient evolutionary strategies for model calibration with watershed model for demonstration
NASA Astrophysics Data System (ADS)
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems.
References:
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102, Springer.
Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948, Berlin: Springer.
Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
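A hedged sketch of the surrogate idea (Kern et al., 2006) in a toy (mu, lambda) loop: offspring are ranked by a cheap nearest-neighbour surrogate built from an archive of past true evaluations, and only the most promising fraction receives true (expensive) model runs. Everything here, including the stand-in objective, is illustrative rather than the modified CMAES itself.

```python
import numpy as np

rng = np.random.default_rng(8)

def expensive(x):                        # stand-in for a costly watershed model run
    return ((x - 1.0) ** 2).sum()

def surrogate(x, ax, af, k=3):           # k-nearest-neighbour prediction from the archive
    d = np.linalg.norm(ax - x, axis=1)
    return af[np.argsort(d)[:k]].mean()

dim, lam, mu, frac = 3, 12, 3, 0.5
mean, sigma = np.zeros(dim), 1.0
ax = rng.normal(size=(5, dim))           # archive of points already evaluated
af = np.array([expensive(a) for a in ax])

for gen in range(30):
    off = mean + sigma * rng.normal(size=(lam, dim))
    pred = np.array([surrogate(o, ax, af) for o in off])
    promising = off[np.argsort(pred)[: int(frac * lam)]]
    true = np.array([expensive(o) for o in promising])   # spend real runs on the best half
    ax, af = np.vstack([ax, promising]), np.r_[af, true]
    elite = promising[np.argsort(true)[:mu]]
    mean, sigma = elite.mean(axis=0), 0.95 * sigma       # crude step-size decay
print(f"objective at final mean: {expensive(mean):.3e}")
```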
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled similarly using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
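A toy version of the parametric approach: restrict the policy to a one-parameter family (exponential liquidation), simulate execution costs by Monte Carlo under a linear temporary-impact model with common random numbers, and choose the parameter minimizing expected cost plus a CVaR term. All market constants are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_paths, X0, eta, sigma = 20, 5000, 1.0, 0.1, 0.02
# common random numbers: one set of price-noise paths reused for every candidate
noise = rng.normal(0.0, sigma, (n_paths, T)).cumsum(axis=1)

def exec_costs(kappa):
    w = np.exp(-kappa * np.arange(T))
    trades = X0 * w / w.sum()                        # exponential trade schedule
    return (eta * trades ** 2).sum() + (noise * trades).sum(axis=1)

def objective(kappa, alpha=0.95, risk_aversion=1.0):
    c = exec_costs(kappa)
    cvar = c[c >= np.quantile(c, alpha)].mean()      # mean of the worst 5% of costs
    return c.mean() + risk_aversion * cvar

grid = np.linspace(0.01, 1.0, 25)
best = grid[np.argmin([objective(k) for k in grid])]
print(f"best kappa on the grid: {best:.3f}")
```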
Groves, K E M; Sketris, I; Tett, S E
2003-08-01
Prescription drug samples, as used by the pharmaceutical industry to market their products, are of current interest because of their influence on prescribing, and their potential impact on consumer safety. Very little research has been conducted into the use and misuse of prescription drug samples, and the influence of samples on health policies designed to improve the rational use of medicines. This is a topical issue in the prescription drug debate, with increasing costs and increasing concerns about optimizing use of medicines. This manuscript critically evaluates the research that has been conducted to date about prescription drug samples, discusses the issues raised in the context of traditional marketing theory, and suggests possible alternatives for the future.
NASA Astrophysics Data System (ADS)
Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao
2012-11-01
Energy management (EM) is a core technique of a hybrid electric bus (HEB) for optimizing fuel economy and is unique to the corresponding configuration. Existing control-strategy algorithms seldom take battery power management into account alongside internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to constitute the objective function, which minimizes the fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as a controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research ensures that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits the complicated configuration well.
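A minimal sketch of the instantaneous optimization step, under stated assumptions: at each sample time, grid-search the engine/battery split minimizing engine fuel plus an equivalent fuel cost for battery energy, with a penalty steering SOC toward its window. The fuel map and all constants are invented, not the paper's models.

```python
import numpy as np

def engine_fuel_rate(p_eng):
    # toy engine fuel map in g/s, roughly convex around a best-efficiency point
    return 0.3 + 0.08 * p_eng + 0.002 * (p_eng - 40.0) ** 2 / 40.0

def split_power(p_demand, soc, s_eq=0.09, soc_ref=0.6, w=1.0):
    best = None
    for p_eng in np.linspace(0.0, 90.0, 91):     # candidate engine powers, kW
        p_batt = p_demand - p_eng                # battery covers the remainder
        if not -30.0 <= p_batt <= 30.0:          # battery power limits
            continue
        cost = (engine_fuel_rate(p_eng)          # real fuel burned
                + s_eq * p_batt                  # equivalent fuel cost of battery energy
                - w * (soc - soc_ref) * p_batt)  # steer SOC back toward its window
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_batt)
    return best

print(split_power(p_demand=55.0, soc=0.55))      # (cost, engine kW, battery kW)
```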
Han, Yanxi; Li, Jinming
2017-10-26
In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, the heterogeneity of inter- or intratumors, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them individually can be a "one size fits all". Therefore, in this review, we summarize the strengths and weaknesses of the different sample types that are widely used in clinical practice, offer solutions to reduce the negative impact of the samples, and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.
An Optimized Configuration for the Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Sawant, Hanumant; Faria, Claudio; Stephany, Stephan
The Brazilian Decimetric Array (BDA) is a radio interferometer designed to operate in the frequency ranges of 1.2-1.7, 2.8 and 5.6 GHz and to obtain images of radio sources with high dynamic range. A 5-antenna configuration, implemented in BDA phase I, is already operational. Phase II will provide a 26-antenna configuration forming a compact T-array, whereas phase III will add a further 12 antennas. However, the BDA site has topographic constraints that preclude the placement of these antennas along the lines defined by the 3 arms of the T-array; some antennas must therefore be displaced in a direction slightly transverse to these lines. This work investigates possible optimized configurations for all 38 antennas spread over an area of 2.5 x 1.25 km, the task being to determine the optimal positions of the last 12 antennas. A new optimization strategy was proposed in order to obtain the optimal array configuration. It is based on the entropy of the distribution of the sampled points in the Fourier plane: a stochastic model, Ant Colony Optimization, uses the entropy of this distribution to iteratively refine the candidate solutions. The proposed strategy can be used to determine antenna locations for free-shape arrays so as to provide uniform u-v coverage with minimum redundancy of sampled points in the u-v plane, making the array less susceptible to errors due to unmeasured Fourier components. A different target distribution could also be chosen for the coverage, and the topographic constraints of the available site can be taken into account. Furthermore, the strategy provides an optimal configuration even with the predetermined placement of the 26 antennas that compose the central T-array; in this case, the optimal locations of the last 12 antennas were determined. Performance results for Fourier plane coverage, synthesized beam and sidelobe levels are shown for this optimized BDA configuration and compared to the results of the standard T-array configuration that cannot be implemented due to site constraints.
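As a rough illustration of the entropy criterion, the u-v points generated by a candidate layout can be binned on a grid and their Shannon entropy used as the fitness a stochastic optimizer such as Ant Colony Optimization would maximize. A sketch with an invented random layout (grid size and coordinates are assumptions, not the BDA's):

```python
# Sketch: entropy of the u-v sampling distribution as an array-quality metric,
# using a simple gridded histogram of instantaneous baselines (illustrative only).
import numpy as np

def uv_coverage_entropy(antenna_xy, n_bins=32):
    xy = np.asarray(antenna_xy)
    d = xy[:, None, :] - xy[None, :, :]            # all pairwise baselines
    uv = d[~np.eye(len(xy), dtype=bool)]           # drop zero-length self-pairs
    h, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=n_bins)
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()                  # higher = more uniform coverage

rng = np.random.default_rng(0)
layout = rng.uniform([0, 0], [2500, 1250], size=(38, 2))   # metres, toy site
print(uv_coverage_entropy(layout))
```

An optimizer would perturb antenna positions (respecting the site's forbidden zones and the fixed 26-antenna T-array) and keep moves that raise this entropy.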
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
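A brute-force stand-in for the integer program conveys the idea: each drug has candidate phase 3 designs that differ in sample size, hence in cost and expected net present value, and the portfolio selects one design per drug to maximize total eNPV within the budget. All drug names, costs and values below are invented, and a real implementation would use an IP solver rather than enumeration:

```python
# Sketch: portfolio selection by exhaustive enumeration (illustrative stand-in
# for the paper's integer programming model; all numbers are invented).
from itertools import product

# (cost in $M, expected NPV in $M) per candidate design; index 0 = do not develop.
designs = {
    "drugA": [(0, 0), (120, 300), (180, 420)],   # smaller vs larger sample size
    "drugB": [(0, 0), (90, 150), (140, 260)],
    "drugC": [(0, 0), (200, 380)],
}
BUDGET = 400

best = None
for choice in product(*(range(len(v)) for v in designs.values())):
    cost = sum(designs[d][i][0] for d, i in zip(designs, choice))
    value = sum(designs[d][i][1] for d, i in zip(designs, choice))
    if cost <= BUDGET and (best is None or value > best[0]):
        best = (value, cost, dict(zip(designs, choice)))

print(best)   # (total eNPV, spend, chosen design index per drug)
```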
CAMS as a tool for human factors research in spaceflight
NASA Astrophysics Data System (ADS)
Sauer, Juergen
2004-01-01
The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method is based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were burned beforehand to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits determination of up to 10.0 mg SO4(2-) l(-1) with R.S.D. < 2.5% and a limit of detection of 0.27 mg l(-1). The method was successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions the SIA system processed 27 samples per hour.
Nursing contributions to chronic disease management in primary care.
Lukewich, Julia; Edge, Dana S; VanDenKerkhof, Elizabeth; Tranmer, Joan
2014-02-01
As the prevalence of chronic diseases continues to increase, emphasis is being placed on the development of primary care strategies that enhance healthcare delivery. Innovations include interprofessional healthcare teams and chronic disease management strategies. To determine the roles of nurses working in primary care settings in Ontario and the extent to which chronic disease management strategies have been implemented. We conducted a cross-sectional survey of a random sample of primary care nurses, including registered practical nurses, registered nurses, and nurse practitioners, in Ontario between May and July 2011. Nurses in primary care reported engaging in chronic disease management activities but to different extents depending on their regulatory designation (licensure category). Chronic disease management strategy implementation was not uniform across primary care practices where the nurses worked. There is the potential to optimize and standardize the nursing role within primary care and improve the implementation of chronic disease management strategies.
Optimal Design of River Monitoring Network in Taizihe River by Matter Element Analysis
Wang, Hui; Liu, Zhe; Sun, Lina; Luo, Qing
2015-01-01
The objective of this study is to optimize the river monitoring network in Taizihe River, Northeast China. The state of the network and the water characteristics were studied in this work. Water samples were collected once a month from January 2009 to December 2010 at seventeen sites, and 16 monitoring indexes were analyzed in the field and laboratory. The pH value of the surface water samples was found to be in the range of 6.83 to 9.31, and the average concentrations of NH4+-N, chemical oxygen demand (COD), volatile phenol and total phosphorus (TP) decreased significantly; the water quality of the river improved from 2009 to 2010. Calculation of data availability and of the correlation between adjacent sections showed that the present monitoring network was inefficient and that optimization was indispensable. To improve the situation, matter element analysis and gravity distance were applied to the optimization of the river monitoring network and proved to be a useful method for optimizing a river quality monitoring network. The number of monitoring sections was cut from 17 to 13, making the monitoring network more cost-effective after optimization. The results of this study could be used in developing effective management strategies to improve the environmental quality of Taizihe River. They also show that the proposed model can be effectively used for the optimal design of monitoring networks in river systems. PMID:26023785
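One ingredient of the assessment, correlation between adjacent sections, is easy to illustrate: strongly correlated neighbours carry redundant information and are candidates for merging. A sketch on synthetic monthly data (the 0.95 threshold and the data are illustrative; the study's actual criterion combines data availability, matter element analysis and gravity distance):

```python
# Sketch: flagging redundant adjacent monitoring sections by correlation.
import numpy as np

rng = np.random.default_rng(1)
months, sections = 24, 17
shared = np.cumsum(rng.normal(scale=0.3, size=(months, 1)), axis=0)  # basin-wide signal
noise_sd = np.where(np.arange(sections) < 5, 0.1, 1.0)   # first 5 sections nearly identical
data = shared + rng.normal(scale=noise_sd, size=(months, sections))

corr = np.corrcoef(data.T)
redundant = [(i, i + 1) for i in range(sections - 1) if corr[i, i + 1] > 0.95]
print("adjacent section pairs that could be merged:", redundant)
```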
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data. The simulation, implemented in MATLAB, controls the inventory level as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume to purchase, and that the inventory level followed the given set point.
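A minimal certainty-equivalence version of such a controller can be sketched directly: with inventory dynamics x[t+1] = x[t] + u[t] - d[t] and random demand d, order just enough each period that the expected next inventory hits the set point. All numbers are illustrative, and the robustification in the paper's formulation is omitted:

```python
# Sketch: one-step predictive control of an inventory with random demand.
import numpy as np

rng = np.random.default_rng(2)
setpoint, x = 100.0, 80.0
mean_demand = 20.0
history = []

for t in range(12):
    u = max(0.0, setpoint - x + mean_demand)   # order so that E[x[t+1]] = setpoint
    d = rng.poisson(mean_demand)               # realized random demand
    x = x + u - d                              # inventory balance
    history.append((t, round(u, 1), x))

print(history)
```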
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
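The loop itself is compact: fit a Gaussian process to the (step frequency, measured cost) pairs collected so far, then measure next wherever expected improvement is highest. A self-contained sketch with a toy cost landscape standing in for metabolic measurements (kernel hyperparameters and the noise level are assumptions):

```python
# Sketch: Bayesian optimization with a numpy Gaussian process and expected
# improvement, minimizing a noisy toy stand-in for metabolic cost.
import numpy as np
from scipy.stats import norm

np.random.seed(0)

def cost(f):                                   # toy cost landscape, minimum near 1.7 Hz
    return (f - 1.7) ** 2 + 0.05 * np.random.randn()

def gp_posterior(X, y, Xs, ls=0.15, sf=1.0, sn=0.05):
    k = lambda a, b: sf * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    L = np.linalg.cholesky(k(X, X) + sn**2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

X = np.array([1.4, 2.0]); y = np.array([cost(x) for x in X])   # two seed trials
grid = np.linspace(1.2, 2.2, 201)
for _ in range(8):                             # eight measurement iterations
    mu, sd = gp_posterior(X, y, grid)
    z = (y.min() - mu) / sd
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    xn = grid[np.argmax(ei)]
    X, y = np.append(X, xn), np.append(y, cost(xn))

print("estimated best step frequency (Hz):", X[np.argmin(y)])
```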
Limited-sampling strategies for anti-infective agents: systematic review.
Sprague, Denise A; Ensom, Mary H H
2009-09-01
Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or the characteristics of included patients, which limited their generalizability. Many limited-sampling strategies have been developed for anti-infective agents that do not have a clearly established link between AUC and clinical outcomes in humans. Future studies should first determine if there is an association between AUC monitoring and clinical outcomes. Thereafter, it may be worthwhile to prospectively develop and validate a limited-sampling strategy for the particular anti-infective agent in a similar population.
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
Optimal vaccination strategies and rational behaviour in seasonal epidemics.
Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C
2016-12-01
We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.
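For concreteness, a seasonal SIRS system with a time-dependent vaccination term can be integrated directly; the sketch below uses an illustrative pulsed vaccination rate, not the optimal or Nash strategies derived in the paper (all rates are invented, in units of 1/year):

```python
# Sketch: SIRS dynamics with seasonal transmission and pulsed vaccination.
import numpy as np
from scipy.integrate import solve_ivp

beta = lambda t: 52.0 * (1 + 0.3 * np.cos(2 * np.pi * t))   # seasonal transmission
nu   = lambda t: 0.4 if 0.2 < (t % 1.0) < 0.5 else 0.0      # pulsed vaccination rate
gamma, delta = 26.0, 2.0        # recovery rate, immunity-waning rate

def sirs(t, x):
    S, I, R = x
    dS = -beta(t) * S * I + delta * R - nu(t) * S
    dI = beta(t) * S * I - gamma * I
    dR = gamma * I - delta * R + nu(t) * S
    return [dS, dI, dR]

sol = solve_ivp(sirs, (0.0, 10.0), [0.95, 0.05, 0.0], max_step=0.01)
print("final S, I, R:", sol.y[:, -1].round(4))
```

The optimization problems in the paper then act on nu(t): minimizing total vaccination effort subject to the infection decaying, or finding the rate at which individual and collective incentives balance.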
Adaptive sampling strategies with high-throughput molecular dynamics
NASA Astrophysics Data System (ADS)
Clementi, Cecilia
Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
NASA Astrophysics Data System (ADS)
Mota, Mariana F. B.; Gama, Ednilton M.; Rodrigues, Gabrielle de C.; Rodrigues, Guilherme D.; Nascentes, Clésia C.; Costa, Letícia M.
2018-01-01
In this work, a dilute-and-shoot method was developed for Ca, P, S and Zn determination in new and used lubricating oil samples by total reflection X-ray fluorescence (TXRF). The oil samples were diluted with organic solvents, yttrium was added as internal standard, and the TXRF measurements were performed after solvent evaporation. The method was optimized using an interlaboratory reference material. The experimental parameters evaluated were sample volume (50 or 100 μL), measurement time (250 or 500 s) and the volume deposited on the quartz glass sample carrier (5 or 10 μL); all were evaluated and optimized using xylene, kerosene and hexane. Analytical figures of merit (accuracy, precision, limits of detection and quantification) were used to evaluate the performance of the analytical method for all solvents. Recovery rates varied from 99 to 111% and the relative standard deviation remained between 1.7% and 10% (n = 8). For all elements, the results obtained with the new method were in agreement with the certified values. After the validation step, the method was applied to Ca, P, S and Zn quantification in eight new and four used lubricating oil samples, for all solvents. The concentrations of the elements in the samples varied in the ranges 1620-3711 mg L-1 for Ca, 704-1277 mg L-1 for P, 2027-9147 mg L-1 for S, and 898-1593 mg L-1 for Zn. The association of TXRF with a dilute-and-shoot sample preparation strategy was efficient for Ca, P, S and Zn determination in lubricating oils, giving accurate results. Additionally, the time required for analysis is short, reagent volumes are low, minimizing waste generation, and the technique does not require calibration curves.
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
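The exact calculation referred to is hypergeometric: if a player marks n of the 80 numbers and the casino draws 20, the number of catches follows a hypergeometric law. A sketch (the payout table is omitted, so this stops at the catch probabilities):

```python
# Sketch: exact catch probabilities for Keno (80 numbers, 20 drawn, n marked).
from scipy.stats import hypergeom

n = 6                                   # numbers the player marks
pk = {k: hypergeom(80, n, 20).pmf(k) for k in range(n + 1)}
print(pk)   # weight these by a payout table to get the bet's expected value
```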
Welton, Nicky J; Madan, Jason; Ades, Anthony E
2011-09-01
Reimbursement decisions are typically based on cost-effectiveness analyses. While a cost-effectiveness analysis can identify the optimum strategy, there is usually some degree of uncertainty around this decision. Sources of uncertainty include statistical sampling error in treatment efficacy measures, underlying baseline risk, utility measures and costs, as well as uncertainty in the structure of the model. The optimal strategy is therefore only optimal on average, and a decision to adopt this strategy might still be the wrong decision if all uncertainty could be eliminated. This means that there is a quantifiable expected (average) loss attaching to decisions made under uncertainty, and hence a value in collecting information to reduce that uncertainty. Value of information (VOI) analyses can be used to provide guidance on whether more research would be cost-effective, which particular model inputs (parameters) have the most bearing on decision uncertainty, and can also help with the design and sample size of further research. Here, we introduce the key concepts in VOI analyses, and highlight the inputs required to calculate it. The adoption of the new biologic treatments for RA and PsA tends to be based on placebo-controlled trials. We discuss the possible role of VOI analyses in deciding whether head-to-head comparisons of the biologic therapies should be carried out, illustrating with examples from other fields. We emphasize the need for a model of the natural history of RA and PsA, which reflects a consensus view.
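The headline VOI quantity, the expected value of perfect information (EVPI), is the gap between deciding after uncertainty is resolved and deciding on current expectations. A Monte Carlo sketch with an invented two-treatment net-benefit model (all distributions and the willingness-to-pay threshold are assumptions):

```python
# Sketch: Monte Carlo EVPI for a two-strategy reimbursement decision.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
d_effect = rng.normal(0.12, 0.05, n)      # incremental QALYs of B vs A (uncertain)
d_cost = rng.normal(2000, 600, n)         # incremental cost of B vs A (uncertain)
wtp = 30_000                              # willingness to pay per QALY

nb = np.column_stack([np.zeros(n), wtp * d_effect - d_cost])  # net benefit of A, B
ev_current = nb.mean(axis=0).max()        # pick the strategy best on average
ev_perfect = nb.max(axis=1).mean()        # pick the best strategy per realization
print("EVPI per patient:", ev_perfect - ev_current)
```

Parameter-specific variants (EVPPI) and the expected value of a specific head-to-head trial follow the same pattern with partial conditioning.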
Design of optimal groundwater remediation systems under flexible environmental-standard constraints.
Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing
2015-01-01
In developing optimal groundwater remediation strategies, limited effort has been devoted to handling uncertainty in environmental quality standards. When such uncertainty is not considered, either overly optimistic or overly pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. The approach not only prevents the formulation of overly optimistic or overly pessimistic optimization strategies but also provides a satisfaction level indicating the degree to which the environmental quality standard is satisfied. It may therefore be expected to be significantly more acceptable to decision makers than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or a long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a diminishing effect on the optimal remediation strategy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search, and we impose an additional constraint to avoid the creation of discontinuous regions. Our algorithm successfully removed all bony skull from a sample of de-identified MR brain images acquired from different model scanners. Compared with well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), the proposed algorithm yields superior results and computational performance, suggesting its potential for clinical applications.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2014-01-01
Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target performance design-oriented, quick rough estimation of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative match with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper. PMID:26257939
Evans, Scott R; Rubin, Daniel; Follmann, Dean; Pennello, Gene; Huskins, W Charles; Powers, John H; Schoenfeld, David; Chuang-Stein, Christy; Cosgrove, Sara E; Fowler, Vance G; Lautenbach, Ebbing; Chambers, Henry F
2015-09-01
Clinical trials that compare strategies to optimize antibiotic use are of critical importance but are limited by competing risks that distort outcome interpretation, complexities of noninferiority trials, large sample sizes, and inadequate evaluation of benefits and harms at the patient level. The Antibacterial Resistance Leadership Group strives to overcome these challenges through innovative trial design. Response adjusted for duration of antibiotic risk (RADAR) is a novel methodology utilizing a superiority design and a 2-step process: (1) categorizing patients into an overall clinical outcome (based on benefits and harms), and (2) ranking patients with respect to a desirability of outcome ranking (DOOR). DOORs are constructed by assigning higher ranks to patients with (1) better overall clinical outcomes and (2) shorter durations of antibiotic use for similar overall clinical outcomes. DOOR distributions are compared between antibiotic use strategies. The probability that a randomly selected patient will have a better DOOR if assigned to the new strategy is estimated. DOOR/RADAR represents a new paradigm in assessing the risks and benefits of new strategies to optimize antibiotic use. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
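The final DOOR comparison reduces to a win-probability estimate between the two arms' rank distributions. A sketch on invented ordinal outcomes (the published methodology builds the ranks from clinical outcome plus antibiotic duration and adds confidence intervals; here ties are simply split evenly):

```python
# Sketch: estimating P(new strategy gives a better DOOR than control).
import numpy as np

rng = np.random.default_rng(4)
new = rng.integers(1, 6, 120)   # DOOR levels, coded so higher = better outcome
ctl = rng.integers(1, 6, 115)

wins = (new[:, None] > ctl[None, :]).sum()
ties = (new[:, None] == ctl[None, :]).sum()
p = (wins + 0.5 * ties) / (len(new) * len(ctl))
print("P(new better), ties split:", round(p, 3))   # 0.5 means no difference
```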
Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm
NASA Astrophysics Data System (ADS)
Mahdavi, Seyed Hossein; Razak, Hashim Abdul
2016-06-01
This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local operation of mutation is introduced in addition with regeneration and reintroduction operators. It is concluded that different characteristics of applied force influence the features of structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, the reliable OSP strategy prior to the time-domain identification will be achieved by those methods dealing with minimizing the distance of simulated responses for the entire system and condensed system considering the force effects. The numerical and experimental verification on the effectiveness of the proposed strategy demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of computational cost and the accuracy of identification. It is deduced that the robustness of the proposed OSP algorithm lies in the precise and fast fitness evaluation at larger sampling rates which result in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable to the fabrication of metal parts for electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized through a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system comprising paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors on the ultimate tensile strength of the green samples can be ranked as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor affecting hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has advantages over traditional approaches in cost, efficiency, and simplicity.
A Numerical Climate Observing Network Design Study
NASA Technical Reports Server (NTRS)
Stammer, Detlef
2003-01-01
This project was concerned with three related questions in the optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as anticipated now. 3. The more general design of a climate observing system as required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is the sampling density required to observe climate-related variations in the ocean. For that purpose this project was concerned with the sampling requirements for the ARGO float system, but it also investigated other elements of a climate observing system. As part of this project we studied the horizontal and vertical sampling characteristics of a global ARGO system required to make it fully complementary to altimeter data, with the goal of capturing climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6-degree horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Model results sampled by a synthetic float array will therefore allow testing and improving existing deployment strategies with the goal of making the system as optimal and cost-efficient as possible. Attachment: "Optimal observations for variational data assimilation".
NASA Astrophysics Data System (ADS)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
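The surrogate idea is straightforward to sketch: sample the simulator at candidate source parameters, fit a regression model to the responses, and let the optimizer query the cheap model instead. The sketch below uses scikit-learn's SVR on a toy analytic 'simulator' (the paper's simulator is a multiphase DNAPL transport model, and the KELM surrogate is not shown here):

```python
# Sketch: training an SVR surrogate of an expensive simulator (toy function).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.uniform([0, 0, 1], [100, 50, 10], size=(300, 3))   # source x, y, release rate
y = X[:, 2] * np.exp(-0.002 * ((X[:, 0] - 60) ** 2 + (X[:, 1] - 25) ** 2))  # toy response

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
surrogate = SVR(C=100.0, gamma="scale").fit(Xtr, ytr)
print("surrogate R^2 on held-out simulator runs:", surrogate.score(Xte, yte))
```

The paper's point about training-set size shows up here too: the held-out score, not the raw number of samples, is what certifies the surrogate before it replaces the simulator inside the optimization loop.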
Finding an optimal strategy for measuring the quality of groundwater as a source for drinking water
NASA Astrophysics Data System (ADS)
van Driezum, Inge; Saracevic, Ernis; Scheibz, Jürgen; Zessner, Matthias; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul
2015-04-01
A good chemical and microbiological water quality is of great importance in riverbank filtration systems that are used as public water supplies. Water quality is ideally monitored frequently at the drinking water well using a steady pumping rate. Monitoring source water (such as groundwater), however, can be more challenging. First, piezometers should be drilled in the correct layer of the aquifer. Second, the sampling design should include all preferred parameters (microbiological and chemical) and should also take the hydrological conditions into account. In this study, we made use of different geophysical techniques (ERT and FDEM) to select the optimal placement of the piezometers. We also designed a sampling strategy that can be used to sample fecal indicators, biostability parameters, standard chemical parameters and a wide range of micropollutants. Several time series experiments were carried out in the study site, the Porous GroundWater Aquifer (PGWA), an urban floodplain extending on the left bank of the river Danube downstream of the City of Vienna, Austria. The upper layer of the PGWA consists of silt and has a thickness of 1 to 6 m. The underlying confined aquifer consists of sand and gravel and has a thickness of between 3 and 15 m. Hydraulic conductivities range from 5 x 10^-2 m/s to 5 x 10^-5 m/s. Underneath the aquifer are alternating sand and clay/silt layers. Escherichia coli, enterococci and aerobic spores were measured as fecal markers. Biostability was measured using leucine incorporation. Additionally, several micropollutants and standard chemical parameters were measured. Results showed that physical and chemical parameters stayed stable in all monitoring wells during extended purging; a similar trend was observed for E. coli and enterococci. In the wells close to the river, aerobic spores and leucine incorporation decreased after 30 min of pumping, whereas the well close to the backwater showed a different pattern. Overall, purging for 45 minutes was the optimal sampling procedure for the microbiological parameters. Samples for the detection of micropollutants were taken after 15 min of purging.
Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.
Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R
2014-01-01
After a potential biological incident the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we have developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0-60% and were significantly higher from nonporous surfaces compared to porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, which are easy to perform in a laboratory with basic facilities. The data indicate that different swabbing procedures were optimal for each of the tested swab types, and the particular swab preference depends on the surface material to be swabbed.
NASA Astrophysics Data System (ADS)
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
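For orientation, the scale-function formulas this literature works with take the following standard form (notation assumed, following the cited spectrally negative Lévy setting: q > 0 is the discount rate and W^{(q)} the q-scale function of the risk process); the barrier strategy at level b pays out all surplus above b:

```latex
% Value of the barrier strategy at level b, and the candidate optimal barrier.
\[
  V_b(x) =
  \begin{cases}
    \dfrac{W^{(q)}(x)}{W^{(q)\prime}(b)}, & 0 \le x \le b,\\[2ex]
    x - b + \dfrac{W^{(q)}(b)}{W^{(q)\prime}(b)}, & x > b,
  \end{cases}
  \qquad
  b^{*} = \sup\bigl\{ b \ge 0 : W^{(q)\prime}(b) \le W^{(q)\prime}(a)\ \text{for all } a \ge 0 \bigr\}.
\]
```

The sufficient conditions discussed in the abstract amount to conditions on W^{(q)'} (such as convexity on a suitable interval) under which the barrier value V_{b*} dominates that of every other admissible dividend strategy.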
NASA Astrophysics Data System (ADS)
Qi, Shengqi; Hou, Deyi; Luo, Jian
2017-09-01
This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
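The optimization model described here, a finite-state, finite-action, infinite-horizon Markov decision process, is the classic setting for value iteration. A toy sketch with invented vigor states, defoliation actions, transition probabilities and yields (not the paper's calibrated production model):

```python
# Sketch: value iteration for a small defoliation MDP (all numbers invented).
import numpy as np

# P[a][s, s']: transition probabilities; R[a][s]: expected yield in state s.
P = {
    "rest":  np.array([[0.5, 0.5, 0.0],
                       [0.3, 0.6, 0.1],
                       [0.0, 0.3, 0.7]]),
    "heavy": np.array([[1.0, 0.0, 0.0],
                       [0.7, 0.3, 0.0],
                       [0.1, 0.5, 0.4]]),
}
R = {"rest": np.array([0.0, 0.0, 0.0]),
     "heavy": np.array([0.0, 1.0, 2.0])}   # vigorous plants yield more biomass
gamma = 0.95                                # discount factor

V = np.zeros(3)
for _ in range(500):                        # value iteration to convergence
    V = np.max([R[a] + gamma * P[a] @ V for a in P], axis=0)
policy = [max(P, key=lambda a: (R[a] + gamma * P[a] @ V)[s]) for s in range(3)]
print(V.round(2), policy)   # with these numbers: rest weak plants, cut the rest
```

Changing gamma mimics the discount-rate experiments in the abstract, and reweighting the transition matrices mimics different climatic regimes.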
Optimizing the diagnostic testing of Clostridium difficile infection.
Bouza, Emilio; Alcalá, Luis; Reigadas, Elena
2016-09-01
Clostridium difficile infection (CDI) is the leading cause of hospital-acquired diarrhea and is associated with a considerable health and cost burden. However, there is still no clear consensus on the best laboratory diagnostic approach, and a wide variety of testing methods and strategies can be encountered. We aim to review the most practical aspects of CDI diagnosis, providing our own view on how to optimize it. Expert commentary: Laboratory diagnosis in search of C. difficile toxins should be applied to all diarrheic fecal samples reaching the microbiology laboratory from patients > 2 years old, with or without classic risk factors for CDI. Detection of toxins, either directly in the fecal sample or in the bacteria isolated in culture, confirms CDI in the proper clinical setting. Nucleic acid amplification tests (NAATs) allow the process to be sped up, with epidemiological and therapeutic consequences.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practice, fault-monitoring signals from ship power equipment provide few samples, and the data features are non-linear. This paper adopts the least squares support vector machine (LSSVM) to deal with the problem of fault pattern identification from small-sample data. Meanwhile, to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the searching step length, which effectively solves the slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.
Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L
2016-07-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training. Copyright © 2015 Cognitive Science Society, Inc.
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
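The closed-form flavor of APM can be conveyed with a linearized toy model: if a Gaussian error vector shifts pencil-beam weights linearly, the mean and covariance of the dose follow exactly, with no scenario sampling. This is a simplified analogue only (APM itself pushes the Gaussians through the pencil-beam dose calculation in closed form), and all matrices below are invented:

```python
# Sketch: exact moments of dose under Gaussian input uncertainty, linear model.
import numpy as np

rng = np.random.default_rng(6)
D = rng.uniform(0, 1, size=(5, 3))     # dose-influence: 5 voxels x 3 pencil beams
w = np.array([1.0, 2.0, 1.5])          # nominal pencil-beam weights
J = rng.normal(0, 0.1, size=(3, 2))    # linear sensitivity to (range, setup) errors
mu = np.zeros(2)                       # mean error
Sigma = np.diag([1.5**2, 2.0**2])      # error covariance (mm^2)

# Dose model d = D @ (w + J @ e) with e ~ N(mu, Sigma):
mean_dose = D @ (w + J @ mu)
cov_dose = D @ J @ Sigma @ J.T @ D.T   # exact, no sampling required
print(mean_dose.round(3), np.sqrt(np.diag(cov_dose)).round(3))
```

Fractionation enters the same way: independent per-fraction errors add covariances, so the cost of propagation stays constant in the number of fractions, which is the scaling advantage the abstract reports.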
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
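For readers unfamiliar with the black-box coupling pattern DAKOTA implements, the sketch below imitates one of its simplest study types, Latin hypercube sampling over a simulation treated as an opaque function, in plain Python. It illustrates the pattern only, not DAKOTA's input syntax or API; the `simulation` function and parameter ranges are hypothetical.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for an external simulation code that DAKOTA would launch
# through its simulation interface (e.g. a fork/system call to a driver).
def simulation(x1, x2):
    return (x1 - 0.3)**2 + 3.0 * np.sin(5.0 * x2)

# An LHS uncertainty-quantification study: 100 space-filling samples in 2D.
sampler = qmc.LatinHypercube(d=2, seed=1)
unit = sampler.random(n=100)
lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])
X = qmc.scale(unit, lower, upper)

# Collect results and report summary statistics, as a UQ study would.
y = np.array([simulation(*row) for row in X])
print(f"mean={y.mean():.4f}  std={y.std(ddof=1):.4f}  min={y.min():.4f}")
```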
Limited sampling strategies to predict the area under the concentration-time curve for rifampicin.
Medellín-Garibay, Susanna E; Correa-López, Tania; Romero-Méndez, Carmen; Milán-Segovia, Rosa C; Romano-Moreno, Silvia
2014-12-01
Rifampicin (RMP) is the most effective first-line antituberculosis drug. One of the most critical aspects of using it in fixed-dose combination formulations is to ensure it reaches therapeutic levels in blood. The determination of the area under the concentration-time curve (AUC) and appropriate dose adjustment of this drug may contribute to optimization of therapy. Although the maximal concentration (Cmax) of RMP also predicts its sterilizing effect, the time to reach it (Tmax) ranges from 40 minutes to 6 hours. The aim of this study was to develop a limited sampling strategy (LSS) to assist therapeutic drug monitoring of RMP. Full concentration-time curves were obtained from 58 patients with tuberculosis (TB) after the oral administration of RMP in a fixed-dose combination formulation. A validated high-performance liquid chromatographic method was used. Pharmacokinetic parameters were estimated with a noncompartmental model. Generalized linear models were obtained by forward steps, and bootstrapping was performed to develop an LSS to predict the AUC from time 0 to the last concentration measured at 24 hours postdose (AUC0-24). The predictive performance of the proposed models was assessed using RMP profiles from 25 other TB patients by comparing predicted and observed AUC0-24. The mean AUC0-24 in the current study was 91.46 ± 36.7 mg·h/L, and the most convenient sampling time points to predict it were 2, 4 and 12 hours postdose (slope [m] = 0.955 ± 0.06; r = 0.92). The mean prediction error was -0.355%, and the root mean square error was 5.6% in the validation group. Alternate LSSs are proposed with 2 of these sampling time points, which also provide good predictions when the 3 most convenient are not feasible. The AUC0-24 for RMP in TB patients can be predicted with acceptable precision through a 2- or 3-point sampling strategy, despite wide interindividual variability. These LSSs could be applied in clinical practice to optimize anti-TB therapy based on therapeutic drug monitoring.
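A limited sampling strategy of this kind boils down to a small linear model. The sketch below shows how a 3-point model AUC0-24 ≈ b0 + b1·C2 + b2·C4 + b3·C12 would be fitted and applied; the training data and resulting coefficients are purely hypothetical, since the paper's fitted coefficients are not reproduced in the abstract.

```python
import numpy as np

# Hypothetical training data: concentrations (mg/L) at 2, 4 and 12 h postdose,
# and reference AUC0-24 (mg*h/L) from full concentration-time profiles.
C = np.array([[9.8, 6.1, 1.2],
              [12.3, 8.0, 2.1],
              [7.5, 4.9, 0.8],
              [10.6, 7.2, 1.7],
              [8.9, 5.5, 1.0]])
auc = np.array([88.0, 121.0, 70.0, 103.0, 81.0])

# Fit AUC0-24 = b0 + b1*C2 + b2*C4 + b3*C12 by ordinary least squares.
A = np.column_stack([np.ones(len(auc)), C])
coef, *_ = np.linalg.lstsq(A, auc, rcond=None)

def predict_auc(c2, c4, c12):
    """Predict AUC0-24 from the three sampling time points."""
    return coef @ np.array([1.0, c2, c4, c12])

print(predict_auc(10.0, 6.5, 1.5))  # predicted AUC0-24 for a new patient
```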
Development of Medical Technology for Contingency Response to Marrow Toxic Agents
2014-10-30
mismatches may differ in their impact on transplant outcome, therefore, it is important to identify and quantify the influence of specific HLA ...evaluate HLA disparity and impact on HSC transplantation by adding selected pairs to the Donor/Recipient Pair project utilizing sample selection...to assay the impact of DNA-based HLA matching on unrelated donor transplant outcome, develop strategies for optimal HLA matching, evaluate the
Carnovale, Carla; Brusadelli, Tatiana; Zuccotti, GianVincenzo; Beretta, Silvia; Sullo, Maria Giuseppa; Capuano, Annalisa; Rossi, Francesco; Moschini, Martina; Mugelli, Alessandro; Vannacci, Alfredo; Laterza, Marcella; Clementi, Emilio; Radice, Sonia
2014-09-01
To gain information on the safety of drugs used in pediatrics, a 4-year post-marketing active pharmacovigilance program was carried out. The program sampled the Italian population and was termed 'Monitoring of the Adverse Effects in Pediatric population' (MEAP). Adverse drug reactions (ADRs) were collected for individuals aged 0-17 years treated in hospitals and territorial health services in Lombardy, Tuscany, Apulia and Campania, regions chosen to obtain an appropriate sampling of the population. ADRs were evaluated using the Adverse Drug Reaction Probability Scale (Naranjo) and analyzed with respect to time, age, sex, category of ADR, seriousness, suspected medicines, type of reporter and off-label use. We collected and analyzed reports from 3539 ADRs. Vaccines, antineoplastic and psychotropic drugs were the pharmacotherapeutic subgroups most frequently involved. Seventeen percent of reported ADRs were serious; of these, fever, vomiting and angioedema were the most frequently reported. Eight percent of ADRs were associated with off-label use, and 10% were unknown ADRs. Analysis of these revealed possible strategies for therapy optimization. The MEAP project demonstrated that active post-marketing pharmacovigilance programs are a valid strategy to increase awareness of pediatric pharmacology, reduce underreporting and provide information on drug actions in pediatrics. This information enhances drug therapy optimization in pediatric patients.
Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar
2015-06-01
Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
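The design loop described here can be sketched end to end under simplifying assumptions: a one-compartment oral-absorption model with a hypothetical population prior, finite-difference sensitivities for the Fisher information matrix, and scipy's differential evolution standing in for the paper's genetic algorithm. None of the numbers below come from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

DOSE, SIGMA = 100.0, 0.5          # dose (mg) and assay SD (mg/L), hypothetical

def conc(t, ka, ke, V):
    """One-compartment model with first-order absorption (F = 1)."""
    return DOSE * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta):
    """Fisher information at one parameter draw, finite-difference sensitivities."""
    g = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        h = 1e-5 * theta[j]
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h; tm[j] -= h
        g[:, j] = (conc(times, *tp) - conc(times, *tm)) / (2 * h)
    return g.T @ g / SIGMA**2

rng = np.random.default_rng(2)
# Population prior: log-normal spread around ka=1.2/h, ke=0.15/h, V=30 L.
draws = np.exp(rng.normal(np.log([1.2, 0.15, 30.0]), 0.2, size=(100, 3)))

def worst_variance(times):
    """Minimax criterion: largest parameter variance from the MC-averaged FIM."""
    times = np.sort(times)
    mean_fim = np.mean([fim(times, th) for th in draws], axis=0)
    cov = np.linalg.inv(mean_fim + 1e-12 * np.eye(3))  # jitter guards singularity
    return np.max(np.diag(cov))

res = differential_evolution(worst_variance, bounds=[(0.25, 24.0)] * 3,
                             seed=2, maxiter=20, tol=1e-8)
print(np.sort(res.x))   # optimized sampling times (h)
```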
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
Dispositional optimism and coping strategies in patients with a kidney transplant.
Costa-Requena, Gemma; Cantarell-Aixendri, M Carmen; Parramon-Puig, Gemma; Serón-Micas, Daniel
2014-01-01
Dispositional optimism is a personal resource that determines the coping style and adaptive response to chronic diseases. The aim of this study was to assess the correlations between dispositional optimism and coping strategies in patients with recent kidney transplantation and evaluate the differences in the use of coping strategies in accordance with the level of dispositional optimism. Patients who were hospitalised in the nephrology department were selected consecutively after kidney transplantation was performed. The evaluation instruments were the Life Orientation Test-Revised, and the Coping Strategies Inventory. The data were analysed with central tendency measures, correlation analyses and means were compared using Student’s t-test. 66 patients with a kidney transplant participated in the study. The coping styles that characterised patients with a recent kidney transplantation were Social withdrawal and Problem avoidance. Correlations between dispositional optimism and coping strategies were significant in a positive direction in Problem-solving (p<.05) and Cognitive restructuring (p<.01), and inversely with Self-criticism (p<.05). Differences in dispositional optimism created significant differences in the Self-Criticism dimension (t=2.58; p<.01). Dispositional optimism scores provide differences in coping responses after kidney transplantation. Moreover, coping strategies may influence the patient’s perception of emotional wellbeing after kidney transplantation.
NASA Astrophysics Data System (ADS)
Hütsi, Gert; Gilfanov, Marat; Kolodzig, Alexander; Sunyaev, Rashid
2014-12-01
We investigate the potential of large X-ray-selected AGN samples for detecting baryonic acoustic oscillations (BAO). Though AGN selection in the X-ray band is very clean and efficient, it does not provide redshift information and thus needs to be complemented with an optical follow-up. The main focus of this study is (i) to find the requirements needed for the quality of the optical follow-up and (ii) to formulate the optimal strategy of the X-ray survey, in order to detect the BAO. We demonstrate that a redshift accuracy of σ0 = 10^-2 at z = 1 and a catastrophic failure rate of f_fail ≲ 30% are sufficient for a reliable detection of BAO in future X-ray surveys. Spectroscopic-quality redshifts (σ0 = 10^-3 and f_fail ~ 0) will boost the confidence level of the BAO detection by a factor of ~2. For a meaningful detection of BAO, X-ray surveys of moderate depth of F_lim ~ a few × 10^-15 erg s^-1 cm^-2, covering a sky area from a few hundred to ~ten thousand square degrees, are required. The optimal strategy for BAO detection does not necessarily require full sky coverage. For example, in a 1000-day-long survey by an eROSITA-type telescope, an optimal strategy would be to survey a sky area of ~9000 deg^2, yielding a ~16σ BAO detection. A similar detection will be achieved by ATHENA+ or WFXT class telescopes in a survey with a duration of 100 days, covering a similar sky area. XMM-Newton can achieve a marginal BAO detection in a 100-day survey covering ~400 deg^2. These surveys would demand a moderate-to-high cost in terms of the optical follow-up, requiring determination of redshifts of ~10^5 (XMM-Newton) to ~3 × 10^6 objects (eROSITA, ATHENA+, and WFXT) in these sky areas.
Wu, Yiman; Li, Liang
2012-12-18
For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting material from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting material for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at a detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with 12C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with 13C-dansylation. Equal amounts of the 12C-labeled individual sample and the 13C-labeled pooled urine standard were mixed for LC-MS analysis. This way of normalizing concentration among different samples with varying total metabolite concentrations was found to be critical for generating reliable metabolome profiles for comparison.
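The normalization step at the heart of this workflow is simple arithmetic once each sample's total labeled-metabolite concentration is known: pooling volumes scale inversely with concentration so that every sample contributes an equal amount. A small sketch with hypothetical sample names and concentrations:

```python
# Hypothetical LC-UV total concentrations of 12C-dansyl-labeled metabolites (mM,
# i.e. umol/mL); the sample names and values are illustrative only.
conc = {"urine_A": 4.2, "urine_B": 2.1, "urine_C": 6.3}

target_amount_umol = 2.0   # equal amount each sample contributes to the pool

# volume (uL) = 1000 * amount (umol) / concentration (umol/mL)
volumes_uL = {s: 1000.0 * target_amount_umol / c for s, c in conc.items()}
print(volumes_uL)  # lower-concentration samples contribute larger volumes
```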
Montesdeoca-Esponda, Sarah; Sosa-Ferrera, Zoraida; Kabir, Abuzar; Furton, Kenneth G; Santana-Rodríguez, José Juan
2015-10-01
A fast and sensitive sample preparation strategy using fabric phase sorptive extraction followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection has been developed to analyse benzotriazole UV stabilizer compounds in aqueous samples. Benzotriazole UV stabilizers are a group of compounds added to sunscreens and other personal care products which may have detrimental effects on aquatic ecosystems. Fabric phase sorptive extraction is a novel solvent-minimized sample preparation approach that integrates the advantages of sol-gel derived hybrid inorganic-organic nanocomposite sorbents and the flexible, permeable and hydrophobic surface chemistry of polyester fabric. It is a highly sensitive, fast, efficient and inexpensive device that can be reused and does not suffer from coating damage, unlike SPME fibres or stir bars. In this paper, we optimized the extraction of seven benzotriazole UV filters, evaluating most of the parameters involved in the extraction process, such as sorbent chemistry selection, extraction time, back-extraction solvent, back-extraction time and the impact of ionic strength. Under the optimized conditions, fabric phase sorptive extraction allows enrichment factors of 10 times, with detection limits ranging from 6.01 to 60.7 ng L^-1 and intra- and inter-day RSDs lower than 11% and 30%, respectively, for all compounds. The optimized sample preparation technique followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection was applied to determine the target analytes in sewage samples from wastewater treatment plants with different purification processes on Gran Canaria Island (Spain). Two UV stabilizer compounds were measured in the ranges 17.0-60.5 ng mL^-1 (UV 328) and 69.3-99.2 ng mL^-1 (UV 360) in the three sewage water samples analysed.
Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul
2013-03-01
The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim to develop a uniform sampling design that allows species richness, abundance and composition to be confidently estimated at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) sample for 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to suitably record the local species composition, and (4) separate trap groups by a distance greater than 5-10 km to avoid spatial autocorrelation. For the evaluation of other sampling protocols we recommend, first, identifying the elements of the sampling design that could affect the sampling effort (the number of traps, sampling duration, type and proportion of bait) and their spatial distribution (spatial arrangement of the traps), and then evaluating how they affect richness, abundance and species composition estimates.
An adaptive sharing elitist evolution strategy for multiobjective optimization.
Costa, Lino; Oliveira, Pedro
2003-01-01
Almost all approaches to multiobjective optimization are based on Genetic Algorithms (GAs), and implementations based on Evolution Strategies (ESs) are very rare. Thus, it is crucial to investigate how ESs can be extended to multiobjective optimization, since they have, in the past, proven to be powerful single objective optimizers. In this paper, we present a new approach to multiobjective optimization, based on ESs. We call this approach the Multiobjective Elitist Evolution Strategy (MEES) as it incorporates several mechanisms, like elitism, that improve its performance. When compared with other algorithms, MEES shows very promising results in terms of performance.
Using Twitter to recruit participants for health research: An example from a caregiving study.
Wasilewski, Marina Bastawrous; Stinson, Jennifer N; Webster, Fiona; Cameron, Jill Irene
2018-05-01
Twitter has the potential to optimize research conduct, but more research is needed around the nature of study-related tweets and strategies for optimizing reach. In the context of our caregiving study, we aimed to describe the nature and extent of study-related tweets, the extent to which they were shared by others, and their potential reach. To do so, we conducted a secondary analysis of our Twitter recruitment. We aggregated and categorized study-related tweets and analyzed the reach of the 10 most retweeted tweets. Results indicated that of 71 caregivers, 27 were recruited via Twitter. General recruitment tweets were the most shared by users. Tweet reach ranged from 5,273 to 62,144 users. Twitter caregivers were demographically comparable to non-Twitter caregivers but had higher Internet proficiency and fewer children. Overall, using a personal Twitter account can expand the reach of study recruitment. Future research should compare different recruitment strategies and explore characteristics that may challenge the heterogeneity of Twitter samples.
Optimization of the excitation light sheet in selective plane illumination microscopy
Gao, Liang
2015-01-01
Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging of biological specimens with high 3D spatial resolution, good optical sectioning capability and minimal photobleaching and phototoxic effects. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects of SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques, and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Besides the light sheets discussed in this work, the proposed strategy is also applicable to estimating the SPIM performance of other excitation light sheets. PMID:25798312
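For a simple Gaussian light sheet, the trade-off this paper formalizes can be written down directly: sheet thickness scales with the beam waist w0, while the usable field of view is set by twice the Rayleigh range z_R = π w0²/λ, so thinner sheets cover a smaller field. The sketch below uses only these textbook formulas, with a hypothetical excitation wavelength; it is not the paper's selection strategy.

```python
import numpy as np

wavelength_um = 0.488          # excitation wavelength (um), hypothetical choice

def gaussian_sheet(w0_um):
    """Thickness and usable FOV of a Gaussian light sheet (textbook formulas)."""
    z_r = np.pi * w0_um**2 / wavelength_um    # Rayleigh range
    return 2.0 * w0_um, 2.0 * z_r             # thickness, confocal FOV (um)

for w0 in (0.5, 1.0, 2.0, 4.0):
    t, fov = gaussian_sheet(w0)
    print(f"w0={w0:4.1f} um  thickness={t:4.1f} um  FOV={fov:7.1f} um")
```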
Successful aging in community seniors and stroke survivors: current and future strategies.
Kamat, Rujvi; Depp, Colin A; Jeste, Dilip V
2017-06-01
There is growing evidence to suggest that aging is accompanied by enhancement in psychosocial well-being, despite age-related declines in physical and cognitive functioning. A small but growing body of research has reported on positive trajectories of well-being, and its determinants, among community-dwelling seniors as well as in people with specific diseases such as stroke. Current strategies for promoting successful aging include physical, cognitive and social activities, healthy lifestyle, social support, and positive traits such as resilience and optimism. These strategies have typically been employed in samples without serious illnesses, but an emerging body of evidence suggests that they are as relevant in cohorts with neurologic and other diseases. Future strategies will include those that work at the community level and not just at individual level, and will focus on use of technology as well as group interventions to enhance resilience and building age-friendly communities.
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent an epidemic in a population. The host-vector model is modified to include a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy, using the optimal minimum-cost function to reduce the size of the epidemic, is analyzed. A numerical simulation for some specific cases of the vaccination strategy is shown.
Optimal resource allocation strategy for two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu
2018-02-01
We study traffic dynamics on two-layer complex networks and focus on the delivery capacity allocation strategy to enhance the traffic capacity measured by the critical value Rc. Given limited packet-delivering capacity, we propose a delivery capacity allocation strategy which balances the capacities of non-hub nodes and hub nodes to optimize the data flow. At the optimal value of the parameter αc, the maximal network capacity is reached because most nodes are assigned an appropriate delivery capacity by the proposed allocation strategy. Our work will be beneficial to network service providers in designing optimal networked traffic dynamics.
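As a single-layer simplification of the setting studied here (the two-layer case adds inter-layer routing), the critical packet-generation rate under shortest-path routing is commonly estimated as Rc = min_i C_i (N-1)/B_i, where B_i is node betweenness and C_i the delivery capacity of node i. A degree-based allocation C_i ∝ k_i^α can then be scanned over α, in the spirit of the paper's parameter αc. A sketch assuming networkx; the graph, total capacity, and α grid are hypothetical.

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(200, 3, seed=4)
N = G.number_of_nodes()
betw = nx.betweenness_centrality(G, normalized=False)
deg = dict(G.degree())
TOTAL_CAPACITY = 400.0   # total delivery capacity to distribute (hypothetical)

def critical_rate(alpha):
    """Rc = min_i C_i (N-1) / B_i for capacities C_i proportional to k_i^alpha."""
    w = np.array([deg[i]**alpha for i in G])
    cap = TOTAL_CAPACITY * w / w.sum()
    b = np.array([max(betw[i], 1e-9) for i in G])   # guard near-zero betweenness
    return np.min(cap * (N - 1) / b)

for alpha in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"alpha={alpha:3.1f}  Rc={critical_rate(alpha):.3f}")
```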
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. Both strategies yielded very similar solutions; however, the deterministic-model strategy suffered from problems such as lack of convergence and high computational time. This makes the optimization strategy based on the statistical model, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
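The second strategy, optimizing a fitted statistical model under a conversion constraint with sequential quadratic programming, can be sketched with scipy's SLSQP solver. The quadratic response-surface coefficients below are hypothetical stand-ins for what a factorial design would produce, and the two coded factors are illustrative operating variables, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic response surfaces in coded factors x in [-1, 1]^2.
def productivity(x):            # butanol productivity, g/(L*h), to be maximized
    x1, x2 = x
    return 8.0 + 1.5*x1 + 0.9*x2 - 1.2*x1**2 - 0.7*x2**2 + 0.3*x1*x2

def conversion(x):              # substrate conversion (fraction)
    x1, x2 = x
    return 0.90 + 0.04*x1 - 0.05*x2 - 0.02*x1**2

# SQP: maximize productivity subject to conversion >= 0.92 and factor bounds.
res = minimize(lambda x: -productivity(x), x0=[0.0, 0.0], method="SLSQP",
               bounds=[(-1, 1), (-1, 1)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: conversion(x) - 0.92}])
print(res.x, productivity(res.x), conversion(res.x))
```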
Smith, David R.; McRae, Sarah E.; Augspurger, Tom; Ratcliffe, Judith A.; Nichols, Robert B.; Eads, Chris B.; Savidge, Tim; Bogan, Arthur E.
2015-01-01
We used a structured decision-making process to develop conservation strategies to increase persistence of Dwarf Wedgemussel (Alasmidonta heterodon) in North Carolina, USA, while accounting for uncertainty in management effectiveness and considering costs. Alternative conservation strategies were portfolios of management actions that differed by location of management actions on the landscape. Objectives of the conservation strategy were to maximize species persistence, maintain genetic diversity, maximize public support, and minimize management costs. We compared 4 conservation strategies: 1) the ‘status quo’ strategy represented current management, 2) the ‘protect the best’ strategy focused on protecting the best populations in the Tar River basin, 3) the ‘expand the distribution’ strategy focused on management of extant populations and establishment of new populations in the Neuse River basin, and 4) the ‘hybrid’ strategy combined elements of each strategy to balance conservation in the Tar and Neuse River basins. A population model informed requirements for population management, and experts projected performance of alternative strategies over a 20-y period. The optimal strategy depended on the relative value placed on competing objectives, which can vary among stakeholders. The protect the best and hybrid strategies were optimal across a wide range of relative values with 2 exceptions: 1) if minimizing management cost was of overriding concern, then status quo was optimal, or 2) if maximizing population persistence in the Neuse River basin was emphasized, then expand the distribution strategy was optimal. The optimal strategy was robust to uncertainty in management effectiveness. Overall, the structured decision process can help identify the most promising strategies for endangered species conservation that maximize conservation benefit given the constraint of limited funding.
Williams, Perry J.; Kendall, William L.
2017-01-01
Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple-objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application to solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
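The a priori weighted-sum method and the a posteriori inspection of a Pareto set can both be illustrated on a toy two-objective problem. The sketch below uses hypothetical quadratic objectives (not the cackling goose model): it sweeps weights to trace out candidate solutions, then filters out dominated points to recover the Pareto optimal set.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two competing objectives of a scalar decision x (e.g. a population target):
f1 = lambda x: (x - 2.0)**2          # cost to one stakeholder group
f2 = lambda x: (x - 5.0)**2          # cost to the other

# A priori: weighted-sum scalarization, one optimal solution per weight.
candidates = []
for w in np.linspace(0.01, 0.99, 25):
    res = minimize_scalar(lambda x: w*f1(x) + (1 - w)*f2(x),
                          bounds=(0, 10), method="bounded")
    candidates.append((f1(res.x), f2(res.x)))

# A posteriori: keep only non-dominated points (the Pareto optimal set),
# then a decision maker inspects the trade-offs directly.
def non_dominated(points):
    pts = np.array(points)
    keep = [i for i, p in enumerate(pts)
            if not any(np.all(q <= p) and np.any(q < p) for q in pts)]
    return pts[keep]

print(non_dominated(candidates))
```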
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
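For reference, the CTMI of Rosso et al. (1993), as it is commonly written, expresses the growth rate through the three cardinal temperatures and the optimal growth rate μ_opt; these are the four parameters the OED/PE strategies target. A direct transcription of that common form, with hypothetical cardinal values in the test call:

```python
import numpy as np

def ctmi(T, T_min, T_opt, T_max, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993),
    as commonly written; zero outside (T_min, T_max)."""
    T = np.asarray(T, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        num = (T - T_max) * (T - T_min)**2
        den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                                 - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
        mu = mu_opt * num / den
    return np.where((T > T_min) & (T < T_max), mu, 0.0)

# Hypothetical cardinal temperatures; mu equals mu_opt exactly at T = T_opt.
print(ctmi([10, 25, 37, 45], T_min=5.0, T_opt=37.0, T_max=47.0, mu_opt=2.0))
```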
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For recognition principles based on an optimized single center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptively optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the number of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to control the number of optimized centers adaptively while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on data with different distribution characteristics.
Long-Run Savings and Investment Strategy Optimization
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M.
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728
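The paper's central claim, that a constant absolute amount in the risky asset can outperform a constant proportion for a very risk-averse saver, can be probed with a toy simulation. The sketch below uses hypothetical market parameters, annual rebalancing, and no contributions; it compares downside outcomes of the two strategies rather than reproducing the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
YEARS, PATHS, W0 = 30, 20000, 100.0
MU, SIGMA, RF = 0.06, 0.18, 0.02      # hypothetical risky return, vol, risk-free

def simulate(strategy):
    wealth = np.full(PATHS, W0)
    for _ in range(YEARS):
        if strategy == "constant_amount":
            risky = np.minimum(20.0, wealth)   # fixed 20 units in the risky asset
        else:
            risky = 0.2 * wealth               # constant 20% proportion
        safe = wealth - risky
        ret = rng.normal(MU, SIGMA, PATHS)
        wealth = safe * (1 + RF) + risky * (1 + ret)
    return wealth

for s in ("constant_amount", "constant_proportion"):
    w = simulate(s)
    print(f"{s:20s} median={np.median(w):7.1f}  5th pct={np.percentile(w, 5):7.1f}")
```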
Optimal Pain Assessment in Pediatric Rehabilitation: Implementation of a Nursing Guideline.
Kingsnorth, Shauna; Joachimides, Nick; Krog, Kim; Davies, Barbara; Higuchi, Kathryn Smith
2015-12-01
In Ontario, Canada, the Registered Nurses' Association promotes a Best Practice Spotlight Organization initiative to enhance evidence-based practice. Qualifying organizations are required to implement strategies, evaluate outcomes, and sustain practices aligned with nursing clinical practice guidelines. This study reports on the development and evaluation of a multifaceted implementation strategy to support adoption of a nursing clinical practice guideline on the assessment and management of acute pain in a pediatric rehabilitation and complex continuing care hospital. Multiple approaches were employed to influence behavior, attitudes, and awareness around optimal pain practice (e.g., instructional resources, electronic reminders, audits, and feedback). Four measures were introduced to assess pain in communicating and noncommunicating children as part of a campaign to treat pain as the fifth vital sign. A prospective repeated measures design examined survey and audit data to assess practice aligned with the guideline. The Knowledge and Attitudes Survey (KNAS) was adapted to ensure relevance to the local practice setting and was assessed before and after nurses' participation in three education modules. Audit data included client demographics and pain scores assessed annually over a 3-year window. A final sample of 69 nurses (78% response rate) provided pre-/post-survey data. A total of 108 pediatric surgical clients (younger than 19 years) contributed audit data across the three collection cycles. Significant improvements in nurses' knowledge, attitudes, and behaviors related to optimal pain care for children with disabilities were noted following adoption of the pain clinical practice guideline. Targeted guideline implementation strategies are central to supporting optimal pain practice. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving the accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties and loading conditions of the panel, as well as error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
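The separable Monte Carlo estimator mentioned here exploits independence between capacity (e.g. strength) and load samples: instead of pairing them one-to-one, every capacity draw is compared against every load draw, which reduces the variance of the failure-probability estimate for the same simulation budget. A minimal sketch with hypothetical normal distributions, not the dissertation's actual ITPS response models:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
capacity = rng.normal(10.0, 1.0, n)    # e.g. material strength samples
load = rng.normal(7.0, 1.5, n)         # e.g. thermal/mechanical stress samples

# Crude MC: pair sample i with sample i -> n comparisons.
pf_crude = np.mean(capacity < load)

# Separable MC: compare every capacity draw with every load draw -> n^2 comparisons.
pf_separable = np.mean(capacity[:, None] < load[None, :])

print(pf_crude, pf_separable)  # same expectation; separable has lower variance
```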
Gramer, Gwendolyn; Abdoh, Ghassan; Ben-Omran, Tawfeg; Shahbeck, Noora; Ali, Rehab; Mahmoud, Laila; Fang-Hoffmann, Junmin; Hoffmann, Georg F; Al Rifai, Hilal; Okun, Jürgen G
2017-04-01
Newborn screening is a precondition for early diagnosis and successful treatment of remethylation disorders and classical homocystinuria (cystathionine-β-synthase deficiency). Newborn screening for classical homocystinuria using total homocysteine measurement in dried blood spots has been performed very successfully for many years for newborns from Qatar. A new optimized newborn screening strategy for remethylation disorders and homocystinuria was developed and evaluated for newborns from Qatar, using total homocysteine measurement as the first tier and methionine, the methionine-phenylalanine ratio and propionylcarnitine as second-tier markers. The proposed cut-offs were also retrospectively evaluated in newborn screening samples of 12 patients with remethylation disorders and vitamin B12 deficiency from Qatar and Germany. Over a 12-month period, the proposed strategy led to a decrease in the recall rate in homocysteine screening for Qatar from 1.09% to 0.68%, while allowing for additional systematic inclusion of remethylation disorders and vitamin B12 deficiency in the screening panel for Qatar. In the evaluated period the applied strategy would have detected all patients with classical homocystinuria identified by the previous strategy and, in addition, 5 children with maternal nutritional vitamin B12 deficiency and one patient with an isolated remethylation disorder. Additional retrospective evaluation of newborn screening samples of 12 patients from Germany and Qatar with remethylation disorders or vitamin B12 deficiency showed that all of these patients would have been detected by the cut-offs used in the proposed new strategy. In addition, an adapted strategy for Germany, using methionine, the methionine-phenylalanine ratio and propionylcarnitine as the first tier and homocysteine as a second-tier test, was also positively evaluated retrospectively. The proposed strategy for samples from Qatar allows inclusion of remethylation disorders and vitamin B12 deficiency in the screening panel while lowering the recall rate. An adapted second-tier strategy is presented for screening in Germany and will be prospectively evaluated over the next years in a pilot project named "Newborn Screening 2020".
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations on some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
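For context, the baseline global-best PSO update that variants such as PSO-IVL build on is the standard velocity/position rule; the IVL step itself (inspecting relations among a particle's variables and learning from an exemplar variable) would slot in after each position update. The sketch below shows the baseline only, with conventional coefficient choices; it is not the authors' PSO-IVL code.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=3):
    """Standard global-best PSO; an IVL-style repair step would follow line (*)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.72, 1.49, 1.49            # common coefficient choices
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = np.clip(x + v, lo, hi)          # (*)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

print(pso(lambda z: np.sum(z**2), dim=5))   # sphere test function
```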
Optimizing bulk milk dioxin monitoring based on costs and effectiveness.
Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M
2013-07-01
Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food chains. However, the costs and effectiveness of such programs have not been evaluated. In this study, the costs and effectiveness of bulk milk dioxin monitoring in milk trucks were estimated to optimize the sampling and pooling monitoring strategies aimed at detecting at least 1 contaminated dairy farm out of 20,000 at a target dioxin concentration level. Incidents of different proportions, in terms of the number of contaminated farms, and of different concentrations were simulated. A combined testing strategy, consisting of screening and confirmatory methods, was assumed, as well as testing of pooled samples. Two optimization models were built using linear programming. The first model aimed to minimize monitoring costs subject to a minimum required effectiveness of finding an incident, whereas the second model aimed to maximize the effectiveness for a given monitoring budget. Our results show that a high level of effectiveness is possible, but at high costs. Given specific assumptions, monitoring with 95% effectiveness to detect an incident of 1 contaminated farm at a dioxin concentration of 2 pg of toxic equivalents/g of fat [European Commission's (EC) action level] costs €2.6 million per month. At the same level of effectiveness, a 73% cost reduction is possible when aiming to detect an incident where 2 farms are contaminated at a dioxin concentration of 3 pg of toxic equivalents/g of fat (EC maximum level). With a fixed budget of €40,000 per month, the probability of detecting an incident with a single contaminated farm at a dioxin concentration equal to the EC action level is 4.4%. This probability almost doubled (8.0%) when aiming to detect the same incident but with a dioxin concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which collected truck samples are mixed into a pooled sample for testing (which determines the concentration that can be detected), but also on the number of truck samples collected. In conclusion, the optimal cost-effective monitoring strategy depends on the number of contaminated farms and the concentration aimed at in detection. The models and study results offer quantitative support to risk managers of food industries and food safety authorities. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain values. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives are considered: maximizing total pumping from beneficial wells and minimizing total pumping from barrier wells used for hydraulic control of saltwater intrusion. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. Reliability is incorporated as the percentage of surrogate models in the ensemble that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that as the reliability level is reduced, constraint violations increase. Thus ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J
2015-01-27
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take a sample from the bioreactor to be analyzed off-line in the laboratory. The disadvantage of this approach is the long time required to recover information from the process, hindering the development of control systems. New strategies to monitor lipase/esterase activity are necessary. In this context, as a first approach, we propose a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05-1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, a kinetic reaction time of 5 min and a relative standard deviation of 8.75%. The results are promising for monitoring lipase/esterase activity in real time, where optimization and control strategies can be designed.
NASA Astrophysics Data System (ADS)
Remund, Stefan M.; Jaeggi, Beat; Kramer, Thorsten; Neuenschwander, Beat
2017-03-01
The surface roughness and waviness that result from processing with ultra-short pulsed laser radiation depend on the laser parameters as well as on the machining strategy and the scanning system. They also depend, however, on the material and its initial surface quality and finish. Improving the surface finish takes effort and incurs additional cost, so for industrial applications it is important to minimize workpiece preparation for laser micro-machining in order to optimize quality and reduce costs. The effects of the ablation process and the influences of the machining strategy and scanning system on surface roughness and waviness can be separated because they act independently. Using optimal laser parameters on an initially perfect surface, the ablation process mainly raises the roughness to a certain value for most metallic materials, whereas imperfections in the scanning system, which cause slight variations in the scanning speed, increase the waviness of the sample surface. For a basic understanding of the influence of grinding marks, the sample surfaces were initially furnished with regular grooves of different depths and spatial frequencies to obtain homogeneous, well-defined starting surfaces. On these surfaces the effects of different beam waists and machining strategies are investigated, and the results are compared with a simulation of the process. Furthermore, the behavior of common surface finishes used in industrial laser micro-machining applications is studied, and its relation to the resulting surface roughness and waviness is presented.
Sauer, Charles W; Boutin, Mallory A; Kim, Jae H
2017-05-01
Very-low-birth-weight infants continue to face significant difficulties with postnatal growth. Human milk is the optimal form of nutrition for infants but can vary in nutrient content. This study aimed to perform macronutrient analysis of expressed human milk from mothers whose infants were hospitalized in the neonatal intensive care unit. Up to five human milk samples per participant were analyzed for protein, carbohydrate, and fat content using reference chemical analyses (Kjeldahl for protein, high-pressure liquid chromatography for carbohydrates, and Mojonnier for fat), and calorie content was calculated. A total of 64 samples from 24 participants were analyzed. Wide variability was found in calorie, protein, carbohydrate, and fat composition. The authors found an average of 17.9 kcal/ounce, with only 34% of samples falling within 10% of the expected caloric density. The assumption that human milk contains 20 kcal/ounce is not supported by this study, which argues for an individualized nutrition strategy as a crucial aspect of optimal nutrition.
Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A
2016-10-26
Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
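A schematic sketch of the Val-MI ordering under stated assumptions: bootstrap resampling stands in for the internal validation loop, scikit-learn's deterministic SimpleImputer stands in for a full multiple-imputation procedure (so the inner imputation loop is illustrative only), and logistic-regression AUC is the performance measure; names such as `val_mi_auc` are hypothetical.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def val_mi_auc(X, y, n_boot=20, n_imp=2, seed=0):
    """Val-MI sketch: draw a bootstrap training set, impute the training
    and out-of-bag test parts separately, and pool AUC over imputations."""
    rng = np.random.default_rng(seed)
    n = len(y)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # bootstrap draw
        oob = np.setdiff1d(np.arange(n), idx)          # out-of-bag test part
        if len(np.unique(y[idx])) < 2 or len(np.unique(y[oob])) < 2:
            continue
        for _ in range(n_imp):
            # Separate imputation prevents information leaking from test
            # to train; a stochastic MI method would vary across this loop.
            Xtr = SimpleImputer(strategy="mean").fit_transform(X[idx])
            Xte = SimpleImputer(strategy="mean").fit_transform(X[oob])
            model = LogisticRegression(max_iter=1000).fit(Xtr, y[idx])
            aucs.append(roc_auc_score(y[oob], model.predict_proba(Xte)[:, 1]))
    return float(np.mean(aucs))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan                  # inject missingness
print(round(val_mi_auc(X, y), 3))
```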
Data analytics and optimization of an ice-based energy storage system for commercial buildings
Luo, Na; Hong, Tianzhen; Li, Hui; ...
2017-07-25
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
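The Sequential Quadratic Programming step can be illustrated with SciPy's SLSQP on a toy time-of-use scheduling problem; all numbers (load profile, tariff, tank size, charge limit) are assumptions for the sketch, not values from the study.

```python
import numpy as np
from scipy.optimize import minimize

hours = 24
load = np.array([2] * 8 + [8] * 10 + [3] * 6, dtype=float)  # cooling load, kW
price = np.array([0.08] * 8 + [0.25] * 10 + [0.12] * 6)     # TOU rate, $/kWh
capacity, rate_max = 40.0, 10.0          # ice tank size and charge/discharge limit

def cost(u):
    """u > 0 charges the tank, u < 0 discharges to serve the load;
    the chiller covers the load plus any charging."""
    return float(price @ np.maximum(load + u, 0.0))

cons = [
    # state of charge stays within [0, capacity] at every hour (starts empty)
    {"type": "ineq", "fun": lambda u: np.cumsum(u)},
    {"type": "ineq", "fun": lambda u: capacity - np.cumsum(u)},
    # discharging cannot exceed the cooling load in any hour
    {"type": "ineq", "fun": lambda u: load + u},
]
res = minimize(cost, np.zeros(hours), method="SLSQP",
               bounds=[(-rate_max, rate_max)] * hours, constraints=cons)
print(res.fun, cost(np.zeros(hours)))  # optimized cost vs. no-storage cost
```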
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
Optimal strategy for controlling the spread of Plasmodium Knowlesi malaria: Treatment and culling
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Hasan, Yahya Abu; Abdullah, Farah Aini
2015-05-01
Plasmodium knowlesi malaria is a parasitic mosquito-borne disease caused by the eukaryotic protist Plasmodium knowlesi, transmitted by the mosquito Anopheles leucosphyrus to humans and macaques. We developed and analyzed a deterministic mathematical model for the transmission of Plasmodium knowlesi malaria in humans and macaques. Optimal control theory is applied to investigate optimal strategies for controlling the spread of the disease using treatment and culling as control strategies. The conditions for optimal control of Plasmodium knowlesi malaria are derived using Pontryagin's Maximum Principle. Finally, numerical simulations suggest that a combination of the control strategies is the best way to control the disease in any community.
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for the subsystems are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power split ratio between the main and auxiliary motors. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant reduction in energy loss during dual-motor coupling-propulsion system (DMCPS) operation is realized without increasing the frequency of mode switching.
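A toy illustration of the dynamic-programming idea, simplified to a binary mode choice (single-motor vs. coupled) with a switching penalty standing in for the upshift/downshift thresholds and the mode-switch frequency concern; the loss curves and drive cycle are assumptions, not the paper's vehicle model.

```python
import numpy as np

demand = np.array([5., 12., 30., 28., 9., 4.])  # kW, toy drive-cycle power demand
modes = ["main", "dual"]                        # single-motor vs. coupled operation

def loss(mode, p):
    # assumed quadratic loss maps; the dual mode wastes less at high power
    return 0.05 * p**2 if mode == "main" else 2.0 + 0.02 * p**2

switch_penalty = 1.5  # energy-equivalent penalty discouraging frequent switches

# Backward dynamic programming over (time step, current mode)
V = {m: 0.0 for m in modes}
policy = []
for p in demand[::-1]:
    V_new, step = {}, {}
    for m in modes:
        best = min(
            (loss(n, p) + (switch_penalty if n != m else 0.0) + V[n], n)
            for n in modes
        )
        V_new[m], step[m] = best
    V = V_new
    policy.append(step)
policy = policy[::-1]

# Roll the optimal policy forward from an initial mode
mode = "main"
for t, p in enumerate(demand):
    mode = policy[t][mode]
    print(t, mode)
```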
Chou, Ann F; Yano, Elizabeth M; McCoy, Kimberly D; Willis, Deanna R; Doebbeling, Bradley N
2008-01-01
To address increases in the incidence of infection with antimicrobial-resistant pathogens, the National Foundation for Infectious Diseases and Centers for Disease Control and Prevention proposed two sets of strategies to (a) optimize antibiotic use and (b) prevent the spread of antimicrobial resistance and control transmission. However, little is known about the implementation of these strategies. Our objective is to explore the organizational structural and process factors that facilitate the implementation of National Foundation for Infectious Diseases/Centers for Disease Control and Prevention strategies in U.S. hospitals. We surveyed 448 infection control professionals from a national sample of hospitals. Clinically anchored in the Donabedian model that defines quality in terms of structural and process factors, with the structural domain further informed by a contingency approach, we modeled the degree to which National Foundation for Infectious Diseases and Centers for Disease Control and Prevention strategies were implemented as a function of formalization and standardization of protocols, centralization of decision-making hierarchy, information technology capabilities, culture, communication mechanisms, and interdepartmental coordination, controlling for hospital characteristics. Formalization, standardization, centralization, institutional culture, provider-management communication, and information technology use were associated with optimal antibiotic use and enhanced implementation of strategies that prevent and control antimicrobial resistance spread (all p < .001). However, interdepartmental coordination for patient care was inversely related to optimal antibiotic use, in contrast to antimicrobial resistance spread prevention and control (p < .0001). Formalization and standardization may eliminate staff role conflict, whereas centralized authority may minimize ambiguity. Culture and communication likely promote internal trust, whereas information technology use helps integrate and support these organizational processes. These findings suggest concrete strategies for evaluating current capabilities to implement effective practices and foster and sustain a culture of patient safety.
Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis.
Cohnstaedt, Lee W; Rochon, Kateryn; Duehl, Adrian J; Anderson, John F; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C; Obenauer, Peter J; Campbell, James F; Lysyk, Tim J; Allan, Sandra A
2012-03-01
Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium "Advancements in arthropod monitoring technology, techniques, and analysis" presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones, and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae), respectively. After sampling, extrapolating real-world population numbers from trap capture data is possible with the appropriate analysis techniques. Examples of such extrapolation and action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles.
Woolfenden, Elizabeth
2010-04-16
Sorbent tubes/traps are widely used in combination with gas chromatographic (GC) analytical methods to monitor the vapour-phase fraction of organic compounds in air. Applications range from atmospheric research and ambient air monitoring (indoor and outdoor) to occupational hygiene (personal exposure assessment) and measuring chemical emission levels. Part 1 of this paper reviewed the main sorbent-based air sampling strategies including active (pumped) tube monitoring, diffusive (passive) sampling onto sorbent tubes/cartridges plus sorbent trapping/focusing of whole air samples that are either collected in containers (such as canisters or bags) or monitored online. Options for subsequent extraction and transfer to GC(MS) analysis were also summarised and the trend to thermal desorption (TD)-based methods and away from solvent extraction was explained. As a result of this trend, demand for TD-compatible sorbents (alternatives to traditional charcoal) is growing. Part 2 of this paper therefore continues with a summary of TD-compatible sorbents, their respective advantages and limitations and considerations for sorbent selection. Other analytical considerations for optimizing sorbent-based air monitoring methods are also discussed together with recent technical developments and sampling accessories which have extended the application range of sorbent trapping technology generally.
Mohammadpour, A-H; Nazemian, F; Abtahi, B; Naghibi, M; Gholami, K; Rezaee, S; Nazari, M-R A; Rajabi, O
2008-12-01
The area under the concentration curve (AUC) of mycophenolic acid (MPA) can help optimize therapeutic drug monitoring during the early post-renal-transplant period. The aim of this study was to develop a limited sampling strategy to estimate an abbreviated MPA AUC within the first month after renal transplantation. We selected 19 patients in the early posttransplant period with normal renal graft function (glomerular filtration rate > 70 mL/min). Plasma MPA concentrations were measured using reverse-phase high-performance liquid chromatography. MPA AUC(0-12h) was calculated using the linear trapezoidal rule. Multiple stepwise regression analysis was used to determine the minimal and most convenient time points of MPA levels from which model equations best fitted to MPA AUC(0-12h) could be derived. The regression equation for AUC estimation that gave the best performance was AUC = 14.46 C(10) + 15.547 (r^2 = 0.882). The method was validated using the jackknife procedure. The mean prediction error of this model was not different from zero (P > .05), although the root mean square prediction error was high (8.06). In conclusion, this limited sampling strategy provides an effective approach for therapeutic drug monitoring during the early posttransplant period.
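The reported limited-sampling equation can be applied directly; a minimal helper, assuming C(10) denotes the plasma MPA concentration 10 h post-dose in mg/L and the AUC is in mg·h/L (units are not stated in the abstract).

```python
def mpa_auc_estimate(c10_mg_per_l):
    """Abbreviated MPA AUC(0-12h) from a single 10-h concentration,
    using the regression reported in the abstract (r^2 = 0.882)."""
    return 14.46 * c10_mg_per_l + 15.547

print(mpa_auc_estimate(2.5))  # e.g. C(10) = 2.5 mg/L -> ~51.7 mg*h/L
```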
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
Quantifying global fossil-fuel CO2 emissions: from OCO-2 to optimal observing designs
NASA Astrophysics Data System (ADS)
Ye, X.; Lauvaux, T.; Kort, E. A.; Oda, T.; Feng, S.; Lin, J. C.; Yang, E. G.; Wu, D.; Kuze, A.; Suto, H.; Eldering, A.
2017-12-01
Cities house more than half of the world's population and are responsible for more than 70% of the world's anthropogenic CO2 emissions. Quantifying emissions from major cities, which constitute fewer than a hundred intense emitting spots across the globe, should therefore allow us to monitor changes in global fossil-fuel CO2 emissions in an independent, objective way. Satellite platforms provide favorable temporal and spatial coverage for collecting urban CO2 data to quantify the anthropogenic contributions to the global carbon budget. We present here optimal observation designs for NASA's OCO-2 and Japan's GOSAT missions, based on real-data (i.e., OCO-2) experiments and Observing System Simulation Experiments (OSSEs), to address different error components in the urban CO2 budget calculation. We identify the major sources of emission uncertainty for various types of cities with different ecosystems and geographical features, such as urban plumes over flat terrain, accumulated enhancements within basins, and complex weather regimes in coastal areas. Atmospheric transport errors were characterized under various meteorological conditions using the Weather Research and Forecasting (WRF) model at 1-km spatial resolution, coupled to the Open-source Data Inventory for Anthropogenic CO2 (ODIAC) emissions. We propose and discuss optimized urban sampling strategies that address difficulties arising from seasonality in cloud cover and emissions and from vegetation density in and around cities, and that address the daytime sampling bias using prescribed diurnal cycles. These factors are combined in pseudo-data experiments in which we evaluate the relative impact of uncertainties on inverse estimates of CO2 emissions for cities across latitudinal and climatological zones. We propose several sampling strategies to minimize the uncertainties in target mode for tracking urban fossil-fuel CO2 emissions over the globe for future satellite missions, such as OCO-3 and future versions of GOSAT.
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
NASA Astrophysics Data System (ADS)
Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.
2017-04-01
Providing adequate food and water for a growing global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security-support Analysis Data at 30 m) is a NASA-funded program producing global cropland maps from field measurements and remote sensing images. The program studies 8 major crop types and includes information on cropland area/extent, whether crops are irrigated or rainfed, and cropping intensity. Using results from the US and the extensive reference data available from the USDA Cropland Data Layer (CDL), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will vary the sampling unit, the sampling strategy, and the sample number. The results will allow us to recommend assessment approaches for handling different cropping scenarios.
Fernández, Elena; Vidal, Lorena; Martín-Yerga, Daniel; Blanco, María del Carmen; Canals, Antonio; Costa-García, Agustín
2015-04-01
A novel approach is presented whereby gold nanostructured screen-printed carbon electrodes (SPCnAuEs) are combined with in-situ ionic liquid formation dispersive liquid-liquid microextraction (in-situ IL-DLLME) and microvolume back-extraction for the determination of mercury in water samples. In-situ IL-DLLME is based on a simple metathesis reaction between a water-miscible IL and a salt, forming a water-immiscible IL in the sample solution. The mercury complex with ammonium pyrrolidinedithiocarbamate is extracted from the sample solution into the water-immiscible IL formed in situ. An ultrasound-assisted procedure is then employed to back-extract the mercury into 10 µL of a 4 M HCl aqueous solution, which is finally analyzed using SPCnAuEs. The sample preparation methodology was optimized using a multivariate optimization strategy. Under optimized conditions, a linear range between 0.5 and 10 µg L(-1) was obtained with a correlation coefficient of 0.997 for six calibration points. The limit of detection was 0.2 µg L(-1), which is lower than the threshold values established by the Environmental Protection Agency and the European Union (2 µg L(-1) and 1 µg L(-1), respectively). The repeatability of the proposed method was evaluated at two spiking levels (3 and 10 µg L(-1)), and a coefficient of variation of 13% was obtained in both cases. The performance of the proposed methodology was evaluated in real-world water samples including tap water, bottled water, river water and industrial wastewater. Relative recoveries between 95% and 108% were obtained.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models are presented for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes; both account for failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure-component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated with a case study of coupled electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M
2004-01-01
The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that was able to accurately predict experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine is carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (using a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with the single flowrate. Again, the dual-flowrate strategy was found to be superior.
Optimal teaching strategy in periodic impulsive knowledge dissemination system.
Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo
2017-01-01
Accurately describing the knowledge dissemination process is significant for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to reproduce the knowledge dissemination process. We put forward learning effectiveness, the outcome of a trade-off between the benefits and costs of knowledge dissemination, as the objective function. We then investigate the optimal teaching strategy that maximizes learning effectiveness, so as to obtain the best effect of knowledge dissemination under the influence of teaching activities. We solve this dynamic optimization problem with optimal control theory and obtain the optimized system. Finally, we solve this system numerically in several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely to optimization problems in personal education and is beneficial for enhancing the effect of knowledge dissemination.
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented can be used for decision making on the basis of cost effectiveness when strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of changes in pollutant concentration. The adequacy and practicality of such an approach is studied in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed that can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates the health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and changes in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied to estimate the health and economic benefits associated with various management strategies. Health benefits have been valued for the impact indicators of premature mortality, hospital admissions, and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations of multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, including revised emission standards and electric, CNG, LPG, and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx, and particulate matter are estimated for the different control scenarios. Finally, the optimization model is applied to determine the optimized percentage implementation of specific control strategies that maximizes social benefits when the strategies are applied simultaneously.
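The final optimization step, choosing percentage implementations of several strategies to maximize social benefit under a budget, can be sketched as a linear program; the benefit and cost figures below are invented placeholders, and the real model's dispersion and BenMAP stages are collapsed into fixed coefficients.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed per-strategy data: benefit and cost at 100% implementation
strategies = ["revised standards", "CNG buses", "electric fleet", "LPG retrofit"]
benefit = np.array([120., 80., 150., 60.])  # monetized health benefit (toy units)
cost    = np.array([100., 40., 180., 30.])  # implementation cost (same units)
budget  = 200.0

# maximize benefit @ x  ==  minimize -benefit @ x, with 0 <= x_i <= 1
# (x_i is the fraction of strategy i that is implemented)
res = linprog(-benefit, A_ub=[cost], b_ub=[budget], bounds=[(0, 1)] * 4)
for name, frac in zip(strategies, res.x):
    print(f"{name}: {frac:.0%}")
print("total benefit:", -res.fun)
```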
NASA Astrophysics Data System (ADS)
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) under market-based energy pricing. A daily ED model is formulated, and a hybrid mutation strategy comprising two mutation operators, chaotic mutation and Gaussian mutation, is adopted in HGSAPSO-M. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
Pittig, Andre; van den Berg, Linda; Vervliet, Bram
2016-01-01
Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. First, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are presented to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with a single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy is demonstrated by comparison with a single equivalent strategy.
Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation
NASA Astrophysics Data System (ADS)
Krastev, Vladimir
2011-12-01
We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill of a company in the long run under age segmentation of consumers. Reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.
NASA Astrophysics Data System (ADS)
Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.
2018-03-01
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this either by parallelization on GPU hardware or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied on GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments have been performed on 3D lung CT data of 19 patients, comparing NiftyRegSGD (with and without the random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5 s, 4.4 s and 2.8 s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
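A sketch of the sampling pattern behind the random chunk sampler, under the assumption that its essence is drawing contiguous index blocks (which coalesce GPU memory reads) instead of scattered voxels; this is CPU-side index generation only, not the NiftyRegSGD implementation.

```python
import numpy as np

def random_chunk_indices(n_voxels, n_samples, chunk, rng):
    """Pick contiguous chunks of voxel indices instead of isolated voxels,
    so memory reads coalesce; the total sampled count is ~ n_samples."""
    n_chunks = max(1, n_samples // chunk)
    starts = rng.integers(0, n_voxels - chunk, size=n_chunks)
    return (starts[:, None] + np.arange(chunk)).ravel()

rng = np.random.default_rng(0)
idx = random_chunk_indices(n_voxels=1_000_000, n_samples=5000, chunk=128, rng=rng)
print(idx.shape, idx[:5])
```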
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap in sub-band spectral estimation, which can greatly decrease the computational complexity of existing spectral estimation algorithms such as nonlinear least-squares spectral analysis and non-quadratic regularized sparse representation. First, our study shows that the nominal ability of a high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in the convolution operations. Next, an extrapolation-based filtering strategy is proposed that produces a series of estimates as substitutes for the zeros and thereby recovers the suppression ability. A steady-state Kalman predictor is applied to perform linearly optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy.
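A simplified stand-in for the extrapolation idea: the paper's steady-state Kalman predictor is replaced here by a least-squares AR(1) one-step predictor, which suffices to show how predicted samples replace the zeros at both ends of a finite sequence before filtering; the helper name and signal are illustrative.

```python
import numpy as np

def extrapolate_ar1(x, n_pad):
    """Extend a finite sequence with linear predictions instead of zeros.
    For an AR(1) signal, repeated one-step prediction is x[k+1] = a * x[k]."""
    a = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # least-squares AR(1) fit
    tail = x[-1] * a ** np.arange(1, n_pad + 1)          # forward extrapolation
    head = x[0] * a ** np.arange(n_pad, 0, -1)           # backward extrapolation
    return np.concatenate([head, x, tail])

x = np.sin(0.2 * np.arange(64)) + 0.05 * np.random.default_rng(0).normal(size=64)
padded = extrapolate_ar1(x, n_pad=16)   # feed `padded` to the analysis filter
print(len(x), len(padded))
```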
From Macrocrystals to Microcrystals: A Strategy for Membrane Protein Serial Crystallography.
Dods, Robert; Båth, Petra; Arnlund, David; Beyerlein, Kenneth R; Nelson, Garrett; Liang, Mengling; Harimoorthy, Rajiv; Berntsen, Peter; Malmerberg, Erik; Johansson, Linda; Andersson, Rebecka; Bosman, Robert; Carbajo, Sergio; Claesson, Elin; Conrad, Chelsie E; Dahl, Peter; Hammarin, Greger; Hunter, Mark S; Li, Chufeng; Lisova, Stella; Milathianaki, Despina; Robinson, Joseph; Safari, Cecilia; Sharma, Amit; Williams, Garth; Wickstrand, Cecilia; Yefanov, Oleksandr; Davidsson, Jan; DePonte, Daniel P; Barty, Anton; Brändén, Gisela; Neutze, Richard
2017-09-05
Serial protein crystallography was developed at X-ray free-electron lasers (XFELs) and is now also being applied at storage-ring facilities. Robust strategies for the growth and optimization of microcrystals are needed to advance the field. Here we illustrate a generic strategy for recovering high-density homogeneous samples of microcrystals starting from conditions known to yield large (macro) crystals of the photosynthetic reaction center of Blastochloris viridis (RCvir). We first crushed these crystals prior to multiple rounds of microseeding. Each cycle of microseeding facilitated improvements in the RCvir serial femtosecond crystallography (SFX) structure from 3.3-Å to 2.4-Å resolution. This approach may allow known crystallization conditions for other proteins to be adapted to exploit novel scientific opportunities created by serial crystallography.
Fully automatic characterization and data collection from crystals of biological macromolecules.
Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W
2015-08-01
Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal, and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
Memarian, Elham; Hosseiny Davarani, Saied Saeed; Nojavan, Saeed; Movahed, Siyavash Kazemi
2016-09-07
In this work, a new solid-phase microextraction fiber was prepared based on nitrogen-doped graphene (N-doped G), and a new strategy was proposed to solve the problems encountered in the direct coating of N-doped G. For this purpose, graphene oxide (GO) was first coated onto a Pt wire by electrophoretic deposition; chemical reduction of the coated GO to N-doped G was then accomplished with hydrazine and NH3. The prepared fiber showed good mechanical and thermal stability. The fiber was used in two different modes: conventional headspace solid-phase microextraction and cold-fiber headspace solid-phase microextraction (CF-HS-SPME). Both modes were optimized and applied to the extraction of benzene and xylenes from different aqueous samples. All effective parameters, including extraction time, salt content, stirring rate, and desorption time, were optimized. The optimized CF-HS-SPME method combined with GC-FID showed good limits of detection (LODs) (0.3-2.3 μg/L), limits of quantification (LOQs) (1.0-7.0 μg/L), and linear ranges (1.0-5000 μg/L). The developed method was applied to the analysis of benzene and xylenes in rainwater and several wastewater samples.
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
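The multiple replications procedure can be illustrated on a toy stochastic program (a newsvendor stand-in, not the DG placement model): each replication re-solves the sample average approximation (SAA) on fresh scenarios, and the optimality-gap estimates of a fixed candidate solution yield a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_solve(scenarios):
    """Toy SAA: choose x on a grid minimizing average cost over scenarios
    (overage cost 1/unit, underage cost 3/unit; a newsvendor stand-in)."""
    grid = np.linspace(0, 20, 201)
    cost = np.mean(np.maximum(grid[:, None] - scenarios, 0)
                   + 3.0 * np.maximum(scenarios - grid[:, None], 0), axis=1)
    i = np.argmin(cost)
    return grid[i], cost[i]

# Multiple replications: re-solve SAA on independent samples and build a
# confidence interval on the optimality-gap estimate of a candidate solution.
candidate = 12.0
gaps = []
for _ in range(20):
    demand = rng.normal(10, 3, size=500)
    x_star, v_star = saa_solve(demand)
    cand_cost = np.mean(np.maximum(candidate - demand, 0)
                        + 3.0 * np.maximum(demand - candidate, 0))
    gaps.append(cand_cost - v_star)    # nonnegative by construction
gaps = np.array(gaps)
half = 1.96 * gaps.std(ddof=1) / np.sqrt(len(gaps))
print(f"gap estimate: {gaps.mean():.3f} +/- {half:.3f}")
```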
Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D
2013-03-01
For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, numerous strategies are employed to produce a confident list of peptide identifications from the different search-algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of the number of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
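A minimal sketch of the STEPS idea, assuming toy peptide-spectrum matches (PSMs) carrying a score, a mass error, and a decoy flag; the grid search keeps the filter-parameter combination that maximizes target identifications at a chosen decoy-estimated FDR. Field names and the function are hypothetical, not the published implementation.

```python
import itertools

def steps_filter_search(psms, score_grid, ppm_grid, target_fdr=0.01):
    """Grid-search filtering criteria: keep the parameter set that maximizes
    confident identifications at or below the target decoy-estimated FDR."""
    best = (0, None)
    for score_cut, ppm_cut in itertools.product(score_grid, ppm_grid):
        kept = [p for p in psms
                if p["score"] >= score_cut and abs(p["ppm"]) <= ppm_cut]
        decoys = sum(p["decoy"] for p in kept)
        targets = len(kept) - decoys
        if targets and decoys / targets <= target_fdr:
            best = max(best, (targets, (score_cut, ppm_cut)))
    return best  # (confident identifications, winning parameter set)

# toy PSM list; real inputs would come from a database search engine
psms = [{"score": s, "ppm": m, "decoy": d}
        for s, m, d in [(3.2, 1.0, 0), (2.1, 4.0, 0), (1.2, 9.0, 1), (2.8, 2.5, 0)]]
print(steps_filter_search(psms, score_grid=[1, 2, 3], ppm_grid=[2, 5, 10]))
```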
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy for measuring the bulge parameters of typical disk galaxies, as well as prescriptions for estimating realistic uncertainties in them, which will benefit subsequent decomposition of a larger galaxy sample.
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries, in which the discharge and charge rate of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized to maximize the efficiency of the balancing system. Our approach draws on optimization theory: we develop a balancing strategy for optimal control of the discharge rate of the battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization features, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an advantage absent from conventional approaches. The algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big-data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios, when implemented with traditional sample estimates, converge to the worst-performing portfolio as the portfolio size becomes large. The crux of the problem is the estimation error accumulated over the huge dimension of the stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio is a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach.
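A schematic reading of the LPO construction under stated assumptions: an ℓ1-minimal weight vector approximately satisfying a first-order mean-variance condition is found by linear programming with the usual w = u - v splitting. This is a sketch on toy return data, not a faithful reproduction of the article's estimator.

```python
import numpy as np
from scipy.optimize import linprog

def lpo_weights(Sigma_hat, b, lam):
    """Minimize ||w||_1 subject to ||Sigma_hat @ w - b||_inf <= lam,
    cast as an LP with w = u - v, u, v >= 0."""
    p = len(b)
    c = np.ones(2 * p)                     # sum(u) + sum(v) = ||w||_1
    A = np.vstack([np.hstack([Sigma_hat, -Sigma_hat]),
                   np.hstack([-Sigma_hat, Sigma_hat])])
    ub = np.concatenate([b + lam, lam - b])
    res = linprog(c, A_ub=A, b_ub=ub, bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v

rng = np.random.default_rng(0)
p, n = 50, 200
R = rng.normal(0.001, 0.02, size=(n, p))   # toy daily returns
Sigma_hat, mu_hat = np.cov(R.T), R.mean(axis=0)
w = lpo_weights(Sigma_hat, mu_hat, lam=5e-4)
print((np.abs(w) > 1e-8).sum(), "stocks selected out of", p)  # sparse selection
```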
Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith
2014-01-01
We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present several strategies to overcome the photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures, and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure of the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
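A random search over mirror shapes against a photon-limited merit signal can be sketched as follows; the actuator count, step schedule, and the placeholder merit function are assumptions, with the real merit factor being the measured two-photon fluorescence or second harmonic signal.

```python
import numpy as np

# Illustrative random-search optimization of deformable-mirror actuator
# voltages, mimicking the strategy above. measure_fluorescence() is a toy
# stand-in for the instrument readout.

rng = np.random.default_rng(1)
N_ACT = 37                                   # actuator count, assumed

def measure_fluorescence(voltages):
    """Placeholder merit function: peaks at an unknown 'ideal' mirror shape."""
    ideal = np.sin(np.arange(N_ACT))
    return -np.sum((voltages - ideal) ** 2) + rng.normal(0, 0.01)

best_v = np.zeros(N_ACT)
best_m = measure_fluorescence(best_v)
step = 0.5
for _ in range(300):                         # few iterations limits photobleaching
    trial = best_v + rng.normal(0, step, N_ACT)
    m = measure_fluorescence(trial)
    if m > best_m:                           # keep a perturbation only if it helps
        best_v, best_m = trial, m
    step *= 0.995                            # slowly shrink the search radius
# best_v would be stored in a look-up table for later imaging sessions
```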
Optimal charge control strategies for stationary photovoltaic battery systems
NASA Astrophysics Data System (ADS)
Li, Jiahao; Danzer, Michael A.
2014-07-01
Battery systems coupled to photovoltaic (PV) modules fulfill one major function: they locally decouple PV generation from the consumption of electrical power, which leads to two major effects. First, they reduce the grid load, especially at peak times, and thereby reduce the need for network expansion. Second, they increase self-consumption in households and thereby help reduce energy expenses. For the management of PV batteries, charge control strategies need to be developed that meet the goals of both the distribution system operators and the local power producers. In this work, optimal control strategies for various optimization goals are developed on the basis of predicted household loads and PV generation profiles, using the method of dynamic programming. The resulting charge curves are compared and the essential differences discussed. Finally, a multi-objective optimization shows that charge control strategies can be derived that take all optimization goals into account.
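A compact sketch of predictive charge scheduling by dynamic programming, in the spirit of the study above; the forecast profiles, battery capacity, and the peak-feed-in penalty are invented for illustration.

```python
import numpy as np

# Backward dynamic program over a state-of-charge grid. The stage cost charges
# grid imports linearly and penalizes feed-in peaks quadratically; both weights
# are assumptions, as are the forecasts.

pv   = np.array([0.0, 0.5, 2.0, 3.0, 2.5, 1.0, 0.0, 0.0])  # kW forecast
load = np.array([0.5, 0.5, 0.8, 1.0, 1.0, 1.5, 2.0, 1.0])  # kW forecast
CAP, DT = 4.0, 1.0                        # battery capacity [kWh], step [h]
soc_grid = np.linspace(0, CAP, 41)

INF = 1e9
cost_to_go = np.zeros(len(soc_grid))      # no terminal cost
policy = []
for t in reversed(range(len(pv))):
    new_ctg = np.full(len(soc_grid), INF)
    act = np.zeros(len(soc_grid))
    for i, soc in enumerate(soc_grid):
        for p_batt in np.linspace(-2.0, 2.0, 17):        # charge(+)/discharge(-) kW
            soc_next = soc + p_batt * DT
            if not (0 <= soc_next <= CAP):
                continue
            grid = load[t] - pv[t] + p_batt              # >0: import, <0: feed-in
            step_cost = max(grid, 0) + 0.5 * max(-grid, 0) ** 2
            j = np.argmin(np.abs(soc_grid - soc_next))   # nearest-grid lookup
            if step_cost + cost_to_go[j] < new_ctg[i]:
                new_ctg[i], act[i] = step_cost + cost_to_go[j], p_batt
    cost_to_go = new_ctg
    policy.append(act)
policy.reverse()                           # policy[t][i] = optimal battery power
```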
Liu, Xiaojuan; Xu, Ningning; Gai, Panpan; Li, Feng
2018-08-01
Since melamine is a serious hazard to human health, the development of new methods for its highly sensitive detection is highly desirable. Herein, a novel fluorescent biosensing strategy was designed for sensitive and selective melamine assay, based on the recognition ability of an abasic (AP) site in a triplex towards melamine and signal amplification by Mg2+-dependent DNAzyme. In this strategy, the melamine-induced formation of triplex DNA was employed to trigger the strand displacement reaction (SDR). The SDR process converted specific target recognition into the release and activation of the Mg2+-dependent DNAzyme, which could catalyze the cleavage of a fluorophore/quencher-labeled DNA substrate (FQ), resulting in a significantly increased fluorescence signal. Under the optimal conditions, the fluorescence signal varies linearly with the logarithm of the melamine concentration over a wide range of 0.005-50 μM. The detection limit was estimated to be 0.9 nM (0.1 ppb), which is sufficiently sensitive for practical application. Furthermore, this strategy exhibits high selectivity against other potential interfering substances, and its practical application to milk samples reveals that it works well for melamine assay in real samples. Therefore, this strategy presents a new method for sensitive melamine assay and holds great promise for sensing applications in the environment and food safety fields. Copyright © 2018 Elsevier B.V. All rights reserved.
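The reported calibration is log-linear, so the fitting and limit-of-detection arithmetic look roughly as follows; the readings and blank statistics below are invented solely to show the calculation, not taken from the paper.

```python
import numpy as np

# Worked illustration of a log-linear calibration (signal vs log10 of
# concentration) and a 3-sigma limit-of-detection estimate.

conc = np.array([0.005, 0.05, 0.5, 5.0, 50.0])           # melamine [uM]
signal = np.array([110.0, 180.0, 255.0, 330.0, 400.0])   # arbitrary units

slope, intercept = np.polyfit(np.log10(conc), signal, 1)
r = np.corrcoef(np.log10(conc), signal)[0, 1]
print(f"signal = {slope:.1f} * log10(C) + {intercept:.1f}   (r = {r:.4f})")

# LOD: the concentration whose predicted signal exceeds the blank by 3 SD
blank, sd_blank = 100.0, 2.0
lod = 10 ** ((blank + 3 * sd_blank - intercept) / slope)
print(f"LOD ~ {lod * 1000:.1f} nM")   # roughly 4-5 nM with these toy numbers
```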
Long range personalized cancer treatment strategies incorporating evolutionary dynamics.
Yeang, Chen-Hsiang; Beckman, Robert A
2016-10-22
Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling, or compressed sensing (CS), works on the assumption that the underlying signal is sparse or compressible. It relies on the trans-informational capability of the measurement matrix employed and of the resultant measurements, and it uses optimization-based algorithms for signal reconstruction. CS is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar, and information theory by analyzing information flows in CS-radar from sparse scenes to measurements, and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within given distortion thresholds, for differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
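For orientation, a standard compressed-sensing rule of thumb (not the paper's exact information-theoretic bound) relates the necessary number of measurements m to the scene dimension n and sparsity k as m ≳ C·k·log(n/k), with C depending on the measurement matrix and the target distortion/SNR.

```python
import math

# Back-of-the-envelope measurement count for k-sparse scenes of dimension n,
# using the common m >= C * k * log(n / k) heuristic. C = 2 is an assumption.

def min_measurements(n, k, C=2.0):
    return math.ceil(C * k * math.log(n / k))

for sparsity in (10, 50, 200):
    print(sparsity, "->", min_measurements(n=4096, k=sparsity))
```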
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding the optimal barrier policy for an insurance risk model when the dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
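A Kiefer-Wolfowitz-type stochastic approximation, one member of the algorithm class described, can be sketched as follows; the toy surplus simulator and all constants are assumptions, and only noisy dividend observations are used, as in the paper.

```python
import numpy as np

# Stochastic approximation of the dividend barrier from noisy payout
# observations. The compound-Poisson surplus model below is a toy stand-in.

rng = np.random.default_rng(2)

def simulated_dividends(barrier, horizon=2000):
    """Premiums in, random claims out; surplus above `barrier` is paid out."""
    surplus, paid = 5.0, 0.0
    for _ in range(horizon):
        surplus += 0.11                              # premium per step
        if rng.random() < 0.1:
            surplus -= rng.exponential(1.0)          # claim
        if surplus < 0:
            break                                    # ruin stops dividends
        if surplus > barrier:
            paid += surplus - barrier
            surplus = barrier
    return paid

b = 8.0
for n in range(1, 201):
    a_n, c_n = 2.0 / n, 1.0 / n ** 0.25              # standard KW step sizes
    grad = (simulated_dividends(b + c_n) - simulated_dividends(b - c_n)) / (2 * c_n)
    b += a_n * grad                                  # ascend expected dividends
    b = min(max(b, 0.5), 20.0)                       # keep the iterate bounded
print("estimated optimal barrier:", round(b, 2))
```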
The standpoint of persons with hearing loss on work disparities and workplace accommodations.
Shaw, Lynn; Tetlaff, Britta; Jennings, Mary Beth; Southall, Kenneth E
2013-01-01
The perspectives of persons who live and work with hearing loss were sought to examine workplace accommodation challenges and strategies. A convenience sample of seven older adults with hearing loss participated in in-depth interviews. A systematic grounded theory approach informed the study design and analysis. Categories of facilitators and challenges in the data were identified through axial coding and clustering. Core categories of social processes emerged through constant comparison and theoretical sampling of the data, revealing the actions and interactions used to negotiate or implement adaptations or workplace accommodations. Persons with hearing loss use a range of strategies to live and work with hearing loss. The social processes used to navigate the challenges of working with hearing loss and to manage optimal work performance included self-accommodation, self-advocacy, self-management of hearing loss, and lobbying. Success in overcoming work disparities for persons with hearing loss requires individuals to take control of identifying their needs within the workplace and at home, and to negotiate for specific accommodations. These strategies and processes draw attention to the need for a repository of contextualized workplace accommodation strategies for improving communication and hearing in the workplace. Further to this, a best-practice guide for use by workers, employers, and work rehabilitation and health care workers is indicated.
Performance of the SBRC 190, a cryogenic multiplexer for photoconductor arrays
NASA Technical Reports Server (NTRS)
Dotson, Jessie L.; Koerber, C. T.; Mason, C. G.; Simpson, J. P.; Moore, E. M.; Witteborn, F. C.; Farhoomand, J.; Erickson, E. F.; DeVincenzi, D. (Technical Monitor)
2002-01-01
The SBRC 190 cryogenic readouts were developed for use with far-infrared arrays of Ge:Sb and Ge:Ga photoconductor detectors. The SBRC 190 provides an AC-coupled CTIA (capacitance transimpedance amplifier) unit cell for each detector and multiplexes up to 32 detectors. This paper presents our test results characterizing and optimizing the performance of these novel devices. We will discuss their basic behavior in addition to describing the trade-offs inherent in different sampling strategies.
Multiplex pyrosequencing of InDel markers for forensic DNA analysis.
Bus, Magdalena M; Karas, Ognjen; Allen, Marie
2016-12-01
Capillary electrophoresis (CE) is commonly used for fragment-length separation of markers in forensic DNA analysis. In this study, pyrosequencing technology was used as an alternative and rapid tool for the analysis of biallelic InDel (insertion/deletion) markers for individual identification. DNA typing is based on a subset of the InDel markers included in the Investigator® DIPplex Kit, which are sequenced in a multiplex pyrosequencing analysis. To facilitate the analysis of degraded DNA, the polymerase chain reaction (PCR) fragments were kept short in the primer design. Samples from individuals of Swedish origin were genotyped using the pyrosequencing strategy and by analysis of the Investigator® DIPplex markers with CE. A comparison between the pyrosequencing and CE data revealed concordant results, demonstrating robust and correct genotyping by pyrosequencing. Using an optimal marker combination and a directed dispensation strategy, five markers could be multiplexed and analyzed simultaneously. In this proof-of-principle study, we demonstrate that multiplex InDel pyrosequencing analysis is possible. However, further studies on degraded samples, lower DNA quantities, and mixtures will be required to fully optimize InDel analysis by pyrosequencing for forensic applications. Overall, although CE analysis is implemented in most forensic laboratories, multiplex InDel pyrosequencing offers a cost-effective alternative for some applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Li, Zhou; Xiao, Chong; Fan, Shaojuan; Deng, Yu; Zhang, Wenshuai; Ye, Bangjiao; Xie, Yi
2015-05-27
Vacancies are a very important class of phonon scattering center for reducing thermal conductivity in the development of high-efficiency thermoelectric materials. However, a conventional monovacancy may also act as an electron or hole acceptor, thereby modifying the electrical transport properties and even worsening the thermoelectric performance. This issue motivates the creation of new types of vacancies that scatter phonons effectively while not deteriorating electrical transport. Herein, taking BiCuSeO as an example, we report the first successful synergistic optimization of electrical and thermal parameters through Bi/Cu dual vacancies. As expected, compared with the pristine and monovacancy samples, these dual vacancies further increase phonon scattering, resulting in an ultralow thermal conductivity of 0.37 W m⁻¹ K⁻¹ at 750 K. Most importantly, clear-cut evidence from positron annihilation unambiguously confirms interlayer charge transfer between the Bi/Cu dual vacancies, which results in a significant increase of electrical conductivity with a relatively high Seebeck coefficient. As a result, BiCuSeO with Bi/Cu dual vacancies shows a high ZT value of 0.84 at 750 K, superior to that of its native sample and monovacancy-dominant counterparts. These findings elucidate a new strategy and direction for the rational design of high-performance thermoelectric materials.
NASA Astrophysics Data System (ADS)
Sturdevant-Rees, P. L.; Bourdeau, D.; Baker, R.; Long, S. C.; Barten, P. K.
2004-05-01
Microbial and water-quality measurements are collected during storm events under a variety of meteorological and land-use conditions in order to 1) identify risk of Cryptosporidium oocysts, Giardia cysts and other constituents, including microbial indicator organisms, entering surface waters from various land uses during periods of surface runoff; 2) optimize storm sampling procedures for these parameters; and 3) optimize strategies for accurate determination of constituent loads. The investigation is focused on four isolated land uses: forested with free ranging wildlife, beaver influenced forested with free ranging wildlife, residential/commercial, and dairy farm grazing/pastureland using an upstream and downstream sampling strategy. Traditional water-quality analyses include pH, temperature, turbidity, conductivity, total suspended solids, total phosphorus, total Kjeldahl-nitrogen, and ammonia nitrogen, Giardia cysts and Cryptosporidium oocysts. Total coliforms and fecal coliforms are measured as industry standard microbial analyses. Sorbitol-fermenting Bifidobacteria, Rhodococcus coprophilus, Clostridium perfringens spores, and Somatic and F-specific coliphages are measured at select sites as potential alternative source-specific indicator organisms. Upon completion of the project, the final database will consist of wet weather transport data for a set of parameters during twenty-four distinct storm-events in addition to monthly baseline data. A subset of the results to date will be presented, with focus placed on demonstrating the impact of beaver on constituent loadings over a variety of hydrologic and meteorological conditions.
Buitrón, G; Moreno-Andrade, I; Linares-García, J A; Pérez, J; Betancur, M J; Moreno, J A
2007-01-01
This work presents the results and discussion of the application of an optimally controlled influent flow rate strategy to biodegrade, in a discontinuous reactor, a synthetic wastewater containing 4-chlorophenol. An aerobic automated discontinuous reactor system of 1.3 m3, with a useful volume of 0.75 m3 and an exchange volume of 60%, was used. As part of the control strategy, influent is fed into the reactor in such a way as to obtain the maximal degradation rate while avoiding inhibition of the microorganisms. Such an optimal strategy was able to manage increments of the 4-chlorophenol concentration in the influent between 250 and 1000 mg/L. It was shown that the optimally controlled influent flow rate strategy brings savings in reaction time and flexibility in treating high concentrations of an influent with toxic characteristics.
Revenue Share between Layers and Investment Incentive for ISP in the Internet Market
NASA Astrophysics Data System (ADS)
Unno, Masaru; Xu, Hua
In this paper, we consider a revenue-sharing and network investment problem between an Internet service provider (ISP) and a content provider (CP) by applying dynamic agency theory. We formulate the problem as a principal-agent problem in which the ISP is the principal and the CP is the agent. The principal-agent problem is transformed into a stochastic optimal control problem in which the objectives of the ISP are to find an optimal revenue-sharing strategy and a network investment strategy, and to advise an incentive-compatible effort level to the CP. Sufficient conditions for the existence of the optimal revenue-sharing strategy, the optimal investment strategy, and the incentive-compatible effort of the CP are obtained. A numerical example is solved to show the existence of such strategies. The practical implications of the results obtained in the paper are also discussed.
The Development of Quality Control Genotyping Approaches: A Case Study Using Elite Maize Lines.
Chen, Jiafa; Zavala, Cristian; Ortega, Noemi; Petroli, Cesar; Franco, Jorge; Burgueño, Juan; Costich, Denise E; Hearne, Sarah J
2016-01-01
Quality control (QC) of germplasm identity and purity is a critical component of breeding and conservation activities. SNP genotyping technologies and increased availability of markers provide the opportunity to employ genotyping as a low-cost and robust component of this QC. In the public sector, available low-cost SNP QC genotyping methods have been developed from very limited panels of 1,000 to 1,500 markers, without broad selection of the most informative SNPs. Selection of optimal SNPs, definition of appropriate germplasm sampling, and platform selection all impact logistical and resource-use considerations for breeding and conservation applications when mainstreaming QC. In order to address these issues, we evaluated the selection and use of SNPs for QC applications from large DArTSeq data sets generated from CIMMYT maize inbred lines (CMLs). Two QC genotyping strategies were developed: the first is a "rapid QC", employing a small number of SNPs to identify potential mislabeling of seed packages or plots; the second is a "broad QC", employing a larger number of SNPs, used to identify each germplasm entry and to measure heterogeneity. The optimal marker selection strategies combined the selection of markers with high minor allele frequency, sampling of clustered SNPs in proportion to marker cluster distance, and selecting markers that maintain a uniform genomic distribution. The rapid and broad QC SNP panels selected using this approach were further validated using blind test assessments of related re-generation samples. The influence of sampling within each line was evaluated. Sampling 192 individuals would result in close to a 100% probability of detecting a 5% contamination in the entry, and approximately a 98% probability of detecting a 2% contamination of the line. These results provide a framework for the establishment of QC genotyping. A comparison of financial and time costs for use of these approaches across different platforms is discussed, providing a framework for institutions involved in maize conservation and breeding to assess the resource-use effectiveness of QC genotyping. Application of these research findings, in combination with existing QC approaches, will ensure the regeneration, distribution, and use in breeding of true-to-type inbred germplasm. These findings also provide an effective approach to optimize SNP selection for QC genotyping in other species.
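The quoted sampling recommendation follows from elementary probability: the chance of catching at least one off-type among n sampled individuals, when a fraction p of the entry is contaminated, is 1 − (1 − p)^n.

```python
# Arithmetic behind the sampling recommendation above.

def detection_probability(n, p):
    return 1 - (1 - p) ** n

for p in (0.05, 0.02):
    print(f"n=192, p={p:.0%}: detection probability {detection_probability(192, p):.3f}")
# p=5%: ~1.000; p=2%: ~0.979, matching the ~98% figure quoted in the abstract.
```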
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
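Under the uncorrelated-flow-field assumption the paper arrives at, the variance of the mean discharge falls inversely with the number of effectively independent samples in the exposure time. A toy illustration with invented numbers (the paper derives the variance from velocity time-series statistics, not from this shortcut):

```python
import numpy as np

# If successive samples decorrelate over dt, exposure time T yields N = T/dt
# independent samples and the relative standard error scales as sigma/sqrt(N).

sigma_rel = 0.05          # relative SD of a single-sample discharge estimate, assumed
dt = 1.0                  # effective decorrelation interval [s], assumed
for T in (60, 300, 900):  # candidate exposure times [s]
    n_indep = T / dt
    print(f"T={T:4d}s -> relative SE of discharge ~ {sigma_rel / np.sqrt(n_indep):.4f}")
```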
On scheduling task systems with variable service times
NASA Astrophysics Data System (ADS)
Maset, Richard G.; Banawan, Sayed A.
1993-08-01
Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
Parkinson, Don-Roger; Churchill, Tonia J; Rolls, Wyn
2008-11-01
Methyl benzoate, as a biomarker for mold growth, was used as a specific target compound indicating outgassed MVOC products from mold. Both real and surrogate samples were analyzed from a variety of matrices, including carpet, ceiling tiles, dried paint surfaces, wallboard, and wallboard paper. Sampling parameters, including desorption, extraction time, incubation temperature, pH, salt effects, and spinning rate, were optimized. The results suggest that extraction and detection of methyl benzoate among other MVOCs can be accomplished cleanly by SPME-GC/MS methods. With detection limits (LOD = 1.5 ppb) and linearity (0.999) over a range of 100 ppm to 2 ppb, this work demonstrates that such a green technique can be contemplated for quick assessment, or as part of an ongoing assessment strategy, to detect mold growth in common indoor buildings and materials for both qualitative and quantitative determinations. Importantly, no matrix effects are observed under optimized extraction conditions.
Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples
2012-01-01
Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466
The option value of delay in health technology assessment.
Eckermann, Simon; Willan, Andrew R
2008-01-01
Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies, based on evidence of incremental effects, incremental cost, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence is suggested in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods for estimating the option value of delaying decisions to invest have previously been developed for investments that are irreversible with an uncertain payoff over time, where information is assumed fixed. In HTA, however, decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is shown to be preferred to adopt and no trial (AN) when the EVSI exceeds the expected costs of information, including the expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal reduce the EVSI of this strategy through both a lower probability that reversal is optimal and lower payoffs when reversal is optimal. Hence, decision makers generally face joint research and reimbursement decisions (AN, DT, and AT), with the optimal choice dependent on the costs of reversal as well as the opportunity costs of delay and the prior distribution of INMB.
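A minimal Monte Carlo sketch of the EVSI logic, assuming a normal prior on INMB and a normal likelihood for a proposed trial; all numbers are illustrative, not from the article.

```python
import numpy as np

# EVSI = E[value of the optimal decision after seeing the trial] minus the
# value of the optimal decision now, scaled by the affected population.

rng = np.random.default_rng(3)
mu0, sd0 = 500.0, 800.0            # prior mean/SD of INMB per patient, assumed
sigma, n = 4000.0, 300             # sampling SD and proposed trial size, assumed
pop = 10_000                       # patients affected by the decision, assumed

theta = rng.normal(mu0, sd0, 100_000)                  # draws from the prior
xbar = rng.normal(theta, sigma / np.sqrt(n))           # predicted trial means
w = sd0**2 / (sd0**2 + sigma**2 / n)                   # conjugate-normal weight
post_mean = w * xbar + (1 - w) * mu0

value_now = max(np.mean(theta), 0.0) * pop             # adopt iff prior mean > 0
value_after = np.mean(np.maximum(post_mean, 0.0)) * pop
print("EVSI =", round(value_after - value_now, 1))
```

Comparing this EVSI against the expected costs of information, including the opportunity cost of patients not treated during the trial, reproduces the DT-versus-AN criterion stated above.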
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays, a key issue is reducing the energy consumption of road vehicles, and several energy optimization strategies can be distinguished. The most popular, though least sophisticated, is so-called eco-driving, which emphasizes particular driver behavior. In a more sophisticated variant, driver behavior is supported by a control system that measures driving parameters and suggests appropriate actions to the driver. Another strategy applies various engineering solutions that aid the optimization of energy consumption; such systems take different parameters measured in real time into account and then act according to procedures loaded into the control computer of the vehicle. A third strategy is based on optimizing the design of the vehicle, taking into account especially the main sub-systems of the technical mean. In this approach, the optimal level of energy consumption of a vehicle is obtained as the synergetic result of individually optimizing particular constructional sub-systems. Three main sub-systems can be distinguished: the structural, the drive, and the control sub-systems. For the structural sub-system, optimizing the level of energy consumption involves optimizing the weight and aerodynamic parameters, the result being an optimized vehicle body. For the drive sub-system, it involves optimizing fuel or power consumption using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.
Multifunctional Mesoscale Observing Networks.
NASA Astrophysics Data System (ADS)
Dabberdt, Walter F.; Schlatter, Thomas W.; Carr, Frederick H.; Friday, Elbert W. Joe; Jorgensen, David; Koch, Steven; Pirone, Maria; Ralph, F. Martin; Sun, Juanzhen; Welsh, Patrick; Wilson, James W.; Zou, Xiaolei
2005-07-01
More than 120 scientists, engineers, administrators, and users met on 8-10 December 2003 in a workshop format to discuss the needs for enhanced three-dimensional mesoscale observing networks. Improved networks are seen as being critical to advancing numerical and empirical modeling for a variety of mesoscale applications, including severe weather warnings and forecasts, hydrology, air-quality forecasting, chemical emergency response, transportation safety, energy management, and others. The participants shared a clear and common vision for the observing requirements: existing two-dimensional mesoscale measurement networks do not provide observations of the type, frequency, and density that are required to optimize mesoscale prediction and nowcasts. To be viable, mesoscale observing networks must serve multiple applications, and the public, private, and academic sectors must all actively participate in their design and implementation, as well as in the creation and delivery of value-added products. The mesoscale measurement challenge can best be met by an integrated approach that considers all elements of an end-to-end solution: identifying end users and their needs, designing an optimal mix of observations, defining the balance between static and dynamic (targeted or adaptive) sampling strategies, establishing long-term test beds, and developing effective implementation strategies. Detailed recommendations are provided pertaining to nowcasting, numerical prediction and data assimilation, test beds, and implementation strategies.
Wang, Mingyu
2006-04-01
An innovative management strategy is proposed for optimized and integrated environmental management of regional or national groundwater contamination prevention and restoration, allied with consideration of sustainable development. This management strategy accounts for the availability of limited resources, human health and ecological risks from groundwater contamination, costs of groundwater protection measures, beneficial uses and values of groundwater protection, and sustainable development. Six categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts of groundwater contamination, including human health and ecological risks, are individually taken into account. System optimization principles are implemented to support decision-making on the optimal allocation of available resources or budgets to existing contaminated sites and projected contamination sites for maximal risk reduction. Established management constraints, such as budget limitations under the different cost categories, are satisfied at the optimal solution. A stepwise optimization process is proposed in which the first step is to optimally select a limited number of sites where remediation or prevention measures will be taken, from all the existing contaminated and projected contamination sites, based on the total regionally or nationally available budget over a certain time frame, such as 10 years. Subsequent optimization steps then determine the optimal year-by-year distribution of the available yearly budgets among the selected sites. A hypothetical case study demonstrates a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed for an adequate understanding of implementations of the management strategy.
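The first-step site selection is essentially a budget-constrained selection problem. A sketch using a greedy benefit-cost ratio heuristic on invented site data (the paper's system-optimization formulation would use an exact solver):

```python
# Choose sites to remediate under a total budget so that expected risk
# reduction is maximized. All site data below are illustrative.

sites = [                          # (site, cost in M$, expected risk reduction)
    ("A", 4.0, 9.0), ("B", 2.5, 5.0), ("C", 6.0, 10.0),
    ("D", 1.0, 2.5), ("E", 3.0, 7.5),
]
budget = 10.0

chosen, spent, benefit = [], 0.0, 0.0
for name, cost, gain in sorted(sites, key=lambda s: s[2] / s[1], reverse=True):
    if spent + cost <= budget:     # greedy on benefit per unit cost
        chosen.append(name)
        spent += cost
        benefit += gain
print(chosen, f"cost={spent}, risk reduction={benefit}")
# An exact version would solve the equivalent 0/1 knapsack integer program.
```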
Validation of optimization strategies using the linear structured production chains
NASA Astrophysics Data System (ADS)
Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz
2017-06-01
Different optimization strategies applied to a sequence of several stages of production chains are validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter, exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve to validate the strategies elaborated by the authors. The main goal of the research, however, is selection of the best strategy for optimizing two real metallurgical processes to be investigated in ongoing projects. The first problem is the oxidizing roasting of zinc sulphide concentrate, where the sulphur in the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products must be achieved. The second problem is the lead refining process, consisting of three stages: roasting to the oxide, reduction of the oxide to metal, and oxidizing refining. The strategies that prove most effective on the considered benchmark problems will be candidates for optimizing the industrial processes mentioned above.
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
Teshale, Salom M; Lachman, Margie E
2016-11-01
Past work on selective optimization and compensation (SOC) has focused on between-persons differences and its relationship with global well-being. However, less work examines within-person SOC variation. This study examined whether variation over 7 days in everyday SOC was associated with happiness in a sample of 145 adults ages 22-94. Age differences in this relationship, the moderating effects of health, and lagged effects were also examined. On days in which middle-age and older adults and individuals with lower health used more SOC, they also reported greater happiness. Lagged effects indicated lower happiness led to greater subsequent SOC usage. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Optimal Trajectories Generation in Robotic Fiber Placement Systems
NASA Astrophysics Data System (ADS)
Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane
2017-06-01
The paper proposes a methodology for optimal trajectories generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one where the time-optimal motions are generated by using dynamic programming. The developed strategy for the optimization algorithm tuning allows essentially reducing the computing time and obtaining trajectories satisfying industrial constraints. Feasibilities and advantages of the proposed methodology are confirmed by an application example.
Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation
NASA Astrophysics Data System (ADS)
Liu, Hua; Krishnamachari, Bhaskar
2008-08-01
Energy saving is one of the major concerns for low-rate personal area networks. This paper models energy consumption for beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star network formation for the IEEE 802.15.4 standard. We investigate two different upstream (device-to-coordinator data transfer) strategies: (a) a tracking strategy, in which devices wake up and check status (track the beacon) in each time slot; and (b) a non-tracking strategy, in which nodes wake up only upon data arrival and stay awake until the data are transmitted to the coordinator. We consider the trade-off between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as the contention access period interval, upstream speed, etc.) lead to different energy-optimal strategies under maximum delay constraints. Hence, according to the application and system settings, each node might choose a different strategy to achieve energy optimization from both a self-interested and a system view. We give the relations among the tunable parameters in formulas and plots to illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the application data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
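The structure of the trade-off between the two strategies can be illustrated with a toy energy model; all constants below are invented, and the real analysis optimizes against explicit delay constraints.

```python
# Toy comparison of the tracking and non-tracking upstream strategies.

E_TRACK = 0.2      # energy to wake and check one beacon slot [mJ], assumed
E_AWAKE = 1.0      # energy per slot spent awake awaiting transmission [mJ], assumed
E_TX = 3.0         # energy to transmit one packet [mJ], assumed
SLOTS = 100        # slots in the window considered

def tracking_energy(arrival_rate):
    """Wake every slot; transmit immediately when data arrive."""
    return SLOTS * E_TRACK + arrival_rate * SLOTS * E_TX

def non_tracking_energy(arrival_rate, mean_wait_slots=5):
    """Sleep until data arrive, then stay awake until transmission."""
    return arrival_rate * SLOTS * (mean_wait_slots * E_AWAKE + E_TX)

for lam in (0.01, 0.05, 0.2):     # packets per slot
    t, nt = tracking_energy(lam), non_tracking_energy(lam)
    print(f"lambda={lam}: tracking={t:.1f} mJ, non-tracking={nt:.1f} mJ")
# Low arrival rates favor non-tracking; high rates favor tracking, matching the
# parameter-dependent conclusions the abstract describes.
```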
Inconsistent Investment and Consumption Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
Han, Xiaolong; Song, Wenxia; Liu, Guodong; Li, Zhonghai; Yang, Piao; Qu, Yinbo
2017-03-01
Medium optimization and repeated fed-batch fermentation were performed to improve cellulase productivity by P. oxalicum RE-10 in submerged fermentation. First, Plackett-Burman design (PBD) and central composite design (CCD) were used to optimize the medium for cellulase production. PBD demonstrated that wheat bran and NaNO3 had significant influences on cellulase production. The CCD results showed that a maximum filter paper activity (FPA) of 8.61 U/mL could be achieved in Erlenmeyer flasks. The maximal FPA reached 12.69 U/mL in submerged batch fermentation in a 7.5-L stirred tank, 1.76-fold higher than that on the original medium. Then, the repeated fed-batch fermentation strategy was successfully applied, increasing the cellulase productivity from 105.75 U/L/h in batch fermentation to 158.38 U/L/h. The cellulase activity and the glucan conversion of delignified corn cob residue hydrolysis showed no significant difference between the enzymes sampled from different cycles of the repeated fed-batch fermentation and those from the batch culture. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bayesian Lagrangian Data Assimilation and Drifter Deployment Strategies
NASA Astrophysics Data System (ADS)
Dutt, A.; Lermusiaux, P. F. J.
2017-12-01
Ocean currents transport a variety of natural (e.g. water masses, phytoplankton, zooplankton, sediments, etc.) and man-made materials and other objects (e.g. pollutants, floating debris, search and rescue, etc.). Lagrangian Coherent Structures (LCSs) or the most influential/persistent material lines in a flow, provide a robust approach to characterize such Lagrangian transports and organize classic trajectories. Using the flow-map stochastic advection and a dynamically-orthogonal decomposition, we develop uncertainty prediction schemes for both Eulerian and Lagrangian variables. We then extend our Bayesian Gaussian Mixture Model (GMM)-DO filter to a joint Eulerian-Lagrangian Bayesian data assimilation scheme. The resulting nonlinear filter allows the simultaneous non-Gaussian estimation of Eulerian variables (e.g. velocity, temperature, salinity, etc.) and Lagrangian variables (e.g. drifter/float positions, trajectories, LCSs, etc.). Its results are showcased using a double-gyre flow with a random frequency, a stochastic flow past a cylinder, and realistic ocean examples. We further show how our Bayesian mutual information and adaptive sampling equations provide a rigorous efficient methodology to plan optimal drifter deployment strategies and predict the optimal times, locations, and types of measurements to be collected.
Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal
ERIC Educational Resources Information Center
Steinley, Douglas; Hubert, Lawrence
2008-01-01
This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
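The dynamic-programming recursion that optimally subdivides an ordered object set into K contiguous clusters (Fisher's method) can be sketched as follows; the data and K are illustrative, and the object order is assumed to come from the quadratic-assignment heuristic the abstract describes.

```python
import numpy as np

# Optimal partition of a pre-ordered sequence into K contiguous clusters,
# minimizing within-cluster sum of squared errors via dynamic programming.

def sse(prefix, prefix2, i, j):
    """Sum of squared deviations of x[i:j] about its mean, via prefix sums."""
    n, s, s2 = j - i, prefix[j] - prefix[i], prefix2[j] - prefix2[i]
    return s2 - s * s / n

def order_constrained_kmeans(x, K):
    x = np.asarray(x, dtype=float)        # object order assumed already fixed
    n = len(x)
    prefix = np.concatenate([[0.0], np.cumsum(x)])
    prefix2 = np.concatenate([[0.0], np.cumsum(x * x)])
    D = np.full((K + 1, n + 1), np.inf)   # D[k, j]: best SSE of x[:j] in k parts
    back = np.zeros((K + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):     # last cluster spans x[i:j]
                cand = D[k - 1, i] + sse(prefix, prefix2, i, j)
                if cand < D[k, j]:
                    D[k, j], back[k, j] = cand, i
    cuts, j = [], n                        # recover cluster boundaries
    for k in range(K, 1, -1):
        j = back[k, j]
        cuts.append(j)
    return D[K, n], sorted(cuts)

best_sse, cuts = order_constrained_kmeans([1, 2, 2, 8, 9, 10, 25, 26], K=3)
print(f"optimal within-cluster SSE {best_sse:.2f}; cluster boundaries at {cuts}")
```

Because clusters must be contiguous in the given order, this recursion finds the globally optimal subdivision in polynomial time, which is what makes the order-constrained strategy tractable.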
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-07-01
Overexploitation of coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, huge costs and long-term remediation measures are required to restore the contaminated aquifers. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater-intrusion-prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models to simulate the aquifer processes, which are then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
Yu, Xiang; Zhang, Xueqing
2017-01-01
Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.
Birgani, Nasrin Taghipour; Elhami, Shahla
2017-01-01
A simple and sensitive method was proposed for the preconcentration of trace levels of Al(III) prior to its determination by spectrophotometry, based on dispersive liquid-liquid microextraction. The complexation of the Al(III) was performed by chelation with Eriochrome Cyanine R (ECR). In this method, cetyltrimethyl ammonium bromide (CTAB) as a dispersant was dissolved in chloroform as an extractant solvent, and then the solution was rapidly injected by a syringe into the samples containing Al(III), which had already been complexed by ECR at optimized pH. Various parameters were studied and optimized for a 10 mL sample volume. Under the optimum conditions, the LOD (3 times the SD of 10 replicate readings of the reagent blank) and the dynamic range of the calibration obtained were 0.2 ng mL⁻¹ (7 nM) and 1.0-80.0 ng mL⁻¹, respectively. The RSDs for eight replicate determinations of 10 and 60 ng mL⁻¹ of Al(III) were 3.3 and 1.8%, respectively. This strategy was successfully applied to determine the Al concentration in water, wastewater, yogurt, apple, carrot, celery, bread, potato, urine, and Al-Mg syrup samples.
Physiologically Relevant Changes in Serotonin Resolved by Fast Microdialysis
2013-01-01
Online microdialysis is a sampling and detection method that enables continuous interrogation of extracellular molecules in freely moving subjects under behaviorally relevant conditions. A majority of recent publications using brain microdialysis in rodents report sample collection times of 20–30 min. These long sampling times are due, in part, to limitations in the detection sensitivity of high performance liquid chromatography (HPLC). By optimizing separation and detection conditions, we decreased the retention time of serotonin to 2.5 min and the detection threshold to 0.8 fmol. Sampling times were consequently reduced from 20 to 3 min per sample for online detection of serotonin (and dopamine) in brain dialysates using a commercial HPLC system. We developed a strategy to collect and to analyze dialysate samples continuously from two animals in tandem using the same instrument. Improvements in temporal resolution enabled elucidation of rapid changes in extracellular serotonin levels associated with mild stress and circadian rhythms. These dynamics would be difficult or impossible to differentiate using conventional microdialysis sampling rates. PMID:23614776
Influences of misprediction costs on solar flare prediction
NASA Astrophysics Data System (ADS)
Huang, Xin; Wang, HuaNing; Dai, XingHua
2012-10-01
The misprediction costs of flaring and non-flaring samples differ across applications of solar flare prediction. Hence, solar flare prediction is considered a cost-sensitive problem. A cost-sensitive solar flare prediction model is built by modifying the basic decision tree algorithm. The inconsistency rate with an exhaustive search strategy is used to determine the optimal combination of magnetic field parameters in an active region. These selected parameters are used as the inputs of the solar flare prediction model. The performance of the cost-sensitive solar flare prediction model is evaluated for different thresholds of solar flares. It is found that more flaring samples are correctly predicted, and more non-flaring samples are wrongly predicted, as the cost of wrongly predicting flaring samples as non-flaring increases, and that a larger such cost is required for higher flare thresholds. This can serve as a guideline for choosing a proper cost to meet the requirements of different applications.
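The effect described, trading false alarms for detections as the misprediction cost of flaring samples grows, can be reproduced with off-the-shelf class weighting; the sketch below uses scikit-learn as a stand-in for the authors' modified decision-tree algorithm, with synthetic features in place of the selected magnetic-field parameters.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Cost-sensitive decision tree via class weights. Data are synthetic; class 1
# plays the role of "flaring" samples.

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 1.5).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for fn_cost in (1, 5, 20):     # cost of predicting a flaring sample as non-flaring
    clf = DecisionTreeClassifier(max_depth=3, class_weight={0: 1, 1: fn_cost},
                                 random_state=0)
    clf.fit(Xtr, ytr)
    pred = clf.predict(Xte)
    pod = (pred[yte == 1] == 1).mean()     # probability of detection
    far = (yte[pred == 1] == 0).mean() if (pred == 1).any() else 0.0
    print(f"cost {fn_cost:2d}: POD = {pod:.2f}, FAR = {far:.2f}")
# Raising the cost of missing flares increases detections at the price of more
# false alarms, the trend the abstract reports.
```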
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
A decade of aquatic invasive species (AIS) early detection ...
As an invasion-prone location, the St. Louis River Estuary (SLRE) has been a case study for ongoing research to develop the framework for a practical Great Lakes monitoring network for early detection of aquatic invasive species (AIS). Early detection, however, necessitates finding new invaders before they are common. Here we outline our research approach and findings (2005-present), including strategies to increase detection efficiency by optimizing specimen collection and identification methods. Initial surveys were designed to over-sample, amassing data as the basis for numerical experiments investigating the effort required for a given detection probability. Later surveys tested the outcome of implementing these strategies, examined the potential benefits of sampling larval fish instead of adults, and explored the prospect of using advanced DNA-based methods as an alternative to traditional taxonomy. To date we have identified several previously undetected invertebrate invaders, developed survey design and gear recommendations, and refined the search strategy for systems beyond the SLRE. In addition, because we have accumulated such a large body of data, we now have the basis to show spatial-temporal trends for native and non-native species in the SLRE.
Ferguson, Philip E; Sales, Catherine M; Hodges, Dalton C; Sales, Elizabeth W
2015-01-01
Recent publications have emphasized the importance of a multidisciplinary strategy for maximum conservation and utilization of lung biopsy material for advanced testing, which may determine therapy. This paper quantifies the effect of a multidisciplinary strategy implemented to optimize and increase tissue volume in CT-guided transthoracic needle core lung biopsies. The strategy was three-pronged: (1) once there was confidence diagnostic tissue had been obtained and if safe for the patient, additional biopsy passes were performed to further increase volume of biopsy material, (2) biopsy material was placed in multiple cassettes for processing, and (3) all tissue ribbons were conserved when cutting blocks in the histology laboratory. This study quantifies the effects of strategies #1 and #2. This retrospective analysis comparing CT-guided lung biopsies from 2007 and 2012 (before and after multidisciplinary approach implementation) was performed at a single institution. Patient medical records were reviewed and main variables analyzed include biopsy sample size, radiologist, number of blocks submitted, diagnosis, and complications. The biopsy sample size measured was considered to be directly proportional to tissue volume in the block. Biopsy sample size increased 2.5 fold with the average total biopsy sample size increasing from 1.0 cm (0.9-1.1 cm) in 2007 to 2.5 cm (2.3-2.8 cm) in 2012 (P<0.0001). The improvement was statistically significant for each individual radiologist. During the same time, the rate of pneumothorax requiring chest tube placement decreased from 15% to 7% (P = 0.065). No other major complications were identified. The proportion of tumor within the biopsy material was similar at 28% (23%-33%) and 35% (30%-40%) for 2007 and 2012, respectively. The number of cases with at least two blocks available for testing increased from 10.7% to 96.4% (P<0.0001). The effect of this multidisciplinary strategy to CT-guided lung biopsies was effective in significantly increasing tissue volume and number of blocks available for advanced diagnostic testing.
Danet-Lamasou, Marie; Asselineau, Julien; Perez, Paul; Vivot, Alexandre; Nunes, Marie-Laure; Loiseau, Hugues; San-Galli, François; Cherifi-Gatta, Blandine; Corcuff, Jean-Benoît; Tabarin, Antoine
2015-02-01
The ability of late-night salivary cortisol (LNSC) to accurately screen for postoperative recurrence of Cushing's disease (CD) at an early stage is unknown. The aim of this study was to compare the accuracy of multiple sampling strategies and to suggest the optimal number of LNSC samples needed for diagnosing post-surgical recurrence of CD at an early stage. Retrospective analysis in a single centre. Thirty-six patients in surgical remission of CD had successive measurements of LNSC, defined as 'sequences', using a locally modified RIA assay as part of long-term follow-up (69.2 ± 10.6 months). Patients underwent an extensive biochemical evaluation within 3 months before or after a sequence of saliva sampling and were classified as being in remission or in early-stage recurrence. The accuracy of three diagnostic strategies combining two, three or four LNSC results from a sequence was estimated using areas under the ROC curves (AUC), sensitivity, specificity and predictive values. Forty-four sequences of LNSC measurements were available. Fifty-two percent of sequences were performed during early-stage recurrence. The intrasequence variability of LNSC was higher during recurrence than during remission (medians of SDs: 2.1 vs 0.5 nM; P < 0.0001). AUCs from ROC curves ranged from 0.93 to 0.96 depending on the strategy. For 90% sensitivities, the best specificities (92.9% and 90.9%) were achieved by strategies taking into account three or four measurements, summarized either by their mean or by their maximum value. An increase in LNSC concentration is an early abnormality during post-surgical recurrence of CD. However, owing to major within-patient variability of LNSC from one day to another, a screening strategy using three or four samples collected on successive days may be recommended to detect early-stage recurrence of CD with high accuracy. © 2014 John Wiley & Sons Ltd.
Optimization Under Uncertainty for Wake Steering Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Annoni, Jennifer; King, Ryan N.
Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
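The core OUU idea, optimizing expected rather than nominal power over the yaw-angle error distribution, can be sketched in a few lines. The toy two-turbine power model, the 5-degree Gaussian yaw uncertainty, and all constants below are assumptions, not the paper's wake model; the expectation is taken with Gauss-Hermite quadrature.

```python
import numpy as np

SIGMA = 5.0  # assumed yaw-angle uncertainty, degrees

def farm_power(gamma_deg):
    """Hypothetical normalized two-turbine power: yawing costs the upstream
    turbine ~cos^3(gamma); steering the wake away restores downstream output."""
    g = np.asarray(gamma_deg, dtype=float)
    upstream = np.cos(np.radians(g)) ** 3
    downstream = 1.0 - 0.4 * np.exp(-((g / 12.0) ** 2))  # wake loss fades with yaw
    return upstream + downstream

# Gauss-Hermite(e) nodes/weights for E[P(gamma + eps)], eps ~ N(0, SIGMA^2)
x, w = np.polynomial.hermite_e.hermegauss(21)

def expected_power(gamma_deg):
    return farm_power(gamma_deg + SIGMA * x) @ w / np.sqrt(2.0 * np.pi)

grid = np.linspace(0.0, 40.0, 401)
g_det = grid[np.argmax(farm_power(grid))]                   # deterministic optimum
g_ouu = grid[np.argmax([expected_power(g) for g in grid])]  # OUU optimum
print(f"deterministic yaw {g_det:.1f} deg -> E[P] = {expected_power(g_det):.4f}")
print(f"OUU-optimal yaw   {g_ouu:.1f} deg -> E[P] = {expected_power(g_ouu):.4f}")
```

Comparing the two printed expected powers reproduces the qualitative finding: the deterministic yaw setting is no longer best once realistic yaw scatter is averaged in.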
Optimization Under Uncertainty for Wake Steering Strategies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Annoni, Jennifer; King, Ryan N
Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
Optimization Under Uncertainty for Wake Steering Strategies
NASA Astrophysics Data System (ADS)
Quick, Julian; Annoni, Jennifer; King, Ryan; Dykes, Katherine; Fleming, Paul; Ning, Andrew
2017-05-01
Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as “wake steering,” in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
Optimization Under Uncertainty for Wake Steering Strategies
Quick, Julian; Annoni, Jennifer; King, Ryan N.; ...
2017-06-13
Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
2013-01-01
Introduction: Framework Convention on Tobacco Control (FCTC) Articles 20, 21, and 22 call for strong monitoring and reporting of tobacco use and factors influencing use and disease (Articles 20 and 21) and for collaboration among the Parties and relevant organizations to share resources, knowledge, and expertise on all relevant tobacco control strategies (Article 22). Methods: This paper provides background information and discusses research strategies that would strengthen these efforts and better inform the Parties. By necessity, Articles 20 and 21 are discussed separately from Article 22, although 1 example that relates to both 20/21 and 22 is discussed at the end. Results: Twelve important research opportunities on surveillance and evaluation are recognized, along with 4 on collaboration. The authors believe that the 6 most important areas for research would study (a) possible underreporting of tobacco use among certain demographic groups in some countries, (b) measures of industry activities, (c) optimal sampling strategies, (d) sentinel surveillance, (e) networks of tobacco companies and their partners as they promote tobacco use and interfere with implementation of the FCTC, and (f) network/relationship factors that impact diffusion of knowledge and decision making on the implementation of the FCTC. In addition, we call for a review process of existing surveillance and evaluation strategies to coordinate activities to make optimal use of existing resources. This activity would involve networking as prescribed in Article 22. Conclusions: Studies and activities such as these would facilitate control of the tobacco epidemic. PMID:23335488
Giovino, Gary A; Kulak, Jessica A; Kalsbeek, William D; Leischow, Scott J
2013-04-01
Framework Convention on Tobacco Control (FCTC) Articles 20, 21, and 22 call for strong monitoring and reporting of tobacco use and factors influencing use and disease (Articles 20 and 21) and for collaboration among the Parties and relevant organizations to share resources, knowledge, and expertise on all relevant tobacco control strategies (Article 22). This paper provides background information and discusses research strategies that would strengthen these efforts and better inform the parties. By necessity, Articles 20 and 21 are discussed separately from Article 22, although 1 example that relates to both 20/21 and 22 is discussed at the end. Twelve important research opportunities on surveillance and evaluation are recognized, along with 4 on collaboration. The authors believe that the 6 most important areas for research would study (a) possible underreporting of tobacco use among certain demographic groups in some countries, (b) measures of industry activities, (c) optimal sampling strategies, (d) sentinel surveillance, (e) networks of tobacco companies and their partners as they promote tobacco use and interfere with implementation of the FCTC, and (f) network/relationship factors that impact diffusion of knowledge and decision making on the implementation of the FCTC. In addition, we call for a review process of existing surveillance and evaluation strategies to coordinate activities to make optimal use of existing resources. This activity would involve networking as prescribed in Article 22. Studies and activities such as these would facilitate control of the tobacco epidemic.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
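One concrete ingredient of such designs is the information content of candidate sampling times. As a minimal sketch, assuming a one-compartment model with first-order absorption, additive error, and invented nominal parameter values, the code below brute-forces a locally D-optimal three-point schedule; the paper's Bayesian/MCMC treatment of parameter uncertainty and execution error is not reproduced here.

```python
import itertools
import numpy as np

# One-compartment model, first-order absorption; nominal values are invented
DOSE, THETA = 100.0, np.array([1.5, 0.2, 10.0])   # ka (1/h), ke (1/h), V (L)

def conc(t, theta):
    ka, ke, v = theta
    return DOSE * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def d_criterion(times, theta=THETA, h=1e-5):
    """det(J^T J): D-criterion under additive error, with sensitivities
    computed by central finite differences w.r.t. the parameters."""
    t = np.asarray(times)
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        J[:, j] = (conc(t, tp) - conc(t, tm)) / (2.0 * h)
    return np.linalg.det(J.T @ J)

candidates = np.arange(0.5, 24.5, 0.5)            # candidate times, hours
best = max(itertools.combinations(candidates, 3), key=d_criterion)
print("locally D-optimal 3-point schedule (h):", best)
```

An adaptive design in the paper's spirit would re-run this search after each observed sample, conditioning on the actual (possibly erroneous) times of all previous samples.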
Flynn, Brian S; Worden, John K; Bunn, Janice Yanushka; Dorwaldt, Anne L; Connolly, Scott W; Ashikaga, Takamaru
2007-08-01
Mass media interventions are among the strategies recommended for youth cigarette smoking prevention, but little is known about optimal methods for reaching diverse youth audiences. Grades 4 through 12 samples of youth from four states (n = 1,230) rated smoking-prevention messages in classroom settings. Similar proportions of African American, Hispanic, and White youth participated. Impact of audience characteristics on message appeal ratings was assessed to provide guidance for audience segmentation strategies. Age had a strong effect on individual message appeal. The effect of gender also was significant. Message ratings were similar among the younger racial/ethnic groups, but differences were found for older African American youth. Lower academic achievement was associated with lower appeal scores for some messages. Age should be a primary consideration in developing and delivering smoking-prevention messages to youth audiences. The unique needs of boys and girls and older African American adolescents should also be considered.
NASA Astrophysics Data System (ADS)
Basak, Jyotirmoy; Maitra, Subhamoy
2018-04-01
In the device-independent (DI) paradigm, the assumptions of trust in the devices are removed and the CHSH test is performed to check the functionality of the devices toward certifying the security of the protocol. The existing DI protocols consider an infinite number of samples from a theoretical point of view, though this is not practically implementable. For finite-sample analysis of the existing DI protocols, we may also consider strategies other than the CHSH test for checking device independence. In this direction, we present a comparative analysis between the CHSH test and the three-party pseudo-telepathy game for the quantum private query protocol in the DI paradigm that appeared very recently in Maitra et al. (Phys Rev A 95:042344, 2017).
Quantum-state comparison and discrimination
NASA Astrophysics Data System (ADS)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2018-05-01
We investigate the performance of the discrimination strategy in the comparison task for known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is to infer that the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to occur. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates between the minimum-error and unambiguous comparison regimes. We find that the discrimination strategy is not optimal except in the minimum-error case.
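For the two-state case mentioned above, the discrimination strategy is easy to quantify. A minimal sketch, assuming each system is independently and equiprobably prepared in one of two known pure states and measured with the optimal (Helstrom) minimum-error measurement: the comparison verdict is correct exactly when both discriminations are right or both are wrong.

```python
import numpy as np

def helstrom(overlap):
    """Minimum-error success probability for discriminating two equiprobable
    pure states with |<a|b>| = overlap (Helstrom bound)."""
    return 0.5 * (1.0 + np.sqrt(1.0 - overlap ** 2))

def comparison_by_discrimination(overlap):
    """Success probability of the 'same or different?' comparison when each
    system is discriminated separately and the outcomes are compared; the
    verdict is correct when both guesses are right or both are wrong."""
    p = helstrom(overlap)
    return p ** 2 + (1.0 - p) ** 2

for s in (0.0, 0.5, 0.9, 0.99):
    print(f"|<a|b>| = {s:4.2f}: discrimination strategy "
          f"{comparison_by_discrimination(s):.3f} vs blind guess 0.500")
```

With only two equiprobable states this strategy never drops below the 1/2 achieved by guessing without measurement; the abstract's "worse than no measurement" phenomenon requires more than two possible states.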
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, new challenges must be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty, while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner, or process. This sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which increases the capability to correct for complicated overlay signatures but also increases sensitivity to process- or metrology-induced noise. This is known as the bias-variance tradeoff: a high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also show higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance tradeoff to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately by high-volume manufacturing tests monitoring OPO with densely measured OVL data.
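The bias-variance tradeoff described above can be reproduced with a toy wafer signature. A minimal sketch, assuming an invented cubic "true" OVL signature and 1 nm metrology noise: low-order fits carry systematic bias, while high-order fits inflate wafer-to-wafer variance.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 25)                       # sample sites across the wafer
true = 5.0 + 3.0 * x - 4.0 * x ** 2 + 2.0 * x ** 3   # invented "true" signature (nm)

fits = {1: [], 3: [], 9: []}                         # model orders to compare
for _ in range(200):                                 # 200 simulated wafers
    y = true + rng.normal(0.0, 1.0, x.size)          # 1 nm metrology/process noise
    for d in fits:
        fits[d].append(np.polyval(np.polyfit(x, y, d), x))

for d, f in fits.items():
    f = np.asarray(f)
    bias2 = ((f.mean(axis=0) - true) ** 2).mean()    # systematic residual
    var = f.var(axis=0).mean()                       # wafer-to-wafer wobble
    print(f"order {d}: bias^2 = {bias2:.3f}, variance = {var:.3f} (nm^2)")
```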
Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.
Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David
2017-01-01
We propose and analyze a compartmental nonlinear deterministic mathematical model for typhoid fever outbreaks and optimal control strategies in a community with varying population. The model is studied qualitatively using the stability theory of differential equations, and the basic reproduction number, which serves as the epidemic indicator, is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation, and a sensitivity analysis is performed. The optimal control problem is designed by applying the Pontryagin maximum principle with three control strategies, namely, prevention through sanitation, proper hygiene, and vaccination; treatment through application of appropriate medicine; and screening of the carriers. The cost functional accounts for the costs involved in prevention, screening, and treatment together with the total number of infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control reveal that a combination of prevention and treatment is the most cost-effective strategy to eradicate the disease.
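The next-generation-matrix step mentioned above is mechanical once the infection (F) and transition (V) Jacobians are written down. A minimal sketch with an invented two-compartment (infected/carrier) caricature and assumed rates, not the paper's full model: R0 is the spectral radius of F V^{-1}.

```python
import numpy as np

# Invented rates for an infected (I) / carrier (C) caricature of typhoid
beta, eps = 0.6, 0.3                  # transmission rate; carriers' relative infectiousness
mu, gamma, delta = 0.01, 0.1, 0.05    # natural death, recovery, progression to carrier
gamma_c = 0.05                        # carrier clearance (assumed)

F = np.array([[beta, eps * beta],     # new infections enter I only
              [0.0,  0.0]])
V = np.array([[gamma + delta + mu, 0.0],             # outflow from I
              [-delta,             gamma_c + mu]])   # inflow to / outflow from C

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(f"R0 = {R0:.3f}")   # R0 > 1 signals an outbreak in this toy setting
```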
Optimal background matching camouflage.
Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C
2017-07-12
Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
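The "most probable sample" prediction can be operationalized with a density estimate over background patches. A minimal sketch, assuming invented two-dimensional patch features (luminance, contrast) standing in for the paper's low-level vision statistics: score every candidate patch by its estimated probability density and camouflage with the highest-density one.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Invented background: per-patch (luminance, contrast), a skewed mixture to
# mimic continuous natural variation in colour and texture
n = 2000
luminance = np.concatenate([rng.normal(0.3, 0.05, n // 2),
                            rng.normal(0.5, 0.10, n // 2)])
contrast = np.concatenate([rng.gamma(2.0, 0.05, n // 2),
                           rng.gamma(3.0, 0.05, n // 2)])
patches = np.vstack([luminance, contrast])

kde = gaussian_kde(patches)           # density over patch features
density = kde(patches)                # likelihood of each candidate patch
best = patches[:, np.argmax(density)]
print(f"most probable patch: luminance = {best[0]:.3f}, contrast = {best[1]:.3f}")
```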
Jalbani, N; Soylak, M
2014-04-01
In the present study, a microextraction technique combining Fe3O4 nanoparticles with surfactant-mediated solid-phase extraction (SM-SPE) was successfully developed for the preconcentration/separation of Cd(II) and Pb(II) in water and soil samples. The analytes were determined by flame atomic absorption spectrometry (FAAS). The effective variables, such as the amount of adsorbent (NPs), the pH, the concentration of the non-ionic surfactant (TX-114), and the centrifugation time (min), were investigated with a Plackett-Burman design (PBD). The important variables were further optimized by central composite design (CCD). Under the optimized conditions, the detection limits (LODs) for Cd(II) and Pb(II) were 0.15 and 0.74 µg/L, respectively. The validity of the proposed procedure was checked by the analysis of the certified reference materials TMDA 53.3 fortified water and GBW07425 soil. The method was successfully applied to the determination of Cd(II) and Pb(II) in water and soil samples. Copyright © 2014 Elsevier Inc. All rights reserved.
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy.
Guo, Haiying; Xing, Yizhan; Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen; Li, Yuhong
2018-01-01
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research on and application of DP cells. We optimized the culture strategy for DP cells along three dimensions: stepwise dissection, collagen I coating, and an optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further characterization showed that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During characterization of the immortalized DP cell strains, we also found that cells in the DP were heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for research on and application of DP cells.
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy
Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen
2018-01-01
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research on and application of DP cells. We optimized the culture strategy for DP cells along three dimensions: stepwise dissection, collagen I coating, and an optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further characterization showed that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During characterization of the immortalized DP cell strains, we also found that cells in the DP were heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for research on and application of DP cells. PMID:29383288
Separation-Compliant, Optimal Routing and Control of Scheduled Arrivals in a Terminal Airspace
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.
2013-01-01
We address the problem of navigating a set (fleet) of aircraft in an aerial route network so as to bring each aircraft to its destination at a specified time, with minimal distance separation assured between all aircraft at all times. The speed range, initial position, required destination, and required time of arrival at the destination for each aircraft are assumed to be provided. Each aircraft's movement is governed by a controlled differential equation (state equation). The problem consists of choosing for each aircraft a path in the route network and a control strategy so as to meet the constraints and reach the destination at the required time. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver. The proposed model is a first step toward increasing the fidelity of continuous-time control models of air traffic in a terminal airspace. The Pontryagin Maximum Principle implies the polygonal shape of those portions of the state trajectories away from states in which one or more aircraft pairs are at minimal separation. The model also confirms the intuition that the narrower the allowed speed ranges of the aircraft, the smaller the space of optimal solutions, and that an instance of the optimal control problem may not have a solution at all (i.e., no control strategy that meets the separation requirement and other constraints).
An Umeclidinium membrane sensor; Two-step optimization strategy for improved responses.
Yehia, Ali M; Monir, Hany H
2017-09-01
In the scientific context of membrane sensors and improved experimentation, we devised an experimentally designed protocol for sensor optimization. A two-step strategy was implemented for the analysis of Umeclidinium bromide (UMEC), a novel quinuclidine-based muscarinic antagonist used for maintenance treatment of symptoms accompanying chronic obstructive pulmonary disease. First, membrane components were screened for the ideal ion exchanger, ionophore, and plasticizer using three categorical factors at three levels in a Taguchi design. Second, an experimentally designed optimization was followed in order to tune the sensor for the finest responses. Twelve experiments were randomly carried out in a continuous-factor design. Nernstian response, detection limit, and selectivity were assigned as responses in these designs. The optimized membrane sensor contained tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (0.44 wt%) and calix[6]arene (0.43 wt%) in 50.00 wt% PVC plasticized with 49.13 wt% 2-nitrophenyl octyl ether. This sensor, along with an optimum concentration of the inner filling solution (2×10⁻⁴ mol L⁻¹ UMEC) and 2 h of soaking time, attained the design objectives. The Nernstian response approached 59.7 mV/decade and the detection limit decreased by about two orders of magnitude (8×10⁻⁸ mol L⁻¹) through this optimization protocol. The proposed sensor was validated for UMEC determination in its linear range (3.16×10⁻⁷-1×10⁻³ mol L⁻¹) and challenged for selective discrimination of other congeners and inorganic cations. Results of INCRUSE ELLIPTA® inhalation powder analyses obtained from the proposed sensor and the manufacturer's UPLC were statistically compared. Moreover, the proposed sensor was successfully used for the determination of UMEC in plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
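Checking the Nernstian response quoted above amounts to regressing the electrode potential on the logarithm of concentration. A minimal sketch with assumed EMF readings spanning the reported linear range; the fitted slope should approach the theoretical ~59.2 mV/decade for a monovalent cation.

```python
import numpy as np

# Assumed calibration readings spanning the reported linear range
conc = np.array([3.16e-7, 1e-6, 1e-5, 1e-4, 1e-3])    # mol/L
emf = np.array([112.0, 141.5, 201.0, 260.8, 320.5])   # mV (invented readings)

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"calibration slope = {slope:.1f} mV/decade "
      f"(theoretical Nernstian slope for a monovalent cation: ~59.2)")
```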
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
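The preference-aggregation route described above can be sketched with a toy surrogate in place of the mined dynamic equations. Everything below, the two objective surfaces, the weights, and the (1+λ) evolution strategy settings, is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def objectives(u):
    """Invented stand-ins for the data-mined models: boiler efficiency
    (maximize) and limestone consumption (minimize) vs two control settings."""
    eff = 0.9 - (u[0] - 0.6) ** 2 - 0.5 * (u[1] - 0.4) ** 2
    limestone = 0.2 + 0.8 * u[1] + 0.1 * u[0]
    return eff, limestone

def aggregate(u, w=0.7):
    eff, lime = objectives(u)
    return w * eff - (1.0 - w) * lime    # preference aggregation into one score

# (1+lambda) evolution strategy over the normalized control space [0, 1]^2
u, best = rng.random(2), -np.inf
sigma = 0.2
for _ in range(60):
    kids = np.clip(u + rng.normal(0.0, sigma, size=(20, 2)), 0.0, 1.0)
    scores = np.array([aggregate(k) for k in kids])
    if scores.max() > best:
        u, best = kids[scores.argmax()], scores.max()
    sigma *= 0.95                        # anneal the mutation step
eff, lime = objectives(u)
print(f"controls {u.round(3)}: efficiency = {eff:.3f}, limestone = {lime:.3f}")
```

Sweeping the weight w traces out the preference-aggregated compromises; a Pareto treatment would instead retain the full set of non-dominated (efficiency, limestone) pairs.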
Optimal control of an invasive species using a reaction-diffusion model and linear programming
Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.
2017-01-01
Managing an invasive species is particularly challenging as little is generally known about the species' biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species' characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost, or that minimizes the control cost for containment or for protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty about the possible invasion scenarios. A best strategy can then be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegu (Salvator merianae) in South Florida. There is uncertainty about tegu demography, and we considered several combinations of model parameters exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegu dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. For the establishment of a containment area, assuming exponential growth, we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. A strategy that accounts for the risk attitude of the decision-maker can then be designed.
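The LP formulation is compact enough to sketch directly. Assuming invented per-cell abundances (standing in for the reaction-diffusion model's predictions), removal costs, and a budget, the code below maximizes total removals for a fixed cost; the paper's coupling to invasion dynamics over time is omitted.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

n_cells = 30
abundance = rng.gamma(2.0, 5.0, n_cells)           # predicted invaders per cell
cost_per_removal = rng.uniform(1.0, 3.0, n_cells)  # effort cost, cell-dependent
budget = 60.0

# x_i = invaders removed in cell i; minimizing what remains = maximizing sum(x)
res = linprog(
    c=-np.ones(n_cells),                    # linprog minimizes, so negate
    A_ub=cost_per_removal[None, :], b_ub=[budget],
    bounds=[(0.0, a) for a in abundance],   # cannot remove more than are present
    method="highs",
)
removed = res.x
print(f"removed {removed.sum():.1f} of {abundance.sum():.1f} invaders, "
      f"control effort placed in {int((removed > 1e-9).sum())} cells")
```

Re-solving this LP for each plausible parameter set, then ranking solutions by worst-case, expected, or best-case outcome, yields the risk-averse, risk-neutral, and risk-seeking strategies discussed above.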
Therapeutic Substance Abuse Treatment for Incarcerated Women
Finfgeld-Connett, Deborah; Johnson, E. Diane
2011-01-01
The purpose of this qualitative systematic review was to explicate attributes of optimal therapeutic strategies for treating incarcerated women who have a history of substance abuse. An expansive search of electronic databases for qualitative research reports relating to substance abuse treatment for incarcerated women was conducted. Nine qualitative research reports comprised the sample for this review. Findings from these reports were extracted, placed into a data analysis matrix, coded, and categorized. Memos were written, and strategies for treating incarcerated women with alcohol problems were identified. Therapeutic effects of treatment programs for incarcerated women with substance-abuse problems appear to be enhanced when trust-based relationships are established, individualized and just care is provided, and treatment facilities are separate from the general prison environment. PMID:21771929
Postaudit of optimal conjunctive use policies
Nishikawa, Tracy; Martin, Peter
1998-01-01
A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.
ODECS -- A computer code for the optimal design of S.I. engine control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsie, I.; Pianese, C.; Rizzo, G.
1996-09-01
The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark-ignition engine control strategies is presented. This code has been developed starting from the authors' activity in this field, drawing on some original contributions on engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution, and analysis of different computations performed with 4 independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by the non-deterministic behavior of sensors and actuators and their influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results for the simulation and the calculation of optimal control strategies for dynamic transient tests.
Quantitative learning strategies based on word networks
NASA Astrophysics Data System (ADS)
Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng
2018-02-01
Learning English requires considerable effort, but the way vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate English learning strategies. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
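The weighting idea can be made concrete with a toy network. A minimal sketch, assuming an invented five-word co-usage graph and frequencies: each unlearned word is scored by frequency times the influence of already-learned neighbours, and the scores are re-computed (the "dynamic update") after every acquisition.

```python
import numpy as np

# Toy co-usage network and corpus frequencies (invented data)
words = ["the", "make", "decision", "network", "topology"]
freq = np.array([5000.0, 800.0, 300.0, 120.0, 15.0])
adj = np.array([[0, 1, 1, 1, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [1, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)

learned = np.zeros(len(words), dtype=bool)
order = []
for _ in words:
    # score = frequency x (1 + learned neighbours); re-scored after each
    # acquisition, which plays the role of the dynamic network update
    score = freq * (1.0 + adj @ learned)
    score[learned] = -np.inf           # never re-learn a word
    nxt = int(np.argmax(score))
    learned[nxt] = True
    order.append(words[nxt])
print("suggested learning order:", order)
```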
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B
2018-04-01
Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, including four-ROI strategies with two ROIs from each lobe, which all had LOA <1.5%. 14/36 (39%) of two-ROI strategies and 74/84 (88%) of three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance in Medicine.
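The agreement analysis generalizes to any simulated subset. A minimal sketch, assuming invented per-patient PDFF values (gamma-distributed patient means, 1-percentage-point segmental scatter) rather than the study data: compare a four-ROI mean against the nine-ROI mean with Bland-Altman bias and limits of agreement.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pat = 391

# Invented per-patient PDFF (%): gamma-distributed patient means plus
# 1-percentage-point segmental scatter across the nine ROIs
patient_mean = np.clip(rng.gamma(1.5, 6.5, n_pat), 1.0, 45.0)
pdff9 = patient_mean[:, None] + rng.normal(0.0, 1.0, (n_pat, 9))

subset = [0, 1, 5, 6]            # four ROIs, two per lobe (hypothetical indexing)
diff = pdff9[:, subset].mean(axis=1) - pdff9.mean(axis=1)

bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)    # Bland-Altman 95% limits of agreement
print(f"bias = {bias:+.2f} points, limits of agreement = +/-{loa:.2f} points")
```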
Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%, including four-ROI strategies with two ROIs from each lobe, which all had LOA <1.5%. 14/36 (39%) of two-ROI strategies and 74/84 (88%) of three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. PMID:28842937
Optimization of the MINERVA Exoplanet Search Strategy via Simulations
NASA Astrophysics Data System (ADS)
Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva
2015-01-01
Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with every-clear-night cadence that is essential for low-mass exoplanet detection. However, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data is fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool to test different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with locally linear embedding (LLE) was introduced for the analysis of complex samples by near-infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling the complex samples. The results show that MCUVE can both extract informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
Testing of Strategies for the Acceleration of the Cost Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Vilim, Richard B.
The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimal economic dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e., the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each of the unit components. The preconditioner samples a set of stochastic power scenarios for each of the N-R HES unit components, and for each of them the corresponding value of a suitably defined cost function is evaluated. After a sufficient number of power histories has been simulated, the configuration that ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solution. To test the capabilities of the proposed approach, the present report tackles the dispatch problem only, i.e., a reference unit configuration is assumed, and each of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to electricity price forecasts.
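The preconditioner's logic, sample ramp-feasible power scenarios and keep the most profitable, is easy to sketch. All numbers below (horizon, price forecast, capacities, ramp limits, marginal costs) are assumptions, not the project's RAVEN/Modelica configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 24                                                          # hourly horizon
price = 30.0 + 15.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # $/MWh, assumed

def sample_profile(p_min, p_max, ramp):
    """Random dispatch profile honouring capacity and ramp-rate limits."""
    p = np.empty(T)
    p[0] = rng.uniform(p_min, p_max)
    for t in range(1, T):
        lo = max(p_min, p[t - 1] - ramp)
        hi = min(p_max, p[t - 1] + ramp)
        p[t] = rng.uniform(lo, hi)
    return p

def profit(bop, ses, es):
    revenue = price @ (bop + ses + es)           # ES sells (>0) or buys (<0)
    cost = 20.0 * bop.sum() + 35.0 * ses.sum()   # assumed marginal costs
    return revenue - cost

best_profit = -np.inf
for _ in range(5000):                            # the Monte Carlo preconditioner
    cand = (sample_profile(50.0, 100.0, 10.0),   # BOP: baseload, slow ramps
            sample_profile(0.0, 40.0, 20.0),     # SES: flexible secondary source
            sample_profile(-15.0, 15.0, 30.0))   # ES: charge/discharge
    best_profit = max(best_profit, profit(*cand))
print(f"best sampled daily profit: ${best_profit:,.0f}")
```

The winning scenario then serves as the initial guess handed to the full optimizer, rather than as a final dispatch solution.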
Ptolemy, Adam S; Britz-McKibbin, Philip
2006-02-17
New strategies for integrating sample pretreatment with chemical analysis in a single format are required for rapid, sensitive, and enantioselective analyses of low-abundance metabolites in complex biological samples. Capillary electrophoresis (CE) offers a unique environment for controlling analyte/reagent band dispersion and electromigration properties using discontinuous electrolyte systems. Recent work in our laboratory toward developing a high-throughput CE platform for low-abundance metabolites via on-line sample preconcentration with chemical derivatization (SPCD) is the primary focus of this review, as surprisingly few strategies have been reported in the literature to date. In-capillary sample preconcentration serves to enhance concentration sensitivity via electrokinetic focusing of long sample injection volumes for lower detection limits, whereas chemical derivatization by zone passing is used to expand detectability and selectivity, notably for enantiomeric resolution of metabolites lacking intrinsic chromophores using nanolitre volumes of reagent. Together, on-line SPCD-CE can provide over a 100-fold improvement in concentration sensitivity, shorter total analysis times, reduced sample handling, and improved reliability for a variety of amino acid and amino sugar metabolites, and it is also amenable to automated high-throughput screening. This review highlights basic method development and optimization parameters relevant to SPCD-CE, including applications to bacterial metabolite flux and biomarker analyses. Insight into the mechanism of analyte focusing and labeling by SPCD-CE is also discussed, as well as future directions for continued research.
Streicher, Jeffrey W; Schulte, James A; Wiens, John J
2016-01-01
Targeted sequence capture is becoming a widespread tool for generating large phylogenomic data sets to address difficult phylogenetic problems. However, this methodology often generates data sets in which increasing the number of taxa and loci increases amounts of missing data. Thus, a fundamental (but still unresolved) question is whether sampling should be designed to maximize sampling of taxa or genes, or to minimize the inclusion of missing data cells. Here, we explore this question for an ancient, rapid radiation of lizards, the pleurodont iguanians. Pleurodonts include many well-known clades (e.g., anoles, basilisks, iguanas, and spiny lizards) but relationships among families have proven difficult to resolve strongly and consistently using traditional sequencing approaches. We generated up to 4921 ultraconserved elements with sampling strategies including 16, 29, and 44 taxa, from 1179 to approximately 2.4 million characters per matrix and approximately 30% to 60% total missing data. We then compared mean branch support for interfamilial relationships under these 15 different sampling strategies for both concatenated (maximum likelihood) and species tree (NJst) approaches (after showing that mean branch support appears to be related to accuracy). We found that both approaches had the highest support when including loci with up to 50% missing taxa (matrices with ~40-55% missing data overall). Thus, our results show that simply excluding all missing data may be highly problematic as the primary guiding principle for the inclusion or exclusion of taxa and genes. The optimal strategy was somewhat different for each approach, a pattern that has not been shown previously. For concatenated analyses, branch support was maximized when including many taxa (44) but fewer characters (1.1 million). For species-tree analyses, branch support was maximized with minimal taxon sampling (16) but many loci (4789 of 4921). We also show that the choice of these sampling strategies can be critically important for phylogenomic analyses, since some strategies lead to demonstrably incorrect inferences (using the same method) that have strong statistical support. Our preferred estimate provides strong support for most interfamilial relationships in this important but phylogenetically challenging group. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock
2013-01-01
Background Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. Findings The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
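The false-absence calculation mentioned in the findings follows directly from the hypergeometric distribution. A minimal sketch with illustrative numbers (not the study's data): if vectors are present at K of N possible trap positions and n traps are placed without replacement, the chance that all n traps miss is the hypergeometric probability of drawing zero positives.

```python
from scipy.stats import hypergeom

# Illustrative numbers: vectors present at K of N candidate trap positions
N_POSITIONS, K_PRESENT = 45, 8

for n_traps in (1, 3, 6, 18):
    p_false_absence = hypergeom.pmf(0, N_POSITIONS, K_PRESENT, n_traps)
    print(f"{n_traps:2d} traps: P(all negative despite presence) = {p_false_absence:.3f}")
```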
Gu, Hui-Wen; Wu, Hai-Long; Yin, Xiao-Li; Li, Yong; Liu, Ya-Juan; Xia, Hui; Zhang, Shu-Rong; Jin, Yi-Feng; Sun, Xiao-Dong; Yu, Ru-Qin; Yang, Peng-Yuan; Lu, Hao-Jie
2014-10-27
β-blockers are the first-line therapeutic agents for treating cardiovascular diseases and also a class of prohibited substances in athletic competitions. In this work, a smart strategy that combines three-way liquid chromatography-mass spectrometry (LC-MS) data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm was developed for simultaneous determination of ten β-blockers in human urine and plasma samples. This flexible strategy proved to be a useful tool to solve the problems of overlapped peaks and uncalibrated interferences encountered in quantitative LC-MS, and made multi-targeted, interference-free qualitative and quantitative analysis of β-blockers in complex matrices possible. The limits of detection were in the range of 2.0×10⁻⁵-6.2×10⁻³ μg mL⁻¹, and the average recoveries were between 90 and 110% with standard deviations and average relative prediction errors less than 10%, indicating that the strategy could provide satisfactory prediction results for ten β-blockers in human urine and plasma samples using only a liquid chromatograph hyphenated with a single-quadrupole mass spectrometer in full scan mode. To further confirm the feasibility and reliability of the proposed method, the same batch of samples was analyzed by a multiple reaction monitoring (MRM) method. A t-test demonstrated that there are no significant differences between the prediction results of the two methods. Considering the advantages of speed, low cost, high sensitivity, and no need for complicated chromatographic and tandem mass spectrometric condition optimization, the proposed strategy is expected to be extended as an attractive alternative method to quantify analyte(s) of interest in complex systems such as cells, biological fluids, food, environment, pharmaceuticals and other complex samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.
Niioka, Takenori
2011-03-01
Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotype had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate the AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ, and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09, and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, based on plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotype, particularly in the case of coadministration of CYP mediators.
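The published LSS formulas translate directly into code. The functions below implement the prediction equations quoted above verbatim; concentration units follow the source assay, and the genotype labels are illustrative.

```python
def auc_opz(c6h):
    """Omeprazole AUC from the 6-h concentration (formula from the abstract)."""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """Lansoprazole AUC from the 6-h concentration."""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """Rabeprazole AUC from the 3-h and 6-h concentrations."""
    return 1.39 * c3h + 7.17 * c6h + 344.14

def auc_racemic_lpz(c3h_r, c3h_s, genotype):
    """One-point (3-h) strategy for racemic lansoprazole using both enantiomers.
    genotype: 'homEM' (homozygous EM), 'hetEM' (heterozygous EM), else PM."""
    g1 = 1.0 if genotype == "homEM" else 0.0
    g2 = 1.0 if genotype == "hetEM" else 0.0
    return 6.5 * c3h_r + 13.7 * c3h_s - 9917.3 * g1 - 14387.2 * g2 + 7103.6

print(auc_rpz(c3h=120.0, c6h=85.0))   # units follow the source assay
```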
NASA Astrophysics Data System (ADS)
Wullschleger, S. D.; Charsley-Groffman, L.; Baltzer, J. L.; Berg, A. A.; Griffith, P. C.; Jafarov, E. E.; Marsh, P.; Miller, C. E.; Schaefer, K. M.; Siqueira, P.; Wilson, C. J.; Kasischke, E. S.
2017-12-01
There is considerable interest in using L- and P-band Synthetic Aperture Radar (SAR) data to monitor variations in aboveground woody biomass, soil moisture, and permafrost conditions in high-latitude ecosystems. Such information is useful for quantifying spatial heterogeneity in surface and subsurface properties, and for model development and evaluation. To conduct these studies, it is desirable that field studies share a common sampling strategy so that data from multiple sites can be combined and used to analyze variations in conditions across different landscape geomorphologies and vegetation types. In 2015, NASA launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE) to study the sensitivity and resilience of these ecosystems to disturbance and environmental change. NASA is able to leverage its remote sensing strengths to collect airborne and satellite observations that capture important ecosystem properties and dynamics across large spatial scales. A critical component of this effort is the collection of ground-based data that can be used to analyze, calibrate, and validate remote sensing products. ABoVE researchers at a large number of sites in important Arctic and boreal ecosystems in Alaska and western Canada are following common design protocols and strategies for measuring soil moisture, thaw depth, biomass, and wetland inundation. Here we describe the sampling strategies used in the 2017 summer SAR campaign and address the sampling design and measurement protocols supporting the ABoVE aerial activities. Plot size, transect length, and the distribution of replicates across the landscape were chosen systematically to allow investigators to sample a site optimally for soil moisture, thaw depth, and organic-layer thickness. Specific examples and data sets are described for the Department of Energy's Next-Generation Ecosystem Experiments (NGEE Arctic) project field sites near Nome and Barrow, Alaska. Future airborne and satellite campaigns will be conducted by the NASA ABoVE team, and additional collaboration is encouraged.
Liu, Xiaoyan; Zhang, Xiaoyun; Zhang, Haixia; Liu, Mancang
2008-08-01
A sensitive method for the analysis of bisphenol A and 4-nonylphenol is developed by optimizing solid-phase microextraction with Uniform Experimental Design methodology, followed by high-performance liquid chromatographic analysis with fluorescence detection. The optimal extraction conditions are determined from the relationship between the parameters and the peak area. The calibration curves are linear (r² ≥ 0.9980) over the concentration ranges of 1.25-125 ng/mL for bisphenol A and 2.59-202.96 ng/mL for 4-nonylphenol. The detection limits, based on a signal-to-noise ratio of 3, are 0.097 ng/mL for bisphenol A and 0.27 ng/mL for 4-nonylphenol. The validity of the proposed method is demonstrated by analyzing the investigated analytes in real water samples, and the sensitivity of the optimized method is verified by comparing the results with those obtained by previous methods using the same commercial solid-phase microextraction fiber.
An optimizing start-up strategy for a bio-methanator.
Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain
2012-05-01
This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: steady-state optimization for determining the optimal operating point, and transient optimization. The latter is a classical optimal control problem, which can be solved using Pontryagin's maximum principle. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. The dilution rate is then set to the optimal value, and the system settles at the optimal steady state. This control law ensures convergence of the system to the optimal steady state and considerably enlarges the region of attraction of the steady state corresponding to maximum methane production. In some cases, when the minimum dilution rate can be selected below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside from its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is also characterized by simplicity, making it appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process in which the acidogenesis and methanogenesis are characterized by Monod and Haldane kinetics, respectively.
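A minimal simulation of such a bang-bang start-up, using a generic two-population (Monod plus Haldane) digestion model, might look like the sketch below. All parameter values, switching times, and initial conditions are illustrative placeholders, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants for an AM2-style two-population model.
m1, K1 = 1.2, 7.1                # Monod kinetics (acidogens)
m2, K2, Ki = 0.74, 9.28, 16.0    # Haldane kinetics (methanogens)
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0
S1in, S2in = 30.0, 50.0
D_min, D_max, D_opt = 0.05, 1.0, 0.35
t_switch, t_opt = 5.0, 20.0      # bang-bang switch, then settle at D_opt

def mu1(s1): return m1 * s1 / (K1 + s1)
def mu2(s2): return m2 * s2 / (K2 + s2 + s2**2 / Ki)

def dilution(t):
    # Bang-bang start-up: minimum rate, then maximum, then the optimal value.
    if t < t_switch: return D_min
    if t < t_opt:    return D_max
    return D_opt

def rhs(t, y):
    x1, s1, x2, s2 = y
    D = dilution(t)
    return [(mu1(s1) - D) * x1,
            D * (S1in - s1) - k1 * mu1(s1) * x1,
            (mu2(s2) - D) * x2,
            D * (S2in - s2) + k2 * mu1(s1) * x1 - k3 * mu2(s2) * x2]

# Small max_step so the integrator resolves the discontinuous control.
sol = solve_ivp(rhs, (0.0, 60.0), [0.5, 5.0, 0.1, 10.0], max_step=0.1)
q_ch4 = k6 * mu2(sol.y[3]) * sol.y[2]   # methane outflow along the trajectory
```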
A new strategy of glucose supply in a microbial fermentation model
NASA Astrophysics Data System (ADS)
Kasbawati, Gunawan, A. Y.; Sidarto, K. A.; Hertadi, R.
2015-09-01
The strategy of glucose supply used to achieve optimal ethanol productivity of a yeast cell is one of the main determinants of a microbial fermentation process. Besides the familiar continuous glucose supply, in this study we consider a new supply strategy, the so-called on-off supply. Optimal control theory is applied to the fermentation system to find the optimal rate and timing of glucose supply. The optimization problem is solved numerically using a Differential Evolution algorithm. We find two alternative solutions that yield similar results: either a long process with a low supply rate or a short process with a high supply rate.
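The on-off supply optimization can be mimicked with SciPy's differential evolution on a toy fermentation model. The kinetics, bounds, and horizon below are invented for illustration and do not reproduce the authors' model.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Toy fermentation: biomass X consumes glucose S and produces ethanol P.
# Decision variables: feed rate u and switch-off time t_off (on-off supply).
def neg_ethanol_yield(params):
    u, t_off = params
    def rhs(t, y):
        X, S, P = y
        s = max(S, 0.0)                   # guard against small negative S
        mu = 0.4 * s / (0.5 + s)          # Monod uptake, illustrative constants
        feed = u if t < t_off else 0.0    # on-off glucose supply
        return [mu * X, feed - 2.0 * mu * X, 0.9 * mu * X]
    sol = solve_ivp(rhs, (0.0, 24.0), [0.1, 1.0, 0.0], rtol=1e-6)
    return -sol.y[2, -1]                  # negative ethanol: DE minimizes

res = differential_evolution(neg_ethanol_yield,
                             bounds=[(0.0, 2.0), (0.0, 24.0)], seed=1)
print(res.x, -res.fun)   # optimal feed rate and switch-off time, ethanol made
```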
Midenet, Sophie; Saunier, Nicolas; Boillot, Florence
2011-11-01
This paper proposes an original definition of the exposure to lateral collision in signalized intersections and discusses the results of a real-world experiment. This exposure is defined as the duration of situations in which the stream that has the right-of-way passes through the conflict zone while road users are waiting on the cross-traffic approach. This measure, obtained from video sensors, makes it possible to compare different operating conditions, such as different traffic signal strategies. Data from a real-world experiment are used, in which the adaptive real-time strategy CRONOS (ContRol Of Networks by Optimization of Switchovers) and a time-plan strategy with vehicle-actuated ranges alternately controlled an isolated intersection near Paris. Hourly samples with similar traffic volumes are compared; the exposure to lateral collision differs between the two strategies across areas of the intersection and traffic conditions. The total exposure under peak-hour traffic conditions drops by roughly 5 min/h with the CRONOS strategy compared to the time-plan strategy, mostly on entry streams. The results are analyzed through the decomposition of cycles into phase sequences, and recommendations are made for traffic control strategies. Copyright © 2011 Elsevier Ltd. All rights reserved.
Active learning based segmentation of Crohn's disease from abdominal MRI.
Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M
2016-05-01
This paper proposes a novel active learning (AL) framework and combines it with semi-supervised learning (SSL) for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data spanning different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information in many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
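A schematic version of such a query strategy scores each unlabeled sample by classification uncertainty, novelty relative to the labeled pool, and a context term. The weights and the context vector here are placeholders, not the paper's definitions.

```python
import numpy as np

def query_ranking(proba, feats, labeled_feats, context, w=(1.0, 1.0, 1.0)):
    """Rank unlabeled samples for annotation.

    proba: (n, n_classes) classifier probabilities for the unlabeled samples
    feats: (n, d) features; labeled_feats: (m, d) features of labeled samples
    context: (n,) per-sample context score (placeholder for the paper's term)
    Returns indices, most informative first."""
    # (i) Entropy-based classification uncertainty.
    uncertainty = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    # (ii) Novelty: distance to the nearest already-labeled sample.
    d = np.linalg.norm(feats[:, None, :] - labeled_feats[None, :, :], axis=2)
    novelty = d.min(axis=1)
    # (iii) Weighted combination with the context term.
    score = w[0] * uncertainty + w[1] * novelty + w[2] * context
    return np.argsort(score)[::-1]
```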
Numerical solution of a conspicuous consumption model with constant control delay
Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea
2011-01-01
We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes into account uncertainty in the length of the recession period and delay effects of the pricing strategy. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical, result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and handle the control delays by introducing slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed, and optimal prices with a delayed impact are smaller than in the non-delayed case. PMID:22267871
Optimal dividends in the Brownian motion risk model with interest
NASA Astrophysics Data System (ADS)
Fang, Ying; Wu, Rong
2009-07-01
In this paper, we consider a Brownian motion risk model in which, in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy that maximizes the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for an unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied, which in many circumstances is not desirable. This consideration leads us to impose a restriction on the dividend stream: we assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is a threshold strategy.
Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.
Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay
2014-12-01
The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.
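The decision logic in such analyses reduces to comparing net monetary benefits at a given willingness-to-pay threshold. The sketch below uses invented costs and QALYs purely to show the computation; these are not the paper's parameter values.

```python
# Schematic net-monetary-benefit (NMB) comparison of imaging strategies.
WTP = 50_000  # willingness to pay per quality-adjusted life year

strategies = {
    "CTA, no follow-up":   {"cost": 1_500, "qaly": 20.10},  # placeholder values
    "CTA + CTA follow-up": {"cost": 2_600, "qaly": 20.11},
    "DSA initial":         {"cost": 6_000, "qaly": 20.05},
}

def net_monetary_benefit(s):
    # NMB = WTP * effectiveness - cost; the strategy with the largest NMB wins.
    return WTP * s["qaly"] - s["cost"]

for name, s in strategies.items():
    print(f"{name}: NMB = {net_monetary_benefit(s):,.0f}")
print("optimal:", max(strategies, key=lambda k: net_monetary_benefit(strategies[k])))
```

In the full analysis, each strategy's cost and effectiveness would come out of the decision tree, and a Monte Carlo simulation over the distributional parameters would test how often the same strategy remains optimal.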
Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup
2017-01-01
Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implemented intervention strategies fail to control the spread of the disease because of parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies or resort to combinations of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission, with seasonal fluctuations modeled as a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets, and the spraying of insecticides, on the dynamics of the infected human and vector populations. We find that the strategies, applied individually, remain ineffective in curbing the disease, as opposed to optimal combinations of them. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease over the intervention period. A cost-effectiveness analysis identifies the same strategy as both efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.
Optimal reconfiguration strategy for a degradable multimodule computing system
NASA Technical Reports Server (NTRS)
Lee, Yann-Hang; Shin, Kang G.
1987-01-01
The present quantitative approach to the problem of reconfiguring a degradable multimode system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimal criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failure but the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration, and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
NASA Astrophysics Data System (ADS)
Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.
2011-10-01
A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied to the optimization of the ion source parameters. ESI and APCI were compared in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was paid to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only in ESI positive-ion mode with acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding RSDs ranged between 4.1% and 14%. The method limits of detection ranged from 0.6 to 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.
Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations
ERIC Educational Resources Information Center
Papadouris, Nicos
2012-01-01
This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…
Dynamic Portfolio Strategy Using Clustering Approach.
Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. Here we propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. The portfolio strategy comprises two stages: first, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion, and distance on distance criterion; second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratio of the number of trading days with a rising index to the total number of trading days, or by the ratio of the summed amplitudes of the trading days with a rising index to the summed amplitudes of all trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and stable in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters for selecting portfolios and different criteria for identifying market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market, while the proportion is 70% for the Shenzhen A-Share market.
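The selection stage can be sketched with networkx: build the correlation-distance network, take its minimum spanning tree, and rank stocks by centrality. Only two of the paper's five topological parameters (degree and betweenness centrality) are used in this illustration, and the return data are user-supplied.

```python
import numpy as np
import networkx as nx

def central_and_peripheral(returns, k=5):
    """Pick k 'central' and k 'peripheral' stocks from the MST of the
    correlation network. returns: (T, N) array, one column per stock."""
    corr = np.corrcoef(returns.T)
    # Mantegna's correlation distance; clip guards tiny numerical negatives.
    dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
    n = corr.shape[0]
    G = nx.Graph()
    G.add_weighted_edges_from((i, j, dist[i, j])
                              for i in range(n) for j in range(i + 1, n))
    mst = nx.minimum_spanning_tree(G)
    bc = nx.betweenness_centrality(mst)
    order = sorted(mst.nodes, key=lambda v: (mst.degree(v), bc[v]), reverse=True)
    return order[:k], order[-k:]    # central stocks, peripheral stocks
```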
Optimal domain decomposition strategies
NASA Technical Reports Server (NTRS)
Yoon, Yonghyun; Soni, Bharat K.
1995-01-01
The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition around realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition that optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of the flow field. The progress realized in this study is summarized in this paper.
NASA Technical Reports Server (NTRS)
Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.
2009-01-01
A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual-polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and for evaluation of the Lightning Launch Commit Criteria (LCC), the weather rules to avoid lightning strikes to in-flight rockets. The 0 °C to -20 °C layer is vital, since most generation of electric charge occurs within it, and it is therefore critical in evaluating the Lightning LCC and in forecasting lightning. These are two of the most important duties of the 45th Weather Squadron (45 WS). While the fixed scan strategy covers most of the climatological variation of the 0 °C to -20 °C levels at high resolution, ensuring that these critical temperatures are well covered most of the time, it also means that on any particular day the radar spends precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly Interactive Data Language (IDL) computer program that automatically generates optimized radar scan strategies adapted to user input of the temperature profile and other important parameters. By using only the scan angles required by the temperature-profile-adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality are possible, while maintaining high resolution at the critical temperature levels. The temperature-profile-adaptive technique will also take into account earth curvature and refraction when geo-locating the radar beam (i.e., beam height and arc distance), including non-standard refraction based on the user-input temperature profile. In addition to temperature-profile adaptivity, this paper summarizes the other requirements for this scan strategy program, such as detection of low-level boundaries, detection of anvil clouds, reducing the cone of silence, and allowing for times when deep convective clouds will not occur. The adaptive technique will be carefully compared to and benchmarked against the new fixed scan strategy. Specific environmental scenarios in which the adaptive scan strategy is able to optimize and improve coverage and resolution at critical heights, scan time, and/or sample numbers relative to the fixed scan strategy will be presented.
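For the beam geo-location step, the standard effective-earth-radius relation (as in Doviak and Zrnić) gives beam height and ground arc distance from slant range and elevation angle. The sketch below uses the standard-refraction factor k = 4/3; in the adaptive scheme this factor would instead be derived from the user-supplied profile. The example ranges and angles are arbitrary.

```python
import numpy as np

A_EARTH = 6_371_000.0  # mean Earth radius, m

def beam_height_and_arc(r, elev_deg, k_e=4.0 / 3.0, h_radar=0.0):
    """Beam-centre height and ground arc distance for slant range r (m) and
    elevation angle elev_deg, under the effective-earth-radius model."""
    theta = np.radians(elev_deg)
    ae = k_e * A_EARTH
    h = np.sqrt(r**2 + ae**2 + 2.0 * r * ae * np.sin(theta)) - ae + h_radar
    s = ae * np.arcsin(r * np.cos(theta) / (ae + h))
    return h, s

# e.g. check which elevations cover the 0 to -20 C layer at 80 km range
for elev in (0.5, 1.5, 3.1, 5.1):
    h, s = beam_height_and_arc(80_000.0, elev)
    print(f"elev {elev:4.1f} deg -> height {h/1000:5.2f} km, arc {s/1000:5.1f} km")
```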
Optimization of a Viability PCR Method for the Detection of Listeria monocytogenes in Food Samples.
Agustí, Gemma; Fittipaldi, Mariana; Codony, Francesc
2018-06-01
Rapid detection of Listeria and other microbial pathogens in food is an essential part of quality control and is critical for ensuring consumer safety. Culture-based methods for detecting foodborne pathogens are time-consuming and laborious and cannot detect viable but non-culturable microorganisms, whereas viability PCR methodology provides quick results, can detect viable but non-culturable cells, and allows easier handling of large numbers of samples. The most critical point in using the viability PCR technique is achieving complete exclusion of amplification signals from dead cells, and many improvements are being introduced to this end. In the present work, the yield of dead-cell DNA neutralization was enhanced by incorporating two new sample treatment strategies: a tube change combined with a double light treatment. This procedure was successfully tested using artificially contaminated food samples, showing improved neutralization of dead-cell DNA.
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, this paper proposes a new tracking algorithm that utilizes a color-based particle filter and a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples toward regions of high likelihood, so fewer samples are required. A color clustering method and anatomical constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and better handles complete occlusions lasting a few frames.
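A minimal version of the predict-weight-resample cycle with a color-histogram likelihood is sketched below. The motion noise, patch size, and observation variance are invented, and the Kalman steering, color clustering, and anatomical constraints of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, bins=8):
    # Normalized RGB histogram of an image patch (H x W x 3, values 0..255).
    h, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))

def particle_filter_step(particles, frame, target_hist,
                         patch=16, sigma_motion=5.0, sigma_obs=0.1):
    """One predict-weight-resample cycle. particles: (N, 2) x,y positions."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, sigma_motion, particles.shape)
    # Weight: color-histogram similarity around each particle.
    weights = np.zeros(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        sub = frame[max(y - patch, 0):y + patch, max(x - patch, 0):x + patch]
        if sub.size == 0:
            continue  # particle drifted outside the frame
        d2 = 1.0 - bhattacharyya(color_histogram(sub), target_hist)
        weights[i] = np.exp(-d2 / (2.0 * sigma_obs**2))
    weights /= weights.sum()
    # Resample and return the new particle set plus the state estimate.
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], particles[idx].mean(axis=0)
```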
Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel
2018-01-01
Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios; nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with a nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof, and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed character of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
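The core idea of adjusting a synchronization bound from a monitoring model can be sketched in a few lines. The controller below is a toy stand-in: the staleness rule and its update are invented for illustration and are not the paper's DSP monitoring model.

```python
class DynamicSyncController:
    """Toy flavor of a dynamic synchronous-parallel scheme: workers report
    progress, and the permitted staleness bound is adjusted from the observed
    spread in worker speeds (a placeholder for a real monitoring model)."""

    def __init__(self, n_workers, s_min=1, s_max=16):
        self.clock = [0] * n_workers        # completed iterations per worker
        self.s_min, self.s_max = s_min, s_max
        self.staleness = s_max

    def report(self, worker):
        self.clock[worker] += 1
        gap = max(self.clock) - min(self.clock)
        # Tighten the bound when workers run in lockstep, relax it otherwise.
        self.staleness = max(self.s_min, min(self.s_max, 2 * gap + 1))

    def may_proceed(self, worker):
        # A fast worker blocks once it is too far ahead of the slowest one.
        return self.clock[worker] - min(self.clock) <= self.staleness
```

Between the fully synchronous extreme (bound 0) and fully asynchronous updates (unbounded), such a scheme trades waiting time against parameter staleness per iteration.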
Bourbonnais, Anne; Ducharme, Francine; Landreville, Philippe; Michaud, Cécile; Gauthier, Marie-Andrée; Lavallée, Marie-Hélène
2018-03-01
Few studies have been conducted on strategies to promote the implementation of complex interventions in nursing homes (NHs). This article presents a pilot study intended to assess the strategies that would enable the optimal implementation in NHs of a complex intervention approach based on the meanings of the screams of older people living with Alzheimer's disease. An action research approach was used with 19 formal and family caregivers from five NHs. Focus groups and individual interviews were held to assess different implementation strategies. A number of challenges were identified, as were strategies to overcome them. The latter included interactive training, intervention design, and external support. This study shows the feasibility of implementing a complex intervention to optimize older people's well-being, and the article shares strategies that may promote the implementation of these types of interventions in NHs.
A review on existing OSSEs and their implications on European marine observation requirements
NASA Astrophysics Data System (ADS)
She, Jun
2017-04-01
Marine observations are essential for understanding marine processes and improving forecast quality, but they are also expensive. It has always been an important issue to optimize the sampling schemes of marine observational networks so that the value of marine observations can be maximized and the cost lowered. The Observing System Simulation Experiment (OSSE) is an efficient tool for assessing the impacts of proposed future sampling schemes on reconstructing and forecasting ocean and ecosystem conditions. In this study, existing OSSE research results from EU projects (such as JERICO, OPEC, SANGOMA, E-AIMS and AtlantOS), institutional studies and review papers are collected and analyzed according to region (Arctic, Baltic, North Atlantic, Mediterranean Sea and Black Sea) and instrument/variable. The preliminary results show significant gaps in OSSE coverage across regions and instruments. Among the existing OSSEs, Argo (Bio-Argo and Deep Sea Argo), gliders and ferrybox are the most often investigated instruments. Although many of the OSSEs are dedicated to very specific monitoring strategies and are not sufficiently comprehensive for making solid recommendations for optimizing the existing networks, the detailed findings for future marine observation requirements from the OSSEs will be summarized in the presentation. Recommendations for systematic OSSEs for optimizing European marine observation networks are also given.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
Transaction fees and optimal rebalancing in the growth-optimal portfolio
NASA Astrophysics Data System (ADS)
Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng
2011-05-01
The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing, which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is subsequently generalized and numerically verified for broad return distributions and for returns generated by a GARCH process. Finally, we study the case where the investment is rebalanced only partially and show that this strategy can improve the long-term growth rate of the investment more than optimizing the rebalancing period.
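The effect is easy to probe numerically. The Monte Carlo sketch below estimates the long-term growth rate of a two-asset (cash plus binary-return risky asset) portfolio as a function of the rebalancing period under proportional fees; the return distribution, fee, and target fraction are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def growth_rate(period, f=0.5, fee=0.002, steps=100_000):
    """Per-step log growth rate when rebalancing to risky fraction f every
    `period` steps and paying a proportional fee on the amount traded."""
    cash, risky, logw = 1.0 - f, f, 0.0
    for t in range(1, steps + 1):
        risky *= 1.6 if rng.random() < 0.55 else 0.6   # binary return
        if t % period == 0:
            w = cash + risky
            w -= fee * abs(risky - f * w)   # transaction cost of rebalancing
            logw += np.log(w)               # log the block growth ...
            cash, risky = 1.0 - f, f        # ... and renormalize wealth to 1
    return logw / steps

for period in (1, 2, 5, 10, 20):
    print(period, growth_rate(period))
```

With nonzero fees the growth rate is typically not maximized at period 1, which is the interior optimum the abstract refers to.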
Instrument for Real-Time Digital Nucleic Acid Amplification on Custom Microfluidic Devices
Selck, David A.
2016-01-01
Nucleic acid amplification tests that are coupled with a digital readout enable the absolute quantification of single molecules, even at ultralow concentrations. Digital methods are robust, versatile and compatible with many amplification chemistries including isothermal amplification, making them particularly invaluable to assays that require sensitive detection, such as the quantification of viral load in occult infections or detection of sparse amounts of DNA from forensic samples. A number of microfluidic platforms are being developed for carrying out digital amplification. However, the mechanistic investigation and optimization of digital assays has been limited by the lack of real-time kinetic information about which factors affect the digital efficiency and analytical sensitivity of a reaction. Commercially available instruments that are capable of tracking digital reactions in real-time are restricted to only a small number of device types and sample-preparation strategies. Thus, most researchers who wish to develop, study, or optimize digital assays rely on the rate of the amplification reaction when performed in a bulk experiment, which is now recognized as an unreliable predictor of digital efficiency. To expand our ability to study how digital reactions proceed in real-time and enable us to optimize both the digital efficiency and analytical sensitivity of digital assays, we built a custom large-format digital real-time amplification instrument that can accommodate a wide variety of devices, amplification chemistries and sample-handling conditions. Herein, we validate this instrument, we provide detailed schematics that will enable others to build their own custom instruments, and we include a complete custom software suite to collect and analyze the data retrieved from the instrument. We believe assay optimizations enabled by this instrument will improve the current limits of nucleic acid detection and quantification, improving our fundamental understanding of single-molecule reactions and providing advancements in practical applications such as medical diagnostics, forensics and environmental sampling. PMID:27760148
Sampling solution traces for the problem of sorting permutations by signed reversals
2012-01-01
Background: Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results: We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions: We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results show that, for testable-sized permutations, the algorithms DFALT and SWA produce distributions which approximate the reversal length distributions observed with a complete enumeration of the set of traces. PMID:22704580
Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Kawamoto, Masaru
This paper proposes a new technique that builds strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposes a method using prospect theory, from behavioral finance, to set a psychological bias for profit and deficit, and attempts to select the appropriate strike price of the option for higher investment efficiency. As a result, the technique produced good results and demonstrated the effectiveness of the trading model with the optimized dealing strategy.
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
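The greedy flavor of strategy (3) is easy to illustrate with the classic least-loaded-processor rule. The sketch below is a generic longest-processing-time greedy heuristic, shown only as an example of greedy static assignment; it is not the specific approximation scheme from the report.

```python
import heapq

def greedy_assign(task_times, n_procs):
    """Assign each task (longest first) to the currently least-loaded
    processor; returns the assignment and the makespan (maximum load)."""
    loads = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for t in sorted(task_times, reverse=True):
        load, p = heapq.heappop(loads)           # least-loaded processor
        assignment[p].append(t)
        heapq.heappush(loads, (load + t, p))
    return assignment, max(load for load, _ in loads)

# e.g. ten tasks on four processors
print(greedy_assign([8, 7, 6, 5, 4, 4, 3, 2, 2, 1], 4))
```

A dynamic strategy would instead reassign work at run time using observed progress and precedence information, at the price of the bookkeeping overhead the abstract mentions.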
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando
2014-01-01
Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. Since electrical energy provided by the battery is needed to temper it, the question arises of how much energy should optimally be used for tempering at which temperature. Therefore, the objective function to be optimized contains both the goal of maximizing life expectancy and that of minimizing the amount of energy used to obtain the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of the major factors that govern the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.
1995-01-01
The investigation focuses on the development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies, in which the analyses are integrated in different manners, are implemented and evaluated for the solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggest that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 × 10⁵ lb, a gross liftoff weight of 2.165 × 10⁶ lb, and a length of 181 ft is attainable. A comparison of the two approaches demonstrates that the treatment of disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch vehicle design community at present, the second optimization approach is shown to be 3-4 times more computationally efficient.
Choi, Jane Ru; Yong, Kar Wey; Tang, Ruihua; Gong, Yan; Wen, Ting; Yang, Hui; Li, Ang; Chia, Yook Chin; Pingguan-Murphy, Belinda; Xu, Feng
2017-01-01
Paper-based devices have been broadly used for the point-of-care detection of dengue viral nucleic acids due to their simplicity, cost-effectiveness, and readily observable colorimetric readout. However, their moderate sensitivity and functionality have limited their applications. Despite the above-mentioned advantages, paper substrates are lacking in their ability to control fluid flow, in contrast to the flow control enabled by polymer substrates (e.g., agarose) with readily tunable pore size and porosity. Herein, taking the benefits from both materials, the authors propose a strategy to create a hybrid substrate by incorporating agarose into the test strip to achieve flow control for optimal biomolecule interactions. As compared to the unmodified test strip, this strategy allows sensitive detection of targets with an approximately tenfold signal improvement. Additionally, the authors showcase the potential of functionality improvement by creating multiple test zones for semi-quantification of targets, suggesting that the number of visible test zones is directly proportional to the target concentration. The authors further demonstrate the potential of their proposed strategy for clinical assessment by applying it to their prototype sample-to-result test strip to sensitively and semi-quantitatively detect dengue viral RNA from the clinical blood samples. This proposed strategy holds significant promise for detecting various targets for diverse future applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. In previous studies, either the weighted error of the discharges of multiple water level stations was minimized by single-objective optimization, or multiple error assessment functions of the discharge of a single water level station were minimized by multi-objective optimization. The present work instead simultaneously minimizes the discharge errors of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km², with thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012, and their maximum discharges exceed 1,000 m³/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m × 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm. The discharges are calculated for parameter values sampled by a simplified version of Latin Hypercube sampling; the observed discharge is bracketed by the calculated discharges, which suggests that the discharge can be estimated accurately by adjusting the parameters. The discharge of a given water level station can indeed be estimated accurately by using parameter values optimized for that station; however, in some cases the discharge calculated with parameter values optimized for one station does not match the observed discharge at another station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto-optimal solutions under the condition that the error at each station, normalized by that station's minimum error, is under 3. The optimization performance of five implementations of the algorithms and of a simplified version of Latin Hypercube sampling is compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are, respectively, a genetic algorithm, an evolution strategy and a particle swarm optimization. The number of evaluations of the objective functions is 10,000. The two NSGA2 implementations in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
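The Latin Hypercube step described above can be reproduced with SciPy's quasi-Monte Carlo module. The bounds below are placeholders for the actual calibrated ranges of the fourteen PWRI model parameters.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling sample of the 14 parameters (12 hydrological + 2 initial
# tank levels); replace lower/upper with the real parameter bounds.
n_params, n_samples = 14, 200
sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit = sampler.random(n=n_samples)                    # points in [0, 1)^14
lower, upper = np.zeros(n_params), np.ones(n_params)  # placeholder bounds
params = qmc.scale(unit, lower, upper)

# Each row of `params` is one candidate parameter set; running the
# hydrological model on every row brackets the observed discharge,
# as described above.
```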
Fan, Zhong G; Gao, Xiao F; Li, Xiao B; Mao, Wen X; Chen, Li W; Tian, Nai L
2017-04-01
The optimal strategy of percutaneous coronary intervention (PCI) for patients with ST-elevation myocardial infarction (STEMI) and multivessel disease (MVD) remains controversial. This study sought to explore the optimal PCI strategy for these patients. Medline, EMBASE and the Cochrane Controlled Trials Registry were searched for relevant studies. We compared major adverse cardiac events (MACEs), as the primary end point, between the preventive PCI strategy and the culprit-vessel-only PCI strategy (CV-PCI). Further analysis of two subgroups, the complete multivessel PCI strategy during the primary procedure (CMV-PCI) and the staged PCI strategy (S-PCI), was also performed. Nine randomized trials were identified. The risk of MACEs was significantly reduced with the preventive PCI strategy (OR=0.41, 95% CI: 0.31-0.53, P<0.001) compared with CV-PCI. There were lower risks of long-term mortality, reinfarction and repeat revascularization in the preventive PCI group than in the CV-PCI group (OR=0.41, 95% CI: 0.27-0.62, P<0.001; OR=0.54, 95% CI: 0.32-0.91, P=0.021; OR=0.37, 95% CI: 0.26-0.51, P<0.001). Subgroup analysis showed that the staged PCI strategy reduced the incidence of long-term mortality relative to the CMV-PCI strategy. Preventive PCI is associated with a lower risk of MACEs in STEMI patients with MVD than the CV-PCI strategy, and the S-PCI strategy seems to be the optimal choice for these patients rather than CMV-PCI.
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first apply a state-of-the-art stochastic optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMAES), to identify low-misfit regions of the model space. These regions are then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES explores the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with the number of unknowns and that the algorithm is embarrassingly parallel. We formulate the problem using the generalized Gaussian distribution, which enables us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. The methodology is tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
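The CMAES stage maps naturally onto the ask-tell interface of the pycma package. In the sketch below, the misfit function, dimension, and low-misfit threshold are placeholders; in the paper's setting the objective wraps a forward EM solve plus a regularization term.

```python
import numpy as np
import cma

def misfit(m):
    # Placeholder data-misfit functional standing in for a forward PDE solve.
    return float(np.sum((m - 1.0) ** 2))

es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5)   # x0 and initial step size
low_misfit = []                                  # archive of low-misfit models
while not es.stop():
    models = es.ask()                            # sample the adapted Gaussian
    fits = [misfit(m) for m in models]
    es.tell(models, fits)                        # update mean and covariance
    low_misfit += [m for m, f in zip(models, fits) if f < 1e-1]  # ad hoc cut

# `low_misfit` approximates the equivalence domain; resampling it (or seeding
# MCMC with the adapted covariance) yields the ensemble of equivalent models.
```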
An effective rumor-containing strategy
NASA Astrophysics Data System (ADS)
Pan, Cheng; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2018-06-01
False rumors can lead to huge economic losses and/or social instability. Hence, mitigating the impact of bogus rumors is of primary importance. This paper focuses on the problem of how to suppress a false rumor by use of the truth. Based on a set of rational hypotheses and a novel rumor-truth mixed spreading model, the effectiveness and cost of a rumor-containing strategy are quantified. On this basis, the original problem is modeled as a constrained optimization problem (the RC model), in which the independent variable represents a rumor-containing strategy and the objective function its effectiveness. The goal of the optimization problem is to find the most effective rumor-containing strategy subject to a limited rumor-containing budget. Some optimal rumor-containing strategies are given by solving their respective RC models. The influence of different factors on the highest cost-effectiveness of an RC model is illuminated through computer experiments. The results obtained are instructive for developing effective rumor-containing strategies.
NASA Astrophysics Data System (ADS)
Fefer, M.; Dogan, M. S.; Herman, J. D.
2017-12-01
Long-term shifts in the timing and magnitude of reservoir inflows will potentially have significant impacts on water supply reliability in California, though projections remain uncertain. Here we assess the vulnerability of the statewide system to changes in total annual runoff (a function of precipitation) and the fraction of runoff occurring during the winter months (primarily a function of temperature). An ensemble of scenarios is sampled using a bottom-up approach and compared to the most recent available streamflow projections from the state's 4th Climate Assessment. We evaluate these scenarios using a new open-source version of the CALVIN model, a network flow optimization model encompassing roughly 90% of the urban and agricultural water demands in California, which is capable of running scenario ensembles on a high-performance computing cluster. The economic representation of water demand in the model yields several advantages for this type of analysis: optimized reservoir operating policies to minimize shortage cost and the marginal value of adaptation opportunities, defined by shadow prices on infrastructure and regulatory constraints. Results indicate a shift in optimal reservoir operations and high marginal value of additional reservoir storage in the winter months. The collaborative management of reservoirs in CALVIN yields increased storage in downstream reservoirs to store the increased winter runoff. This study contributes an ensemble evaluation of a large-scale network model to investigate uncertain climate projections, and an approach to interpret the results of economic optimization through the lens of long-term adaptation strategies.
Vandenberg, Wim; Duwé, Sam; Leutenegger, Marcel; Moeyaert, Benjamien; Krajnik, Bartosz; Lasser, Theo; Dedecker, Peter
2016-01-01
Stochastic optical fluctuation imaging (SOFI) is a super-resolution fluorescence imaging technique that makes use of stochastic fluctuations in the emission of the fluorophores. During a SOFI measurement multiple fluorescence images are acquired from the sample, followed by the calculation of the spatiotemporal cumulants of the intensities observed at each position. Compared to other techniques, SOFI works well under conditions of low signal-to-noise, high background, or high emitter densities. However, it can be difficult to unambiguously determine the reliability of images produced by any super-resolution imaging technique. In this work we present a strategy that enables the estimation of the variance or uncertainty associated with each pixel in the SOFI image. In addition to estimating the image quality or reliability, we show that this can be used to optimize the signal-to-noise ratio (SNR) of SOFI images by including multiple pixel combinations in the cumulant calculation. We present an algorithm to perform this optimization, which automatically takes all relevant instrumental, sample, and probe parameters into account. Depending on the optical magnification of the system, this strategy can be used to improve the SNR of a SOFI image by 40% to 90%. This gain in information is entirely free, in the sense that it does not require additional efforts or complications. Alternatively, our approach can be applied to reduce the number of fluorescence images required to meet a particular quality level by about 30% to 50%, strongly improving the temporal resolution of SOFI imaging. PMID:26977356
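In its simplest form, a second-order SOFI image reduces to the time-lagged autocumulant of each pixel's intensity fluctuations. The sketch below (numpy, synthetic data; not the published optimization algorithm, which additionally mixes multiple pixel combinations) shows that core computation:

```python
import numpy as np

def sofi2(movie, lag=1):
    """movie: (T, Y, X) frame stack; returns the 2nd-order SOFI image."""
    fluct = movie - movie.mean(axis=0)  # zero-mean fluctuation of every pixel
    return (fluct[:-lag] * fluct[lag:]).mean(axis=0)  # lagged autocumulant

# Synthetic movie: emitters blink with ~10-frame persistence plus shot noise.
rng = np.random.default_rng(1)
T, Y, X = 500, 32, 32
on = np.repeat(rng.random((T // 10, Y, X)) < 0.3, 10, axis=0)
movie = 100.0 * on + rng.poisson(10, (T, Y, X))
img = sofi2(movie, lag=1)
```

Using a nonzero lag is what suppresses shot noise: noise is uncorrelated between frames, so it averages out of the lagged product, while genuine blinking correlations survive.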
NASA Astrophysics Data System (ADS)
Metfies, Katja; Schroeder, Friedhelm; Hessel, Johanna; Wollschläger, Jochen; Micheller, Sebastian; Wolf, Christian; Kilias, Estelle; Sprong, Pim; Neuhaus, Stefan; Frickenhaus, Stephan; Petersen, Wilhelm
2016-11-01
Information on recent biomass distribution and biogeography of photosynthetic marine protists with adequate temporal and spatial resolution is urgently needed to better understand the consequences of environmental change for marine ecosystems. Here we introduce and review a molecular-based observation strategy for high-resolution assessment of these protists in space and time. It is the result of extensive technology developments, adaptations and evaluations which are documented in a number of different publications, and the results of the recently completed field testing which are introduced in this paper. The observation strategy is organized at four different levels. At level 1, samples are collected at high spatiotemporal resolution using the remotely controlled automated filtration system AUTOFIM. Resulting samples can either be preserved for later laboratory analyses, or directly subjected to molecular surveillance of key species aboard the ship via an automated biosensor system or quantitative polymerase chain reaction (level 2). Preserved samples are analyzed at the next observational levels in the laboratory (levels 3 and 4). At level 3 this involves molecular fingerprinting methods for a quick and reliable overview of differences in protist community composition. Finally, selected samples can be used to generate a detailed analysis of taxonomic protist composition via the latest next generation sequencing technology (NGS) at level 4. An overall integrated dataset of the results based on the different analyses provides comprehensive information on the diversity and biogeography of protists, including all related size classes. At the same time the cost of the observation is optimized with respect to analysis effort and time.
Dynamic optimal strategies in transboundary pollution game under learning by doing
NASA Astrophysics Data System (ADS)
Chang, Shuhua; Qin, Weihua; Wang, Xinyu
2018-01-01
In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.
Strategies for sustainable management of renewable resources during environmental change.
Lindkvist, Emilie; Ekeberg, Örjan; Norberg, Jon
2017-03-15
As a consequence of global environmental change, management strategies that can deal with unexpected change in resource dynamics are becoming increasingly important. In this paper we undertake a novel approach to studying resource growth problems using a computational form of adaptive management to find optimal strategies for prevalent natural resource management dilemmas. We scrutinize adaptive management, or learning-by-doing, to better understand how to simultaneously manage and learn about a system when its dynamics are unknown. We study important trade-offs in decision-making with respect to choosing optimal actions (harvest efforts) for sustainable management during change. This is operationalized through an artificially intelligent model where we analyze how different trends and fluctuations in growth rates of a renewable resource affect the performance of different management strategies. Our results show that the optimal strategy for managing resources with declining growth is capable of managing resources with fluctuating or increasing growth at a negligible cost, resulting in a management strategy that is both efficient and robust towards future unknown changes. To obtain this strategy, adaptive management should strive for high learning rates for new knowledge, high valuation of future outcomes and modest exploration around what is perceived as the optimal action. © 2017 The Author(s).
An inverse modeling strategy and a computer program to model garnet growth and resorption
NASA Astrophysics Data System (ADS)
Lanari, Pierre; Giuntoli, Francesco
2017-04-01
GrtMod is a computer program that allows numerical simulation of the pressure-temperature (P-T) evolution of garnet porphyroblasts based on the composition of successive growth zones preserved in natural samples. For each garnet growth stage, a new reactive bulk composition is optimized, allowing for resorption and/or fractionation of the previously crystallized garnet. The successive minimizations are performed using a heuristic search method and an objective function that quantifies the amount by which the predicted garnet composition deviates from the measured values. The automated strategy of GrtMod includes two optimization stages and one refinement stage. In this contribution, we present several application examples. The new strategy provides quantitative estimates of the optimal P-T conditions, whereas such estimates were previously derived only qualitatively from garnet isopleth intersections in equilibrium phase diagrams. GrtMod can also be used to model the evolution of the reactive bulk composition along any P-T trajectory. The results for typical MORB and metapelite compositions demonstrate that fractional crystallization models are required to derive accurate P-T information from garnet compositional zoning. GrtMod can also be used to retrieve complex garnet histories involving several stages of resorption. For instance, it has been used to model the P-T conditions of garnet growth in grains from the Sesia Zone (Western Alps). The compositional variability of successive growth zones is characterized using standardized X-ray maps and the program XMapTools. Permian garnet cores crystallized under granulite facies conditions (T > 800°C and P = 6 kbar), whereas Alpine garnet rims grew at eclogite facies conditions (650°C and 16 kbar) involving several successive episodes of resorption. The model predicts that up to 50 vol% of garnet was dissolved before a new episode of garnet growth.
NASA Astrophysics Data System (ADS)
Nicholson, B.; Klise, K. A.; Laird, C. D.; Ravikumar, A. P.; Brandt, A. R.
2017-12-01
In order to comply with current and future methane emissions regulations, natural gas producers must develop emissions monitoring strategies for their facilities. In addition, regulators must develop air monitoring strategies over wide areas incorporating multiple facilities. However, in both of these cases, only a limited number of sensors can be deployed. With a wide variety of sensors to choose from in terms of cost, precision, accuracy, spatial coverage, location, orientation, and sampling frequency, it is difficult to design robust monitoring strategies for different scenarios while systematically considering the tradeoffs between different sensor technologies. In addition, the geography, weather, and other site-specific conditions can have a large impact on the performance of a sensor network. In this work, we demonstrate methods for calculating optimal sensor networks. Our approach can incorporate tradeoffs between vastly different sensor technologies, optimize over typical wind conditions for a particular area, and consider different objectives such as time to detection or geographic coverage. We do this by pre-computing site-specific scenarios and using them as input to a mixed-integer, stochastic programming problem that solves for a sensor network that maximizes the effectiveness of the detection program. Our methods and approach have been incorporated within an open source Python package called Chama with the goal of providing facility operators and regulators with tools for designing more effective and efficient monitoring systems. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
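The scenario-based placement problem described here can be illustrated with a generic mixed-integer model. The sketch below uses PuLP and invented numbers, and is not Chama's actual API: `detect_time[s][a]` plays the role of the pre-computed plume-transport scenarios, and the model picks at most `budget` sensor sites to minimize the probability-weighted time to detection.

```python
import pulp

scenarios = ["leak1", "leak2", "leak3"]             # pre-computed scenarios (assumed)
sites = ["A", "B", "C", "D"]                        # candidate sensor locations (assumed)
budget = 2
prob = {"leak1": 0.5, "leak2": 0.3, "leak3": 0.2}   # scenario weights (assumed)
detect_time = {                                     # hours to detection per site (assumed)
    "leak1": {"A": 1, "B": 4, "C": 9, "D": 9},
    "leak2": {"A": 6, "B": 2, "C": 3, "D": 9},
    "leak3": {"A": 9, "B": 8, "C": 2, "D": 1},
}

m = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
y = pulp.LpVariable.dicts("install", sites, cat="Binary")
x = pulp.LpVariable.dicts("covers", [(s, a) for s in scenarios for a in sites], cat="Binary")

# Objective: expected (probability-weighted) detection time.
m += pulp.lpSum(prob[s] * detect_time[s][a] * x[(s, a)] for s in scenarios for a in sites)
m += pulp.lpSum(y[a] for a in sites) <= budget      # sensor budget
for s in scenarios:
    m += pulp.lpSum(x[(s, a)] for a in sites) == 1  # each scenario credited to one sensor
    for a in sites:
        m += x[(s, a)] <= y[a]                      # ...which must actually be installed

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("install at:", [a for a in sites if y[a].value() == 1])
```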
Sekiguchi, Masau; Igarashi, Ataru; Matsuda, Takahisa; Matsumoto, Minori; Sakamoto, Taku; Nakajima, Takeshi; Kakugawa, Yasuo; Yamamoto, Seiichiro; Saito, Hiroshi; Saito, Yutaka
2016-02-01
There have been few cost-effectiveness analyses of population-based colorectal cancer screening in Japan, and there is no consensus on the optimal use of total colonoscopy and the fecal immunochemical test for colorectal cancer screening with regard to cost-effectiveness and total colonoscopy workload. The present study aimed to examine the cost-effectiveness of colorectal cancer screening using Japanese data to identify the optimal use of total colonoscopy and the fecal immunochemical test. We developed a Markov model to assess the cost-effectiveness of colorectal cancer screening offered to an average-risk population aged 40 years or over. The cost, quality-adjusted life-years and number of total colonoscopy procedures required were evaluated for three screening strategies: (i) a fecal immunochemical test-based strategy; (ii) a total colonoscopy-based strategy; (iii) a strategy of adding population-wide total colonoscopy at 50 years to a fecal immunochemical test-based strategy. All three strategies dominated no screening. Among the three, Strategy 1 was dominated by Strategy 3, and the incremental costs per quality-adjusted life-year gained for Strategy 2 against Strategies 1 and 3 were JPY 293 616 and JPY 781 342, respectively. Within the Japanese threshold (JPY 5-6 million per QALY gained), Strategy 2 was the most cost-effective, followed by Strategy 3; however, Strategy 2 required more than double the number of total colonoscopy procedures required by the other strategies. The total colonoscopy-based strategy could be the most cost-effective for population-based colorectal cancer screening in Japan. However, it requires more total colonoscopy procedures than the other strategies. Depending on total colonoscopy capacity, the strategy of adding total colonoscopy for individuals at a specified age to a fecal immunochemical test-based screening may be an optimal solution. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Cyber War Game in Temporal Networks
Cho, Jin-Hee; Gao, Jianxi
2016-01-01
In a cyber war game where a network is fully distributed and characterized by resource constraints and high dynamics, attackers or defenders often face a situation that may require optimal strategies to win the game with minimum effort. Given the system goal states of attackers and defenders, we study what strategies attackers or defenders can take to reach their respective system goal state (i.e., winning system state) with minimum resource consumption. However, due to the dynamics of a network caused by a node’s mobility, failure or its resource depletion over time or action(s), this optimization problem becomes NP-complete. We propose two heuristic strategies in a greedy manner based on a node’s two characteristics: resource level and influence based on k-hop reachability. We analyze complexity and optimality of each algorithm compared to optimal solutions for a small-scale static network. Further, we conduct a comprehensive experimental study for a large-scale temporal network to investigate best strategies, given a different environmental setting of network temporality and density. We demonstrate the performance of each strategy under various scenarios of attacker/defender strategies in terms of win probability, resource consumption, and system vulnerability. PMID:26859840
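A greedy influence heuristic of the kind the paper describes can be sketched as follows (networkx, with made-up parameters; a hedged illustration, not the authors' algorithm): each round claims the node whose k-hop neighborhood adds the most not-yet-covered nodes.

```python
import networkx as nx

def greedy_khop(G, k=2, n_picks=3):
    """Greedily pick nodes maximizing newly covered k-hop reachability."""
    covered, picks = set(), []
    for _ in range(n_picks):
        best, gain = None, -1
        for v in G.nodes:
            if v in picks:
                continue
            reach = set(nx.single_source_shortest_path_length(G, v, cutoff=k))
            if len(reach - covered) > gain:
                best, gain = v, len(reach - covered)
        picks.append(best)
        covered |= set(nx.single_source_shortest_path_length(G, best, cutoff=k))
    return picks

G = nx.erdos_renyi_graph(50, 0.08, seed=0)  # stand-in for one temporal-network snapshot
print(greedy_khop(G, k=2, n_picks=3))
```

In the paper's setting this selection would be re-run as the temporal network evolves, and the companion resource-level heuristic would rank nodes by remaining resources instead of coverage.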
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao
Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone, which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone-enhanced images can be generated using an undersampled UTE sequence with no visible compromise in image quality, and they preserved bone-to-air contrast with as low as a 25% sampling rate. Conclusions: This UTE strategy with angular undersampling preserves the image quality and contrast of cortical bone, while reducing the total scanning time by as much as 75%. The quantitative results of R2* and the water fraction of skull based on Dixon analysis of UTE images acquired at multiple echo times provide guidance for the clinical adoption and further parameter optimization of the UTE sequence when used for radiation therapy and MR-based PET attenuation correction.
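The R2* values quoted above come from fitting the multi-echo UTE magnitudes to an exponential decay. A minimal per-voxel version of that fit (scipy, with assumed echo times and a simulated signal; not the authors' pipeline) looks like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, r2s):
    return s0 * np.exp(-r2s * te)  # mono-exponential T2* decay model

te = np.array([0.07, 0.5, 1.0, 2.0, 4.0])  # echo times in ms (assumed values)
rng = np.random.default_rng(2)
sig = 100 * np.exp(-0.25 * te) + rng.normal(0, 1, te.size)  # simulated voxel signal

(s0_fit, r2s_fit), _ = curve_fit(decay, te, sig, p0=(sig[0], 0.1))
print(f"R2* = {r2s_fit:.2f} /ms")  # the study reports ~0.2-0.3 /ms for cortical bone
```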
Optimal management of non-Markovian biological populations
Williams, B.K.
2007-01-01
Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
NASA Astrophysics Data System (ADS)
Venkateswaran, Kasthuri; La Duc, Myron; James; Osman, Shariff; Andersen, Gary; Huber, Julie; Sogin, Mitchell
The scientific literature teems with reports of microbial diversity from seemingly every niche imaginable, from deep within Antarctic ice to ocean-floor hydrothermal systems. The fields of applied microbiology and molecular biology have made enormous technological advancements over the past two decades, from the development of PCR-amplification of DNA to the forensic detection of what many consider to be "miniscule" amounts of blood and other such biomatter. Despite advances in the specificity and sensitivity of molecular biological technologies, the abilities to efficiently sample and extract nucleic acids from low-biomass matrices, and to accurately describe the true microbial diversity housed in such samples, remain significant challenges. To minimize the likelihood of forward contamination of Mars, Europa, or any other extraterrestrial environment, significant effort is invested to ensure that environments in which spacecraft are assembled are maintained appropriately and kept as free of microbial contamination as possible. To this end, routine analyses, largely based on spore counts and cultivation-based approaches, are carried out to validate the cleanliness of such surfaces. However, only by applying the most efficient and accurate molecular means of analysis can conclusions be drawn on the actual bioburden and microbial diversity associated with these environments. For any measure of sample-derived bioburden, a large portion is inevitably lost in sampling. Since the surface area of a spacecraft is fixed, it is not possible to simply increase sample size to improve yield. It is therefore critical to assure that current methods of purification of biomolecules sampled from this limited resource are 1) optimal for achieving total yield of the biota present and 2) conserving of the true microbial diversity of the sampled environment. This project focuses on the development of capabilities to effectively and efficiently generate a genetic inventory of microbes present on the surfaces of spacecraft and associated clean-room facilities. This entails the evaluation and optimization of molecular-based strategies designed to assess microbial burden and diversity arising from samples of low biomass. Such strategies include conventional clone library analysis, DNA microarray screening, and V6-Tag Sequencing. The capabilities resulting from this work will enable NASA to establish genetic inventories of spacecraft, as recommended by the National Research Council, to better understand the risk of forward contamination.
Pareto fronts for multiobjective optimization design on materials data
NASA Astrophysics Data System (ADS)
Gopakumar, Abhijith; Balachandran, Prasanna; Gubernatis, James E.; Lookman, Turab
Optimizing multiple properties simultaneously is vital in materials design. Here we apply information-driven, statistical optimization strategies blended with machine learning methods to address multi-objective optimization tasks on materials data. These strategies aim to find the Pareto front consisting of non-dominated data points from a set of candidate compounds with known characteristics. The objective is to find the Pareto front in as few additional measurements or calculations as possible. We show how exploration of the data space to find the front is achieved by using uncertainties in predictions from regression models. We test our proposed design strategies on multiple, independent data sets including those from computations as well as experiments. These include data sets for MAX phases, piezoelectrics and multicomponent alloys.
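Extracting the Pareto front from a table of candidate properties is the core subroutine here. The sketch below (numpy, with random stand-in data) keeps the non-dominated rows when all objectives are to be maximized:

```python
import numpy as np

def pareto_mask(points):
    """points: (N, M) objectives to maximize; True marks non-dominated rows."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if keep[i]:
            dominated = (np.all(points <= points[i], axis=1)
                         & np.any(points < points[i], axis=1))
            keep &= ~dominated  # drop every row strictly worse than row i
    return keep

props = np.random.default_rng(3).random((200, 2))  # e.g. two scaled material properties
front = props[pareto_mask(props)]
print(len(front), "non-dominated candidates")
```

The design loop the abstract describes then asks which unmeasured candidate, given the regression uncertainties, is most likely to extend this front, measures or computes it, and repeats.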
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, the effect of investment horizon on the efficiency of portfolio selection is examined using inverse statistics analysis. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used for detecting the distribution of the time at which a stochastic process first exits a zone. This analysis was used in Refs. 1 and 2 for studying financial returns time series. This distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking into account the optimal investment horizon in the TSE leads to greater efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy not only fails to produce more efficient portfolios, but longer investment horizons actually provide greater efficiency.
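The exit-time statistic is easy to state concretely: for each starting day, record how long the cumulative log-return takes to first cross a target level ρ; the mode of the resulting distribution is the optimal investment horizon. A hedged sketch with synthetic returns (all parameters assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
logret = rng.normal(0.0003, 0.01, 50_000)  # synthetic daily log-returns (assumed)

def exit_times(r, rho=0.05, max_h=500):
    times = []
    for t in range(len(r) - max_h):
        cum = np.cumsum(r[t:t + max_h])
        hit = np.argmax(cum >= rho)       # first index where the target is reached
        if cum[hit] >= rho:               # guard: argmax returns 0 when never reached
            times.append(hit + 1)
    return np.array(times)

tau = exit_times(logret)
hist, edges = np.histogram(tau, bins=50)
print("most likely horizon ~", edges[np.argmax(hist)], "days")
```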
Measuring the value of accurate link prediction for network seeding.
Wei, Yijin; Spencer, Gwen
2017-01-01
The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
Multiple indicator cokriging with application to optimal sampling for environmental monitoring
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2005-02-01
A probabilistic solution to the problem of spatial interpolation of a variable at an unsampled location consists of estimating the local cumulative distribution function (cdf) of the variable at that location from values measured at neighbouring locations. As this distribution is conditional to the data available at neighbouring locations it incorporates the uncertainty of the value of the variable at the unsampled location. Geostatistics provides a non-parametric solution to such problems via the various forms of indicator kriging. In a least squares sense indicator cokriging is theoretically the best estimator but in practice its use has been inhibited by problems such as an increased number of violations of order relations constraints when compared with simpler forms of indicator kriging. In this paper, we describe a methodology and an accompanying computer program for estimating a vector of indicators by simple indicator cokriging, i.e. simultaneous estimation of the cdf for K different thresholds, {F(u,zk), k=1,…,K}, by solving a unique cokriging system for each location at which an estimate is required. This approach produces a variance-covariance matrix of the estimated vector of indicators which is used to fit a model to the estimated local cdf by logistic regression. This model is used to correct any violations of order relations and automatically ensures that all order relations are satisfied, i.e. the estimated cumulative distribution function, F̂(u,zk), is such that F̂(u,zk) ∈ [0,1] for all zk, and F̂(u,zk) ⩽ F̂(u,zk′) for zk ⩽ zk′.
Optimal control of anthracnose using mixed strategies.
Fotsa Mbogne, David Jaures; Thron, Christopher
2015-11-01
In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.
Optimal estimation of two-qubit pure-state entanglement
NASA Astrophysics Data System (ADS)
Acín, Antonio; Tarrach, Rolf; Vidal, Guifré
2000-06-01
We present optimal measuring strategies for an estimation of the entanglement of unknown two-qubit pure states and of the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations, to conclude in the first case that a local, although collective, measurement suffices to estimate entanglement, a nonlocal property, optimally.
Fernández, Elena; Vidal, Lorena; Canals, Antonio
2016-08-05
This study reports a new composite based on ZSM-5 zeolite decorated with iron oxide magnetic nanoparticles as a valuable sorbent for magnetic solid-phase extraction (MSPE). A proposal is made to determine benzene, toluene, ethylbenzene and xylenes (BTEX) as model analytes in water samples using gas chromatography-mass spectrometry. A two-step multivariate optimization strategy, using Plackett-Burman and circumscribed central composite designs, was employed to optimize the experimental parameters affecting MSPE. The method was evaluated under optimized extraction conditions (i.e., amount of sorbent, 138 mg; extraction time, 11 min; sample pH, pH of water (i.e., 5.5-6.5); eluent solvent volume, 0.5 mL; and elution time, 5 min), obtaining a linear response from 1 to 100 μg L⁻¹ for benzene; from 10 to 100 μg L⁻¹ for toluene, ethylbenzene and o-xylene; and from 10 to 75 μg L⁻¹ for m,p-xylene. The repeatability of the proposed method was evaluated at a 40 μg L⁻¹ spiking level and coefficients of variation ranged between 8 and 11% (n=5). Limits of detection were found to be 0.3 μg L⁻¹ for benzene and 3 μg L⁻¹ for the other analytes. These values satisfy the current regulations of the Environmental Protection Agency and the European Union for BTEX content in waters intended for human consumption. Finally, drinking water, wastewater and river water were selected as real water samples to assess the applicability of the method. Relative recoveries varied between 85% and 114%, showing negligible matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is significant, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
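The first of the three optimizations, message aggregation, is conceptually simple: pack all boundary events destined for the same neighbor into a single message per step instead of sending them one by one. A hedged mpi4py sketch of that idea (a 1-D ring decomposition and dummy events, not the paper's implementation):

```python
from collections import defaultdict
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
neighbors = [(rank - 1) % size, (rank + 1) % size]  # illustrative 1-D decomposition

# Dummy boundary events produced during one KMC step: (destination, payload).
events = [((rank + 1) % size, ("site", i)) for i in range(100)]

outbox = defaultdict(list)
for dest, payload in events:
    outbox[dest].append(payload)          # aggregate: one buffer per neighbor

reqs = [comm.isend(outbox.get(n, []), dest=n, tag=7) for n in neighbors]
inbox = {n: comm.recv(source=n, tag=7) for n in neighbors}
MPI.Request.waitall(reqs)
print(rank, "got", sum(map(len, inbox.values())), "events in", len(neighbors), "messages")
```

The paper layers the other two optimizations on top of this: shared memory for ranks on the same node, and neighborhood collectives so the MPI runtime can schedule the halo exchange.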
Esteve, Rosa; López-Martínez, Alicia E; Peters, Madelon L; Serrano-Ibáñez, Elena R; Ruiz-Párraga, Gema T; Ramírez-Maestre, Carmen
2018-01-01
Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.
On optimal strategies in event-constrained differential games
NASA Technical Reports Server (NTRS)
Heymann, M.; Rajan, N.; Ardema, M.
1985-01-01
Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates
Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung
2015-01-01
Natural products have made significant contributions to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve the ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649
de Koning, Jos J; van der Zweep, Cees-Jan; Cornelissen, Jesper; Kuiper, Bouke
2013-03-01
Optimal pacing strategy was determined for breaking the world speed record on a human-powered vehicle (HPV) using an energy-flow model in which the rider's physical capacities, the vehicle's properties, and the environmental conditions were included. Power data from world-record attempts were compared with data from the model, and race protocols were adjusted to the results from the model. HPV performance can be improved by using an energy-flow model for optimizing race strategy. A biphased in-run followed by a sprint gave best results.
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Yuen, Kam Chuen; Shen, Ying
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
Optimal placement of tuning masses for vibration reduction in helicopter rotor blades
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1988-01-01
Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
The Evolution of Generosity in the Ultimatum Game.
Hintze, Arend; Hertwig, Ralph
2016-09-28
When humans fail to make optimal decisions in strategic games and economic gambles, researchers typically try to explain why that behaviour is biased. To this end, they search for mechanisms that cause human behaviour to deviate from what seems to be the rational optimum. But perhaps human behaviour is not biased; perhaps research assumptions about the optimality of strategies are incomplete. In the one-shot anonymous symmetric ultimatum game (UG), humans fail to play optimally as defined by the Nash equilibrium. However, the distinction between kin and non-kin-with kin detection being a key evolutionary adaptation-is often neglected when deriving the "optimal" strategy. We computationally evolved strategies in the UG that were equipped with an evolvable probability to discern kin from non-kin. When an opponent was not kin, agents evolved strategies that were similar to those used by humans. We therefore conclude that the strategy humans play is not irrational. The deviation between behaviour and the Nash equilibrium may rather be attributable to key evolutionary adaptations, such as kin detection. Our findings further suggest that social preference models are likely to capture mechanisms that permit people to play optimally in an evolutionary context. Once this context is taken into account, human behaviour no longer appears irrational.
Motor planning under temporal uncertainty is suboptimal when the gain function is asymmetric
Ota, Keiji; Shinya, Masahiro; Kudo, Kazutoshi
2015-01-01
For optimal action planning, the gain/loss associated with actions and the variability in motor output should both be considered. A number of studies make conflicting claims about the optimality of human action planning but cannot be reconciled due to their use of different movements and gain/loss functions. The disagreement is possibly because of differences in the experimental design and differences in the energetic cost of participant motor effort. We used a coincident timing task, which requires decision making with constant energetic cost, to test the optimality of participant's timing strategies under four configurations of the gain function. We compared participant strategies to an optimal timing strategy calculated from a Bayesian model that maximizes the expected gain. We found suboptimal timing strategies under two configurations of the gain function characterized by asymmetry, in which higher gain is associated with higher risk of zero gain. Participants showed a risk-seeking strategy by responding closer than optimal to the time of onset/offset of zero gain. Meanwhile, there was good agreement of the model with actual performance under two configurations of the gain function characterized by symmetry. Our findings show that human ability to make decisions that must reflect uncertainty in one's own motor output has limits that depend on the configuration of the gain function. PMID:26236227
Difficulty of distinguishing product states locally
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.
2017-01-01
Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.
Optimal Platform Strategies in the Smartphone Market
NASA Astrophysics Data System (ADS)
Unno, Masaru; Xu, Hua
In a smartphone market, smartphone makers encourage smartphone application providers (AP) to create more popular smartphone applications by making a revenue-sharing contract with the AP and providing application-purchasing support to end users. In this paper, we study the revenue-sharing and application-purchasing support problem between a risk-averse smartphone maker and a smartphone application provider. The problem is formulated as the smartphone maker's risk-sensitive stochastic control problem. The sufficient conditions for the existence of the optimal revenue-sharing strategy, the optimal application-purchasing support strategy and the incentive compatible effort recommended to the AP are obtained. The effects of the smartphone maker's risk-sensitivity on the optimal strategies are also discussed. A numerical example is solved to show the computational aspects of the problem.
Aerodynamic Shape Optimization Using Hybridized Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2003-01-01
An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application). The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
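For readers unfamiliar with DE, the core loop is compact. The sketch below is a generic rand/1/bin DE in Python, with a sphere function standing in for the Navier-Stokes evaluation of a candidate design; it is an illustration, not the paper's hybridized optimizer:

```python
import numpy as np

def de(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                        # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

best, val = de(lambda x: float(np.sum(x ** 2)), np.array([[-5.0, 5.0]] * 6))
print(best, val)
```

The hybridizations the paper explores would, for example, replace some expensive trial evaluations with cheap neural-network surrogates or refine the incumbent design with a local search.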
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North American Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
Synthesizing epidemiological and economic optima for control of immunizing infections.
Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T
2011-08-23
Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
Development of a codon optimization strategy using the efor RED reporter gene as a test case
NASA Astrophysics Data System (ADS)
Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila
2018-04-01
Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically related drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error; to avoid this, a codon optimization strategy was developed based on the E. coli codon usage, using the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding valine and glycine are substituted with the second most frequently used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed for designing an optimal sequence for heterologous protein production in E. coli.
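The substitution rule itself is mechanical, as the sketch below illustrates. It is a hedged reconstruction, not the authors' script: the preferred codons follow commonly tabulated E. coli K-12 usage (most frequent for Ser/Leu/Pro/Thr, second most frequent for Val/Gly) and should be checked against an authoritative usage table; Biopython is used only to translate codons.

```python
from Bio.Seq import Seq  # Biopython

PREFERRED = {  # assumed from standard E. coli usage tables
    "S": "AGC", "L": "CTG", "P": "CCG", "T": "ACC",  # most frequent codon
    "V": "GTT", "G": "GGT",                          # second most frequent codon
}

def optimize_cds(dna):
    """Replace Ser/Leu/Pro/Thr/Val/Gly codons per the strategy; keep the rest."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    out = []
    for codon in codons:
        aa = str(Seq(codon).translate())
        out.append(PREFERRED.get(aa, codon))  # other amino acids keep their codon
    return "".join(out)

print(optimize_cds("TCACTACCAACAGTAGGA"))  # toy Ser-Leu-Pro-Thr-Val-Gly ORF
```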
Holmes, Thomas D; Guilmette, Raymond A; Cheng, Yung Sung; Parkhurst, Mary Ann; Hoover, Mark D
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study was undertaken to obtain aerosol samples resulting from a large-caliber DU penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post perforation, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the crew locations in the test vehicles. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for DU concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, Thomas D.; Guilmette, Raymond A.; Cheng, Yung-Sung
2009-03-01
The Capstone Depleted Uranium Aerosol Study was undertaken to obtain aerosol samples resulting from a kinetic-energy cartridge with a large-caliber depleted uranium (DU) penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post-impact, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the vehicle commander, loader, gunner, and driver. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for depleted uranium concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia
2014-12-03
A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance was critically studied by means of cross-validation and application of the Pareto optimality criterion. The method was applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity, compared with the models obtained by well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
Guomundsdottir, S.; Applegate, Lynn M.; Arnason, I.O.; Kristmundsson, A.; Purcell, Maureen K.; Elliott, Diane G.
2017-01-01
Renibacterium salmoninarum, the causative agent of salmonid bacterial kidney disease (BKD), is endemic in many wild trout species in northerly regions. The aim of the present study was to determine the optimal R. salmoninarum sampling/testing strategy for wild brown trout (Salmo trutta L.) populations in Iceland. Fish were netted in a lake and multiple organs—kidney, spleen, gills, oesophagus and mid-gut—were sampled and subjected to five detection tests, i.e., culture, polyclonal enzyme-linked immunosorbent assay (pELISA) and three different PCR tests. The results showed that each fish had encountered R. salmoninarum, but there were marked differences between the results obtained depending on the organ and test. The bacterium was not cultured from any kidney sample, while all kidney samples were positive by pELISA. At least one organ from 92.9% of the fish tested positive by PCR. The results demonstrated that the choice of tissue and diagnostic method can dramatically influence the outcome of R. salmoninarum surveys.
Lobréaux, Stéphane; Melodelima, Christelle
2015-02-01
We tested the use of Generalized Linear Mixed Models (GLMMs) to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize the capacity of GLMMs to detect the association between markers and selection coefficients as environmental data, in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selection intensities on the efficiency of the method was determined. Finally, we applied these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.
A new inertia weight control strategy for particle swarm optimization
NASA Astrophysics Data System (ADS)
Zhu, Xianming; Wang, Hongbo
2018-04-01
Particle Swarm Optimization (PSO) is a member of the swarm intelligence family of algorithms, inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, balances the algorithm's exploration and exploitation performance. This paper proposes a new inertia weight control strategy, and a PSO using this strategy is tested on four benchmark functions. The results show that the new strategy gives the PSO better performance.
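As general context for how an inertia weight enters the algorithm (the abstract does not detail the paper's specific control strategy), the following is a minimal sketch of the canonical PSO velocity update with a linearly decreasing inertia weight, a common baseline schedule; all parameter values are illustrative assumptions:

    import numpy as np

    def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
        # One PSO update; the inertia weight w scales the previous velocity.
        rng = rng or np.random.default_rng()
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v

    def inertia(t, t_max, w_start=0.9, w_end=0.4):
        # Large w early favors exploration; small w late favors exploitation.
        return w_start - (w_start - w_end) * t / t_max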
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that TCM strategies optimal for MBIR are distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
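For reference, a standard form of the local detectability index for a non-prewhitening observer, computed from the local MTF and NPS, is shown below; this is the conventional definition, not necessarily the exact objective used in the study:

    d'^2 = \frac{\left[\iint W_{\mathrm{task}}^2\,\mathrm{MTF}^2\,\mathrm{d}f_x\,\mathrm{d}f_y\right]^2}{\iint W_{\mathrm{task}}^2\,\mathrm{MTF}^2\,\mathrm{NPS}\,\mathrm{d}f_x\,\mathrm{d}f_y}

where W_task(f_x, f_y) is the spatial-frequency content of the imaging task; TCM and regularization enter through their effect on the local MTF and NPS.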
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to... regime for the optimization algorithm. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted with a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then used to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed robust optimal control method can maintain safe operation of the SOFC system at maximum efficiency under load and uncertainty variations.
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and a lesion presenting as GGO is more likely to be malignant than a common solid lung nodule. Automatic recognition of GGO CT imaging signs is therefore of great importance for early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and their performance has improved only slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed over multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy can identify the optimal fine-tuned model, and fusing multiple CNN models yields better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and an F1 score of 0.83. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
Mohsenikia, Atefeh; Gholami, Ali; Masoum, Saeed; Abbasi, Saleheh
2017-09-01
This study presents a new strategy for the simultaneous quantification of two industrial contaminants. An excitation-emission fluorescence data matrix combined with a three-way chemometric method, parallel factor analysis, was used for the determination of nonylphenol ethoxylate (NPE-9), a nonionic surfactant, and 2-naphthalene sulfonate (2-NS) in wastewater samples. Notably, this method can resolve overlapping signals into spectral and relative concentration profiles. By spiking known concentrations of these compounds into the wastewater samples, the accuracy of the proposed method was validated and recoveries of the spiked values were calculated. The high recoveries (90-110%) obtained for the wastewater samples indicate that the present method can be used successfully to determine analyte concentrations in environmental contamination. The photocatalytic degradation of NPE-9 and 2-NS in aqueous solution was studied using a CoTiO3 nanoparticle catalyst synthesized by the sol-gel technique. The catalytic activity of the prepared nanoparticles was measured in a batch photoreactor containing appropriate solutions of these compounds under UV irradiation. The photodegradation process was optimized using a central composite design. The CoTiO3 showed high activity for UV-photocatalytic degradation of NPE-9 and 2-NS.
NASA Astrophysics Data System (ADS)
Bellingeri, Michele; Agliari, Elena; Cassi, Davide
2015-10-01
The best strategy to immunize a complex network is usually evaluated in terms of the percolation threshold, i.e. the number of vaccine doses that makes the largest connected cluster (LCC) vanish. The strategy inducing the minimum percolation threshold represents the optimal way to immunize the network. Here we show that the efficacy of immunization strategies can change during the immunization process. This means that, if the number of doses is limited, the best strategy is not necessarily the one leading to the smallest percolation threshold. This outcome cautions against relying on a single global measure when evaluating the best immunization strategy.
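The dose-by-dose comparison described here can be reproduced with a few lines of network code. Below is a minimal sketch using networkx with degree-based targeting on a synthetic scale-free graph; the graph model, size, and dose budget are illustrative assumptions, not the paper's setup:

    import networkx as nx

    def lcc_size(G):
        # Number of nodes in the largest connected cluster (LCC).
        return max((len(c) for c in nx.connected_components(G)), default=0)

    def immunize(G, rank, doses):
        # Remove `doses` nodes in decreasing `rank` order; record the LCC after each dose.
        G = G.copy()
        sizes = []
        for node in sorted(G.nodes, key=rank, reverse=True)[:doses]:
            G.remove_node(node)
            sizes.append(lcc_size(G))
        return sizes

    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    degree = dict(G.degree)
    # With a limited budget, compare whole curves dose-by-dose rather than
    # only the final percolation threshold.
    curve = immunize(G, rank=degree.get, doses=100)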
Optimal generator bidding strategies for power and ancillary services
NASA Astrophysics Data System (ADS)
Morinec, Allen G.
As the electric power industry transitions to a deregulated market, power transactions are made on price rather than cost. Generation companies are interested in maximizing their profits rather than overall system efficiency. A method to equitably compensate generation providers for real power and ancillary services, such as reactive power and spinning reserve, will ensure a competitive market with an adequate number of suppliers. Optimizing the generation product mix during bidding is necessary to maximize a generation company's profits. The objective of this research is to determine and formulate appropriate optimal bidding strategies for a generation company in both the energy and ancillary services markets. These strategies should incorporate the capability curves of the company's generators as constraints to define the optimal product mix and price offered in the day-ahead and real-time spot markets. To achieve this goal, a two-player model was composed to simulate market auctions for power generation. A dynamic game methodology was developed to identify Nash equilibria and mixed-strategy Nash equilibria as optimal generation bidding strategies for two-player non-cooperative variable-sum matrix games with incomplete information. These games integrated the generation product mix of real power, reactive power, and spinning reserve, with the generators' capability curves as constraints. The research includes simulations of market auctions in which strategies were tested for generators with different unit constraints, costs, types of competitors, strategies, and demand levels. Studies on the capability of large hydrogen-cooled synchronous generators were used to derive equations that define the exact shape of the capability curve from the intersections of the arcs defined by the centers and radial vectors of the rotor, stator, and steady-state stability limits. The available reactive reserve and spinning reserve were calculated given a generator operating point in the P-Q plane. Four computer programs were developed to automatically perform the market auction simulations using the equal incremental cost rule. The software calculates the payoffs for the two competitors, dispatches six generators, and allocates ancillary services for 64 combinations of bidding strategies, three levels of system demand, and three different types of competitors. Matrix game theory was used to calculate Nash equilibrium solutions and mixed-strategy Nash solutions as the optimal generator bidding strategies. A method was devised to incorporate ancillary services into the generation bidding strategy, to assure an adequate supply of ancillary services, and to allocate these necessary resources to the on-line units. The optimal generator bid strategy in a power auction was shown to be the Nash equilibrium solution found in two-player variable-sum matrix games.
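As a toy illustration of the bimatrix-game machinery referred to above, the sketch below brute-forces the pure-strategy Nash equilibria of a two-player variable-sum game; the payoff numbers are invented for illustration and bear no relation to the dissertation's auction data:

    import numpy as np

    def pure_nash(A, B):
        # A[i, j]: row player's payoff; B[i, j]: column player's payoff.
        eqs = []
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                row_best = A[i, j] >= A[:, j].max()  # no profitable row deviation
                col_best = B[i, j] >= B[i, :].max()  # no profitable column deviation
                if row_best and col_best:
                    eqs.append((i, j))
        return eqs

    # Toy 2x2 energy/reserve bidding game.
    A = np.array([[5.0, 2.0], [6.0, 3.0]])
    B = np.array([[5.0, 6.0], [2.0, 3.0]])
    print(pure_nash(A, B))  # [(1, 1)]: both players bid aggressively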
MO-G-18A-01: Radiation Dose Reducing Strategies in CT, Fluoroscopy and Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahesh, M; Gingold, E; Jones, A
2014-06-15
Advances in medical x-ray imaging have provided significant benefits to patient care. According to NCRP Report No. 160, more than 400 million x-ray procedures are performed annually in the United States alone, contributing nearly half of all radiation exposure to the US population. Similar growth trends in medical x-ray imaging are observed worldwide. The apparent increase in the number of medical x-ray imaging procedures and new protocols, and the associated radiation dose and risk, have drawn considerable attention. This has led to a number of technological innovations such as tube current modulation, iterative reconstruction algorithms, dose alerts, dose displays, flat panel digital detectors, high-efficiency digital detectors, storage phosphor radiography, and variable filters that enable users to acquire medical x-ray images at much lower radiation dose. Along with these, there are a number of radiation dose optimization strategies that users can adopt to effectively lower radiation dose in medical x-ray procedures. The main objectives of this SAM course are to provide information on these strategies and on how to implement radiation dose optimization in CT, fluoroscopy and radiography. Learning Objectives: To update the impact of technological advances on dose optimization in medical imaging. To identify radiation optimization strategies in computed tomography. To describe strategies for configuring fluoroscopic equipment that yield optimal images at reasonable radiation dose. To assess ways to configure digital radiography systems and recommend ways to improve image quality at optimal dose.
Convergent evolution of vascular optimization in kelp (Laminariales).
Drobnitch, Sarah Tepler; Jensen, Kaare H; Prentice, Paige; Pittermann, Jarmila
2015-10-07
Terrestrial plants and mammals, although separated by a great evolutionary distance, have each arrived at a highly conserved body plan in which universal allometric scaling relationships govern the anatomy of vascular networks and key functional metabolic traits. The universality of allometric scaling suggests that these phyla have each evolved an 'optimal' transport strategy that has been overwhelmingly adopted by extant species. To truly evaluate the dominance and universality of vascular optimization, however, it is critical to examine other, lesser-known, vascularized phyla. The brown algae (Phaeophyceae) are one such group: as distantly related to plants as mammals, they have convergently evolved a plant-like body plan and a specialized phloem-like transport network. To evaluate possible scaling and optimization in the kelp vascular system, we developed a model of optimized transport anatomy and tested it with measurements of the giant kelp, Macrocystis pyrifera, which is among the largest and most successful of macroalgae. We also evaluated three classical allometric relationships pertaining to plant vascular tissues with a diverse sampling of kelp species. Macrocystis pyrifera displays strong scaling relationships between all tested vascular parameters and agrees with our model; other species within the Laminariales display weak or inconsistent vascular allometries. The lack of universal scaling in the kelps and the presence of optimized transport anatomy in M. pyrifera raise important questions about the evolution of optimization and the possible competitive advantage conferred by optimized vascular systems to multicellular phyla. © 2015 The Author(s).
Optimization of Particle-in-Cell Codes on RISC Processors
NASA Technical Reports Server (NTRS)
Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.
1996-01-01
General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used in massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
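One such data reorganization is to sort the particle arrays by grid cell so that particles processed together are contiguous in memory. The sketch below illustrates the idea in Python/NumPy (the original strategies target Fortran codes; box and grid sizes here are illustrative):

    import numpy as np

    def reorder_particles(pos, vel, grid_n, box):
        # Sort particles by cell index so the deposit/gather loops touch
        # contiguous memory, improving cache reuse.
        cell = np.floor(pos / box * grid_n).astype(int).clip(0, grid_n - 1)
        order = np.argsort(cell, kind="stable")
        return pos[order], vel[order], cell[order]

    rng = np.random.default_rng(0)
    pos = rng.random(100_000) * 10.0        # 1D positions in a box of length 10
    vel = rng.standard_normal(100_000)
    pos, vel, cell = reorder_particles(pos, vel, grid_n=64, box=10.0)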
Larsson, Marie C; Karlsson, Ewa; Woksepp, Hanna; Frölander, Kerstin; Mårtensson, Agneta; Rashed, Foad; Annika, Wistedt; Schön, Thomas; Serrander, Lena
2014-03-19
The aim of this study was to evaluate diagnostic tests in order to introduce a diagnostic strategy for identifying the most common gram-positive bacteria (pneumococci, enterococci, β-haemolytic streptococci and S. aureus) found in blood cultures within 6 hours after growth is signalled. The tube coagulase test was optimized and several latex agglutination tests were compared and evaluated before an 11-month validation period was carried out on consecutive positive blood culture patient samples from Kalmar County Hospital, Sweden. During the validation period, 150 (91%) of a total of 166 gram-positive cocci (119 in clusters, 45 in chains or pairs and 2 of undefined morphology) were correctly identified as S. aureus, CoNS, pneumococci, enterococci, or group A (GAS), group B (GBS) or group G streptococci (GGS) within 6 hours, with a minimal increase in workload and costs. The remaining samples (9%) were correctly identified the next day. No samples were incorrectly grouped with this diagnostic strategy and no patient was put at risk by early reporting. A simple strategy gives reliable and cost-effective reporting of >90% of the most common gram-positive cocci within 6 hours after a blood culture becomes positive. The high specificity of the tests used makes preliminary reports reliable. The reports can be used to indicate the focus of infection and, not least, to support faster administration of proper antimicrobial treatment for patients with serious bacterial infections.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs. rich data. It is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
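As a concrete illustration of what such optimal design tools compute, the sketch below grid-searches for a D-optimal pair of sampling times in a one-compartment IV-bolus model, maximizing the determinant of the Fisher information for the parameters (V, k); the model, parameter values, and error level are assumptions chosen for illustration:

    import numpy as np
    from itertools import combinations

    def fim(times, dose=100.0, V=10.0, k=0.2, sigma=0.5):
        # Fisher information for (V, k) in C(t) = (dose/V) * exp(-k*t),
        # assuming additive measurement error with standard deviation sigma.
        t = np.asarray(times, float)
        C = dose / V * np.exp(-k * t)
        J = np.stack([-C / V, -t * C], axis=1)   # sensitivities dC/dV, dC/dk
        return J.T @ J / sigma**2

    grid = np.linspace(0.25, 24.0, 96)
    best = max(combinations(grid, 2), key=lambda ts: np.linalg.det(fim(ts)))
    print(best)  # D-optimal pair of sample times under these assumptions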
2011-09-01
Fbg αC 242-424. DNA for expressing Fbg αC 242-424 and FXIII A2 in E. coli has been obtained from collaborators. Strategies for expressing and... the coming months. It will be important to verify that the expressed FXIII A2 is active and that the Fbg αC 242-424 can serve as an effective... optimized. For the larger substrate Fbg αC 242-424, we will need to proteolytically digest the quenched kinetic samples with chymotrypsin prior to
2016-12-01
gastrocnemius muscles. 4. Place an interlocking intramedullary nail using a custom spacer to maintain a 5-cm defect length. 5. Place a pre-molded 5 cm long x... 2 cm diameter PMMA spacer around the nail in the defect. 6. Irrigate the wound with normal (0.9%) saline and close the wound. The Treatment... PMMA spacer using a "bomb bay door opening". 4. Remove the spacer without damaging the membrane or nail. 5. Collect appropriate IM samples as
Ant Navigation: Fractional Use of the Home Vector
Cheung, Allen; Hiby, Lex; Narendra, Ajay
2012-01-01
Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common place for gathering of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy well known as path integration (PI). This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. We here ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of navigational information content in their environment. PMID:23209744
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. On the automatic optimization platform, minimal resistance is taken as the optimization goal, the wet surface area as the constraint condition, and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. In the CFD calculation, the RANS equations with the standard turbulence model are used for the flow simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of searching for solutions.
Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A
2006-01-01
Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors. PMID:28934163
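A minimal sketch of one bounded-staleness synchronization rule in a parameter-server setting is shown below; it illustrates the general idea of trading parameter freshness against communication blocking, and is not the paper's DSP algorithm (all names and constants are assumptions):

    import numpy as np

    class BoundedStalenessServer:
        def __init__(self, dim, staleness_bound=3, lr=0.01):
            self.params = np.zeros(dim)
            self.clock = {}            # per-worker iteration counter
            self.bound = staleness_bound
            self.lr = lr

        def push(self, worker, grad):
            # Apply a worker's gradient and advance its clock.
            self.params -= self.lr * grad
            self.clock[worker] = self.clock.get(worker, 0) + 1

        def may_proceed(self, worker):
            # A fast worker waits once it leads the slowest worker by
            # `bound` steps, limiting how stale any worker's view can get.
            slowest = min(self.clock.values(), default=0)
            return self.clock.get(worker, 0) - slowest < self.bound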
Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael
2017-08-08
Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing have allowed virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.
Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design
NASA Astrophysics Data System (ADS)
Leube, P. C.; Geiges, A.; Nowak, W.
2012-02-01
Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.
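The core preposterior idea, marginalizing a utility measure over the yet-unknown data values using bootstrap-filter weights, can be sketched in a few lines. In the sketch below, the parameter-to-prediction and parameter-to-observation maps are illustrative stand-ins, not PreDIA itself:

    import numpy as np

    rng = np.random.default_rng(42)

    # Prior ensemble of an uncertain parameter K; z is the prediction of
    # interest and h is what the proposed sensor would observe.
    K = rng.normal(0.0, 1.0, 20_000)
    z = K**2
    h = 0.8 * K
    sigma = 0.3            # measurement noise standard deviation

    def posterior_var(y_obs):
        # Bootstrap-filter weighting of the prior ensemble for one data realization.
        w = np.exp(-0.5 * ((y_obs - h) / sigma) ** 2)
        w /= w.sum()
        mean = np.sum(w * z)
        return np.sum(w * (z - mean) ** 2)

    # Marginalize over the unknown future data by simulating it from the prior.
    y_sim = h[rng.integers(0, K.size, 200)] + rng.normal(0, sigma, 200)
    expected_post_var = np.mean([posterior_var(y) for y in y_sim])
    print(z.var(), expected_post_var)  # prior vs. expected posterior variance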
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Folkerts, M; Jiang, S
Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian TrueBeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control-point by control-point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose at build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10FFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ~2.8 for a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
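The inverse sampling described above amounts to drawing each particle's control-point index by inverse-transform sampling from the distribution of MLC leaf-open durations. A simplified sketch follows; in practice the open duration depends on where each particle crosses the iso-center plane, which is ignored here, and the durations are invented:

    import numpy as np

    def sample_control_points(open_time, n_particles, rng=None):
        # Draw a control-point index per source particle, weighted by the
        # leaf-open duration, so no particles are wasted on closed leaves.
        rng = rng or np.random.default_rng()
        cdf = np.cumsum(open_time) / np.sum(open_time)
        return np.searchsorted(cdf, rng.random(n_particles), side="right")

    open_time = np.array([0.0, 0.5, 2.0, 1.5, 0.0])  # illustrative durations
    idx = sample_control_points(open_time, n_particles=10)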
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong
2014-01-01
For building a new iris template, this paper proposes a strategy to fuse different portions of the iris, based on a machine learning method to evaluate local iris quality. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and each track is then evaluated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that a partial iris image cannot completely replace the entire iris image for an iris recognition system in several ways. (2) The proposed quality evaluation algorithm is self-adaptive, and it can automatically optimize the parameters according to the iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of the iris recognition system.
An in vivo library-versus-library selection of optimized protein-protein interactions.
Pelletier, J N; Arndt, K M; Plückthun, A; Michnick, S W
1999-07-01
We describe a rapid and efficient in vivo library-versus-library screening strategy for identifying optimally interacting pairs of heterodimerizing polypeptides. Two leucine zipper libraries, semi-randomized at the positions adjacent to the hydrophobic core, were genetically fused to either one of two designed fragments of the enzyme murine dihydrofolate reductase (mDHFR), and cotransformed into Escherichia coli. Interaction between the library polypeptides reconstituted enzymatic activity of mDHFR, allowing bacterial growth. Analysis of the resulting colonies revealed important biases in the zipper sequences relative to the original libraries, which are consistent with selection for stable, heterodimerizing pairs. Using more weakly associating mDHFR fragments, we increased the stringency of selection. We enriched the best-performing leucine zipper pairs by multiple passaging of the pooled, selected colonies in liquid culture, as the best pairs allowed for better bacterial propagation. This competitive growth allowed small differences among the pairs to be amplified, and different sequence positions were enriched at different rates. We applied these selection processes to a library-versus-library sample of 2.0 × 10^6 combinations and selected a novel leucine zipper pair that may be appropriate for use in further in vivo heterodimerization strategies.
Franck, Julien; Arafah, Karim; Barnes, Alan; Wisztorski, Maxence; Salzet, Michel; Fournier, Isabelle
2009-10-01
Nowadays, matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) is a powerful technique for obtaining the distribution of endogenous and exogenous molecules within tissue sections. It can thus be used to study the evolution of molecules across different physiological stages in order to find markers or gain knowledge of signaling pathways. To provide valuable information, sample preparation must be carefully controlled to avoid any delocalization of the molecules of interest within the tissue during this step. Currently, two strategies can be used to deposit chemicals, such as the MALDI matrix, onto the tissue, both involving the generation of microdroplets that are deposited onto the surface. The first strategy involves microspraying of solutions. Here, we have been interested in the development of a microspotting strategy, where nanodroplets of solvent are ejected by a piezoelectric device to generate microspots at the tissue level. Such systems allow one to precisely control sample preparation by creating an array of spots. In terms of matrix crystallization, microspotting of MALDI matrix is hardly compatible with the results obtained by classical (pipetting) methods. We have therefore synthesized and studied new solid ionic matrices in order to obtain high analytical performance with such a deposition system. These developments have enabled optimization of the preparation time because of the high stability of the printing generated under these conditions. We have also studied microspotting for performing on-tissue digestion, in order to enable identification of proteins or to work from formalin-fixed and paraffin-embedded (FFPE) tissue samples. We have shown that microspotting is an interesting approach for on-tissue digestion. Peptides, proteins, and lipids were studied under this specific preparation strategy to improve imaging performance for these classes of molecules.
Aristizábal, Luis F; Bustillo, Alex E; Arthurs, Steven P
2016-02-03
The coffee berry borer (CBB), Hypothenemus hampei Ferrari (Coleoptera: Curculionidae: Scolytinae), is the primary arthropod pest of coffee plantations worldwide. Since its detection in Hawaii (September 2010), coffee growers have been facing financial losses due to the reduced quality of coffee yields. Several control strategies, including cultural practices, biological control agents (parasitoids), chemical and microbial insecticides (entomopathogenic fungi), and a range of post-harvest sanitation practices, have been used to manage CBB around the world. In addition, sampling methods, including the use of alcohol-based traps for monitoring CBB populations, have been implemented in some coffee-producing countries in Latin America. It is currently unclear which combination of CBB control strategies is optimal under the economic, environmental, and sociocultural conditions of Hawaii. This review discusses components of an integrated pest management (IPM) program for CBB. We focus on practical approaches to provide guidance to coffee farmers in Hawaii. Experiences of IPM of CBB learned from Latin America over the past 25 years may be relevant for establishing control strategies that fit Hawaiian coffee farmers' conditions.
Balancing on tightropes and slacklines
Paoletti, P.; Mahadevan, L.
2012-01-01
Balancing on a tightrope or a slackline is an example of a neuromechanical task where the whole body both drives and responds to the dynamics of the external environment, often on multiple timescales. Motivated by a range of neurophysiological observations, here we formulate a minimal model for this system and use optimal control theory to design a strategy for maintaining an upright position. Our analysis of the open and closed-loop dynamics shows the existence of an optimal rope sag where balancing requires minimal effort, consistent with qualitative observations and suggestive of strategies for optimizing balancing performance while standing and walking. Our consideration of the effects of nonlinearities, potential parameter coupling and delays on the overall performance shows that although these factors change the results quantitatively, the existence of an optimal strategy persists. PMID:22513724
A simple approach to optimal control of invasive species.
Hastings, Alan; Hall, Richard J; Taylor, Caz M
2006-12-01
The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
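A minimal sketch of such a linear program using scipy is shown below: a removal budget is allocated across stage classes to maximize the reduction in next-year growth. The reproductive values, removal costs, and abundances are invented for illustration:

    import numpy as np
    from scipy.optimize import linprog

    v = np.array([0.2, 1.0, 3.5])        # growth contribution per removed plant
    c = np.array([1.0, 2.0, 8.0])        # removal cost per plant, by stage
    N = np.array([500.0, 300.0, 100.0])  # current abundance caps
    budget = 400.0

    # Maximize v @ x  ==  minimize -v @ x, subject to the budget and caps.
    res = linprog(-v, A_ub=c[None, :], b_ub=[budget],
                  bounds=list(zip(np.zeros(3), N)))
    print(res.x)  # the LP structure concentrates the budget on one stage class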
Truss topology optimization with simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.
1992-01-01
Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data, without invoking detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
Flexible modulation of risk attitude during decision-making under quota.
Fujimoto, Atsushi; Takahashi, Hidehiko
2016-10-01
Risk attitude is often regarded as an intrinsic parameter of individual personality. However, ethological studies have reported state-dependent strategy optimization irrespective of individual preference. To reconcile these two contrasting literatures, we developed a novel gambling task that dynamically manipulated the quota severity (the outcome required to clear the task) over a course of choice trials, and conducted a task-fMRI study in human participants. The participants showed their individual risk preferences when they had no quota constraint ('individual-preference mode'), while they adopted a state-dependent optimal strategy when they needed to achieve a quota ('strategy-optimization mode'). fMRI analyses illustrated that the interplay among prefrontal areas and salience-network areas reflected the quota severity and the utilization of the optimal strategy, shedding light on the neural substrates of quota-dependent risk attitude. Our results demonstrate the complex nature of risk-sensitive decision-making and may provide a new perspective for understanding problematic risky behaviors in humans. Copyright © 2016 Elsevier Inc. All rights reserved.
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine, to reduce the impact of raw material quality variation on the drug. The ethanol precipitation process of Danhong injection was taken as an application case of the method. Box-Behnken design of experiments was conducted, and mathematical models relating the attributes of the concentrate, the process parameters, and the quality of the supernatants produced were established. An optimization model was then built for calculating the best process parameters based on the attributes of the concentrate. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process parameter optimization can effectively control the quality of the supernatants. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1984-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space, in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game, i.e., a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise, the optimal strategies are computed from a resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate or delta combat games are also formulated, leading to approximate or delta-optimal strategies. The turret game is used to illustrate combat games. This game is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
Mutumba, Massy; Bauermeister, José A; Musiime, Victor; Byaruhanga, Judith; Francis, Kiweewa; Snow, Rachel C; Tsai, Alexander C
2015-02-01
Although more than 90% of youth perinatally infected with HIV live in sub-Saharan Africa, little is known about the psychosocial factors that impact their wellbeing, or how these youth cope with these challenges. The purpose of this study was to identify the psychosocial challenges and coping strategies among perinatal HIV-infected adolescents in Uganda. In-depth interviews were conducted with a purposive sample of 38 HIV-infected adolescents aged 12-19 years at a large HIV treatment center in Kampala. Data were analyzed thematically to identify themes and domains related to stressors and specific coping strategies. Psychosocial challenges included stigma/discrimination, relationship challenges such as HIV status disclosure, and medication difficulties. Coping strategies included medication adherence, concealment or limited disclosure of HIV status, treatment optimism, social support, rationalizing, social comparison, spirituality/religiosity, avoidance, and distraction. Age and gender differences also emerged: younger participants generally lacked specific coping strategies; compared to females, male adolescents reported greater use of avoidance/distraction techniques. Findings underscore the need to address stigma within homes and schools, and to equip adolescents with the comprehensive knowledge and skills to address their varied challenges.
Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.
Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas
2010-06-25
Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of using such immunoaffinity-based approaches is the availability of the appropriate peptide specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by the formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes require the use of heuristics.
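For larger instances, the classic heuristic is greedy set cover, which repeatedly picks the epitope covering the most still-uncovered targets. A minimal sketch with toy data follows (the terminal sequences and their protein mapping are invented):

    def greedy_set_cover(targets, epitopes):
        # epitopes: maps a terminal epitope to the set of target proteins
        # whose tryptic peptides carry it.
        uncovered, chosen = set(targets), []
        while uncovered:
            best = max(epitopes, key=lambda e: len(epitopes[e] & uncovered))
            if not epitopes[best] & uncovered:
                raise ValueError("some targets cannot be covered")
            chosen.append(best)
            uncovered -= epitopes[best]
        return chosen

    epitopes = {"ESDK": {"P1", "P2"}, "VATR": {"P2", "P3", "P4"}, "LLGK": {"P4"}}
    print(greedy_set_cover({"P1", "P2", "P3", "P4"}, epitopes))  # ['VATR', 'ESDK']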
Gwadz, Marya Viorst; Collins, Linda M; Cleland, Charles M; Leonard, Noelle R; Wilton, Leo; Gandhi, Monica; Scott Braithwaite, R; Perlman, David C; Kutnick, Alexandra; Ritchie, Amanda S
2017-05-04
More than half of persons living with HIV (PLWH) in the United States are insufficiently engaged in HIV primary care and not taking antiretroviral therapy (ART), mainly African Americans/Blacks and Hispanics. In the proposed project, a potent and innovative research methodology, the multiphase optimization strategy (MOST), will be employed to develop a highly efficacious, efficient, scalable, and cost-effective intervention to increase engagement along the HIV care continuum. Whereas randomized controlled trials are valuable for evaluating the efficacy of multi-component interventions as a package, they are not designed to evaluate which specific components contribute to efficacy. MOST, a pioneering, engineering-inspired framework, addresses this problem through highly efficient randomized experimentation to assess the performance of individual intervention components and their interactions. We propose to use MOST to engineer an intervention to increase engagement along the HIV care continuum for African American/Black and Hispanic PLWH not well engaged in care and not taking ART. Further, the intervention will be optimized for cost-effectiveness. A similar set of multi-level factors impede both HIV care and ART initiation for African American/Black and Hispanic PLWH, primary among them individual- (e.g., substance use, distrust, fear), social- (e.g., stigma), and structural-level barriers (e.g., difficulties accessing ancillary services). Guided by a multi-level social cognitive theory, and using the motivational interviewing approach, the study will evaluate five distinct culturally based intervention components (i.e., counseling sessions, pre-adherence preparation, support groups, peer mentorship, and patient navigation), each designed to address a specific barrier to HIV care and ART initiation. These components are well-grounded in the empirical literature and were found acceptable, feasible, and promising with respect to efficacy in a preliminary study. Study aims are: 1) using a highly efficient fractional factorial experimental design, identify which of five intervention components contribute meaningfully to improvement in HIV viral suppression, and secondary outcomes of ART adherence and engagement in HIV primary care; 2) identify mediators and moderators of intervention component efficacy; and 3) using a mathematical modeling approach, build the most cost-effective and efficient intervention package from the efficacious components. A heterogeneous sample of African American/Black and Hispanic PLWH (with respect to age, substance use, and sexual minority status) will be recruited with a proven hybrid sampling method using targeted sampling in community settings and peer recruitment (N = 512). This is the first study to apply the MOST framework in the field of HIV prevention and treatment. This innovative study will produce a culturally based HIV care continuum intervention for the nation's most vulnerable PLWH, optimized for cost-effectiveness, and with exceptional levels of efficacy, efficiency, and scalability. ClinicalTrials.gov, NCT02801747 , Registered June 8, 2016.
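As an illustration of the kind of efficient experimentation MOST relies on, a 2^(5-1) fractional factorial design for five intervention components can be generated by aliasing the fifth factor with the four-factor interaction (defining relation I = ABCDE, resolution V), so 16 of the 32 possible on/off conditions suffice; the component coding below is illustrative:

    import itertools
    import numpy as np

    # Full factorial in the first four components, fifth column = product.
    base = np.array(list(itertools.product([-1, 1], repeat=4)))
    design = np.hstack([base, base.prod(axis=1, keepdims=True)])

    components = ["counseling", "pre-adherence", "support groups",
                  "peer mentorship", "navigation"]
    for row in design[:4]:   # 16 conditions in total
        print({c: ("on" if x == 1 else "off") for c, x in zip(components, row)})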
Decision making with regard to antiviral intervention during an influenza pandemic.
Shim, Eunha; Chapman, Gretchen B; Galvani, Alison P
2010-01-01
Antiviral coverage is defined by the proportion of the population that takes antiviral prophylaxis or treatment. High coverage of an antiviral drug has epidemiological and evolutionary repercussions: antivirals select for drug resistance within the population, and individuals may experience adverse effects. To determine optimal antiviral coverage in the context of an influenza outbreak, we compared 2 perspectives: 1) the individual level (the Nash perspective) and 2) the population level (the utilitarian perspective). We developed an epidemiological game-theoretic model of an influenza pandemic. The data sources were published literature and a national survey. The target population was the US population, the time horizon was 6 months, and the perspectives were those of individuals and of the population overall. The interventions were antiviral prophylaxis and treatment; the outcome measures were the optimal coverage of antivirals in an influenza pandemic. At current antiviral pricing, the optimal Nash strategy is 0% coverage for prophylaxis and 30% coverage for treatment, whereas the optimal utilitarian strategy is 19% coverage for prophylaxis and 100% coverage for treatment. Subsidizing prophylaxis by $440 and treatment by $85 would bring the Nash and utilitarian strategies into alignment. For both prophylaxis and treatment, the optimal antiviral coverage decreases as the price of antivirals increases. Our study does not incorporate the possibility of an effective vaccine and lacks a probabilistic sensitivity analysis. Our survey also does not completely represent the US population. Because our model assumes a homogeneous population and homogeneous antiviral pricing, it does not incorporate heterogeneity of preference. The optimal antiviral coverage differs widely between the population and individual perspectives for both prophylaxis and treatment strategies; optimal population and individual strategies might be aligned through subsidization.
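The divergence between the Nash and utilitarian solutions can be reproduced in a few lines with a deliberately simplified static model (not the authors' epidemiological game-theoretic model; all prices, risks and efficacies below are placeholder values): individuals adopt prophylaxis only up to the point where it stops paying off for them personally, while the planner minimizes the average cost per person.

```python
# Toy Nash-vs-utilitarian coverage calculation; every constant is assumed.
import numpy as np

R0_RISK, EFF = 0.3, 0.7   # baseline infection risk; prophylaxis efficacy
L, P = 1000.0, 250.0      # illness cost; drug price + adverse-effect cost

def risk_unprotected(c):
    # Herd effect: an unprotected person's risk falls as coverage rises.
    return R0_RISK * (1.0 - EFF * c)

def cost_taker(c):        # expected cost of taking prophylaxis
    return P + (1.0 - EFF) * risk_unprotected(c) * L

def cost_refuser(c):      # expected cost of refusing
    return risk_unprotected(c) * L

grid = np.linspace(0.0, 1.0, 10001)
gap = cost_taker(grid) - cost_refuser(grid)

# Nash coverage: adoption grows until taking and refusing cost the same.
if gap[0] > 0:
    c_nash = 0.0                          # taking never pays: corner at 0
elif gap[-1] < 0:
    c_nash = 1.0                          # taking always pays: corner at 1
else:
    c_nash = grid[np.argmin(np.abs(gap))]  # interior indifference point

# Utilitarian coverage: minimize the average cost per person.
total = grid * cost_taker(grid) + (1.0 - grid) * cost_refuser(grid)
c_util = grid[np.argmin(total)]

print(f"Nash coverage {c_nash:.2f}, utilitarian coverage {c_util:.2f}")
# Lowering P (a subsidy) moves the Nash solution toward the utilitarian
# optimum, which is the alignment mechanism discussed in the abstract.
```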
Fully automatic characterization and data collection from crystals of biological macromolecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander
A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal, and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
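The X-ray centring step can be pictured as a mesh scan followed by a search for the position of strongest diffraction. The sketch below is schematic only, assuming a rectangular grid of per-image diffraction scores (e.g., Bragg spot counts) and hypothetical motor coordinates; the actual MASSIF-1 routines are more sophisticated.

```python
# Schematic X-ray mesh-scan centring: score a short diffraction image at
# each grid node, then move to the node with the strongest signal.
import numpy as np

def centre_on_best_diffraction(scores, y_positions, z_positions):
    """scores: 2D array of per-image diffraction scores over the mesh.
    Returns the (y, z) motor position with the strongest signal."""
    iy, iz = np.unravel_index(np.argmax(scores), scores.shape)
    return y_positions[iy], z_positions[iz]

# Hypothetical 5x4 mesh over a 200 um x 150 um area, with fake scores.
y = np.linspace(-100, 100, 5)     # um
z = np.linspace(-75, 75, 4)       # um
scores = np.random.default_rng(0).poisson(5.0, size=(5, 4))
print(centre_on_best_diffraction(scores, y, z))
```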
[Academic burnout and selection-optimization-compensation strategy in medical students].
Chun, Kyung Hee; Park, Young Soon; Lee, Young Hwan; Kim, Seong Yong
2014-12-01
This study was conducted to examine the relationship between academic demand, academic burnout, and the selection-optimization-compensation (SOC) strategy in medical students. A total of 317 students at Yeungnam University, comprising 90 premedical course students, 114 medical course students, and 113 graduate course students, completed a survey that addressed the factors of academic burnout and the SOC strategy. We analyzed variances in burnout and SOC strategy use by group, and stepwise multiple regression analyses were conducted. There were significant differences in emotional exhaustion and cynicism between groups and by year in school. In the SOC strategy, there were no significant differences between groups except for elective selection. The second-year medical and graduate students experienced significantly greater exhaustion (p<0.001), and first-year premedical students experienced significantly higher cynicism (p<0.001). In the multiple regression analysis, the burnout subfactor of emotional exhaustion was significantly affected by academic demand (p<0.001), which explained 46% of the variance. Cynicism was significantly affected by elective selection (p<0.05), and inefficacy was significantly influenced by optimization (p<0.001). To reduce academic burnout and improve adaptation in medical school, prescriptive strategies and preventive support should be implemented. Longitudinal and qualitative studies on burnout must also be conducted.
NASA Astrophysics Data System (ADS)
Waeldele, F.
1983-01-01
The influence of sample shape deviations on measurement uncertainties and on the optimization of computer-aided coordinate measurement was investigated for a circle and a cylinder. Using the complete error propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points should be equidistantly distributed and that, for a cylindrical body, a measuring-point distribution along a cross section is better than one along a helical line. The theoretically obtained expressions for calculating the uncertainties prove to be a good basis for estimation, whereas simple error theory is not satisfactory for this purpose. The complete statistical data analysis helps to avoid serious measurement errors and to adjust the number of measuring points to the required measurement uncertainty.
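To make the procedure concrete, the sketch below fits a circle to coordinate-measurement points by Gauss-Newton least squares and then applies the complete error propagation law, Cov(p) = (J^T Sigma^-1 J)^-1, with a full covariance matrix Sigma that includes correlations between measurement points. The data, noise level and correlation structure are all assumed for illustration.

```python
# Least-squares circle fit with full (correlated) error propagation.
import numpy as np

def fit_circle(x, y, iters=20):
    a, b = x.mean(), y.mean()                 # initial centre guess
    R = np.hypot(x - a, y - b).mean()         # initial radius guess
    for _ in range(iters):
        dx, dy = x - a, y - b
        d = np.hypot(dx, dy)
        r = d - R                             # radial residuals
        J = np.column_stack([-dx / d, -dy / d, -np.ones_like(d)])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # Gauss-Newton step
        a, b, R = a + step[0], b + step[1], R + step[2]
    return np.array([a, b, R]), J

def param_covariance(J, Sigma):
    # Complete error propagation law: Cov(p) = (J^T Sigma^-1 J)^-1, with
    # Sigma the full covariance of the radial measurement errors,
    # including correlations between measurement points.
    W = np.linalg.inv(Sigma)
    return np.linalg.inv(J.T @ W @ J)

# Equidistant points on a nominal circle plus correlated radial noise.
rng = np.random.default_rng(1)
n = 12
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
sigma, rho = 0.01, 0.3                        # assumed noise model
Sigma = sigma**2 * rho ** np.abs(np.subtract.outer(range(n), range(n)))
noise = rng.multivariate_normal(np.zeros(n), Sigma)
x = 10 + (5 + noise) * np.cos(t)
y = -3 + (5 + noise) * np.sin(t)

p, J = fit_circle(x, y)
print("centre and radius:", p)
print("parameter std devs:", np.sqrt(np.diag(param_covariance(J, Sigma))))
```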
Utilization of coconut oil cake for the production of lipase using Bacillus coagulans VKL1.
Gowthami, Palanisamy; Muthukumar, Karuppan; Velan, Manickam
2015-01-01
The overproduction of enzymes can be achieved by manipulating the medium components. In this study, a solvent-tolerant, thermophilic, lipase-producing Bacillus coagulans strain was isolated from soil samples, and a stepwise optimization strategy was employed to increase lipase production in a coconut oil cake basal medium. In the first step, the influence of pH, temperature, carbon source, nitrogen source and inducers on lipase activity was investigated by the one-factor-at-a-time (OFAT) method. In the second step, the three significant factors identified by OFAT were optimized by a statistical approach, central composite design (CCD). The optimum values of olive oil (0.5%), Tween 80 (0.6%) and FeSO4 (0.05%) were found to be responsible for a 3.2-fold increase in lipase production.
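For readers unfamiliar with CCD, the sketch below generates the 20 runs of a standard rotatable central composite design for three factors (8 factorial corners, 6 axial points at distance alpha = (2^3)^(1/4), and 6 centre replicates). The coded-to-actual factor ranges are hypothetical, chosen only to bracket the reported optima; the paper's actual design ranges are not given here.

```python
# Generate a standard rotatable CCD for k = 3 factors; ranges are assumed.
import numpy as np
from itertools import product

factors = ["olive oil %", "Tween 80 %", "FeSO4 %"]
center = np.array([0.5, 0.6, 0.05])         # assumed centre point
half_range = np.array([0.25, 0.3, 0.025])   # assumed +/-1 coded step

k = 3
alpha = (2 ** k) ** 0.25                    # rotatable axial distance, ~1.682

corners = np.array(list(product((-1.0, 1.0), repeat=k)))  # 8 factorial runs
axial = np.vstack([v for i in range(k)
                   for v in (np.eye(k)[i] * alpha, -np.eye(k)[i] * alpha)])
centers = np.zeros((6, k))                  # 6 replicated centre runs
coded = np.vstack([corners, axial, centers])

actual = center + coded * half_range        # map coded levels to real units
for row in actual:
    print(", ".join(f"{n}={v:.3f}" for n, v in zip(factors, row)))
```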
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
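The core of the SDARE scheme, re-solving the Riccati equation at every sample time for a state-dependent linearization, fits in a short loop. The sketch below applies the idea to a toy inverted pendulum rather than the six-degree-of-freedom spacecraft model, using SciPy's continuous-time Riccati solver; all system matrices and weights are illustrative.

```python
# SDARE/LQR sketch on a toy inverted pendulum: re-linearize and recompute
# the LQR gain at every sample time. Constants are placeholder values.
import numpy as np
from scipy.linalg import solve_continuous_are

g_over_l, damping = 9.81, 0.1
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

def A_of_x(x):
    theta = x[0]
    # State-dependent-coefficient form: sin(theta) = (sin(theta)/theta)*theta
    sinc = np.sinc(theta / np.pi)        # numpy's sinc is sin(pi t)/(pi t)
    return np.array([[0.0, 1.0],
                     [g_over_l * sinc, -damping]])

x = np.array([1.0, 0.0])                 # start at 1 rad from upright
dt = 0.01
for _ in range(500):                     # 5 s of simulated regulation
    A = A_of_x(x)
    P = solve_continuous_are(A, B, Q, R)  # Riccati solve at each sample
    K = np.linalg.solve(R, B.T @ P)       # K = R^-1 B^T P
    u = -K @ x
    x = x + dt * (A @ x + B @ u)          # explicit Euler step
print("final state:", x)
```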
Optimal strategies for throwing accurately
NASA Astrophysics Data System (ADS)
Venkadesan, M.; Mahadevan, L.
2017-04-01
The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error.
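The essence of the error-propagation argument can be reproduced for a point target: constrain the launch parameters to hit a target at distance d, propagate small speed and angle errors through the parabolic range formula to first order, and choose the angle that minimizes the landing scatter. The noise magnitudes below are assumed, and the sketch ignores release height and air drag, as the analysis itself does.

```python
# First-order error propagation through the parabolic range formula
# R(v, theta) = v^2 sin(2 theta) / g, for a point target at distance d.
import numpy as np

g, d = 9.81, 10.0                   # target distance in metres
sigma_v, sigma_th = 0.05, 0.01      # assumed launch-speed/angle std devs

thetas = np.radians(np.linspace(5, 85, 401))
v = np.sqrt(d * g / np.sin(2 * thetas))     # speeds that hit the target

dR_dv = 2 * v * np.sin(2 * thetas) / g      # range sensitivities
dR_dth = 2 * v**2 * np.cos(2 * thetas) / g
sigma_R = np.sqrt((dR_dv * sigma_v)**2 + (dR_dth * sigma_th)**2)

best = thetas[np.argmin(sigma_R)]
print(f"most accurate launch angle: {np.degrees(best):.1f} deg")
# dR/dtheta vanishes at 45 deg, rejecting angle noise there, while speed
# noise favours flatter, faster throws; the optimum trades the two off.
```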
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study addresses the maintenance of pipeline transportation systems. The article identifies two classes of technical-and-economic indices that are used to select an optimal pipeline transportation system structure. It then describes various system maintenance strategies and the criteria for selecting among them. In practice, these maintenance strategies prove insufficiently effective because maintenance intervals are set to non-optimal values. This problem can be addressed by an adaptive maintenance system that includes a reliability improvement algorithm for the pipeline transportation system, in particular a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal intervals between verification inspections of technical systems are considered.
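One standard way to see why interval choice matters is the classic age-replacement calculation sketched below (an illustration of the underlying trade-off, not the adaptive algorithm proposed in the article): with an assumed Weibull degradation law, the long-run cost rate is minimized over the replacement interval, and intervals far from the optimum are substantially more expensive.

```python
# Classic age-replacement interval optimization under an assumed Weibull
# lifetime law; all parameters are placeholder values.
import numpy as np

eta, k = 5.0, 2.5                  # Weibull scale (years) and shape
c_planned, c_failure = 1.0, 8.0    # relative costs of planned/failure repair

def cost_rate(T, n=2000):
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** k)               # survival function
    cycle_cost = c_planned * R[-1] + c_failure * (1 - R[-1])
    cycle_length = np.trapz(R, t)             # mean time to renewal
    return cycle_cost / cycle_length          # long-run cost per year

Ts = np.linspace(0.5, 10.0, 200)
rates = [cost_rate(T) for T in Ts]
print(f"optimal replacement interval: {Ts[np.argmin(rates)]:.2f} years")
```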
Optimal tyre usage for a Formula One car
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Limebeer, D. J. N.
2016-10-01
Variations in track temperature, surface conditions and layout have led tyre manufacturers to produce a range of rubber compounds for race events. Each compound has unique friction and durability characteristics. Efficient tyre management over a full race distance is a crucial component of a competitive race strategy. A minimum lap time optimal control calculation and a thermodynamic tyre wear model are used to establish optimal tyre warming and tyre usage strategies. Lap time sensitivities demonstrate that relatively small changes in control strategy can lead to significant reductions in the associated wear metrics. The illustrated methodology shows how vehicle setup parameters can be optimised for minimum tyre usage.
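A heavily simplified sketch of the thermodynamic wear coupling is given below: frictional slip power heats the tread, heat convects to the environment, and the wear rate grows with temperature through an assumed Arrhenius-type law. Every constant and the slip-power trace are placeholders, not the paper's identified tyre parameters; the point is only that the integrated wear metric is sensitive to small changes in the control (slip) history.

```python
# Toy thermodynamic tyre wear model over one lap; all values are assumed.
import numpy as np

dt = 0.1
t = np.arange(0, 90, dt)                        # one 90 s lap
slip_power = 4000 + 3000 * np.sin(0.5 * t)**2   # W, assumed slip-power trace

T_amb, C_th, h = 300.0, 9000.0, 60.0            # K, J/K, W/K
k0, Ea_over_R = 1e-3, 3000.0                    # Arrhenius wear constants

T, wear = T_amb, 0.0
for q in slip_power:
    T += dt * (q - h * (T - T_amb)) / C_th      # tread energy balance
    wear += dt * k0 * q * np.exp(-Ea_over_R / T)  # temperature-driven wear
print(f"tread temperature {T:.0f} K, integrated wear metric {wear:.4f}")
```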
Optimal Consumption in a Brownian Model with Absorption and Finite Time Horizon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandits, Peter, E-mail: pgrand@fam.tuwien.ac.at
2013-04-15
We construct $\epsilon$-optimal strategies for the following control problem: maximize $E\big[\int_{[0,\tau)} e^{-\beta s}\,dC_s + e^{-\beta\tau} X_\tau\big]$, where $X_t = x + \mu t + \sigma W_t - C_t$ and $\tau := \inf\{t > 0 \mid X_t = 0\} \wedge T$. Here $T > 0$ is a fixed finite time horizon, $W_t$ is standard Brownian motion, $\mu$ and $\sigma$ are constants, and $C_t$ describes the accumulated consumption up to time $t$. It is shown that $\epsilon$-optimal strategies are given by barrier strategies with time-dependent barriers.
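A barrier strategy of the kind shown to be $\epsilon$-optimal is easy to evaluate by Monte Carlo: whenever the surplus $X_t$ rises above a time-dependent barrier $b(t)$, the excess is consumed immediately. The sketch below uses an assumed linear barrier purely for illustration; the paper's $\epsilon$-optimal barriers are not given in closed form here.

```python
# Monte Carlo value of a time-dependent barrier consumption strategy in
# the model above; the barrier shape and all parameters are assumed.
import numpy as np

mu, sigma, beta, T, x0 = 0.5, 1.0, 0.1, 5.0, 1.0
dt, n_paths = 1e-2, 2000
rng = np.random.default_rng(7)

def barrier(t):
    return 0.5 + 0.3 * (T - t)       # hypothetical barrier, shrinking to T

values = np.zeros(n_paths)
for i in range(n_paths):
    x, t, disc_consumption = x0, 0.0, 0.0
    while t < T and x > 0.0:
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        excess = x - barrier(t)
        if excess > 0.0:             # consume the surplus above the barrier
            disc_consumption += np.exp(-beta * t) * excess
            x -= excess
        t += dt
    values[i] = disc_consumption + np.exp(-beta * t) * max(x, 0.0)

err = values.std() / np.sqrt(n_paths)
print(f"estimated value: {values.mean():.3f} +/- {err:.3f}")
```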