Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (nine different sampling times in total) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Regardless of design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX), and the expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in CL/F estimation. Since the MD required only five sampling times compared to the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach, including PBPK simulations, population PK modelling and multiresponse optimal design, allowed, without any in vivo data, the design of a clinical trial using sparse sampling that is capable of estimating CL/F of the CYP3A4 substrate and its potential inhibitor when co-administered.
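The optimization step described above, maximizing the determinant of the Fisher information matrix over candidate sampling times, can be sketched for a single-subject, fixed-effects case. This is a deliberate simplification of the population FIM that PopDes computes; the one-compartment oral model and all parameter values below are hypothetical:

```python
import numpy as np
from itertools import combinations

# Hypothetical one-compartment oral model (not the paper's PBPK model)
def conc(t, ka, ke, V, D=100.0):
    return D * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta, h=1e-5):
    # Fisher information for an additive-error model (sigma fixed at 1):
    # J^T J, with J the sensitivity of concentrations to parameters
    theta = np.asarray(theta, dtype=float)
    t = np.asarray(times, dtype=float)
    J = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        J[:, j] = (conc(t, *tp) - conc(t, *tm)) / (2.0 * h)  # central difference
    return J.T @ J

theta = (1.2, 0.15, 30.0)                 # ka (1/h), ke (1/h), V (L): assumed
candidates = (0.5, 1, 2, 4, 6, 8, 12, 24)  # feasible clinic visit times (h)
# D-optimality: pick the 4-point subset maximizing det(FIM)
best = max(combinations(candidates, 4),
           key=lambda ts: np.linalg.det(fim(ts, theta)))
print("D-optimal 4-point design:", best)
```

A population version would add random-effect and residual-error terms to the information matrix, which is what tools like PopDes handle.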
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs. rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
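One way to picture the MCMC idea (not the authors' exact algorithm): treat design efficiency as an unnormalized density over sampling times, run a Metropolis chain over candidate designs, and read sampling windows off the marginal quantiles. The mono-exponential model, the efficiency exponent, and all tuning constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mono-exponential model y = A*exp(-k*t); a 2-point design is scored by the
# determinant of the least-squares information matrix (D-criterion)
A, k = 10.0, 0.3
def det_fim(t):
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    return np.linalg.det(J.T @ J)

# Metropolis walk over designs: designs are visited with probability rising
# steeply in their efficiency (5th power sharpens the distribution), so the
# chain concentrates around the optimal points
t, chain = np.array([0.1, 1.0 / k]), []
for _ in range(20000):
    prop = np.clip(t + rng.normal(0.0, 0.3, size=2), 0.05, 24.0)
    ratio = (det_fim(prop) / max(det_fim(t), 1e-300)) ** 5
    if rng.random() < ratio:
        t = prop
    chain.append(t.copy())

S = np.sort(np.array(chain)[5000:], axis=1)   # discard burn-in, order times
windows = [np.quantile(S[:, i], [0.05, 0.95]) for i in range(2)]
for i, (lo, hi) in enumerate(windows):
    print(f"window for sample {i + 1}: [{lo:.2f}, {hi:.2f}] h")
```

The 5th-95th percentile interval of each marginal serves as a time window around the corresponding optimal design point.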
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O
2009-06-01
Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of more informative and/or less laborious study design of the insulin modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin modified IVGTT were evaluated; (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples/subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvement can be made to the design of the insulin modified IVGTT and that the most important design factor was the placement of sample times followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the task as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
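A minimal sketch of a quantum-inspired evolutionary search for informative time points. The bit-amplitude representation and rotation update are generic QEA ingredients, not the paper's exact algorithm, and the mono-exponential fitness model is a stand-in for a pathway model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate time grid; goal: pick n_pick points minimizing parameter variance
# for y = A*exp(-k*t) (D-criterion: maximize det of the information matrix)
grid = np.linspace(0.25, 12.0, 16)
A, k, n_pick = 5.0, 0.4, 4            # assumed values for illustration

def fitness(mask):
    if mask.sum() != n_pick:          # infeasible subset size
        return 0.0
    t = grid[mask]
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    return np.linalg.det(J.T @ J)

# Quantum-inspired EA: each bit is represented by an angle whose squared sine
# is the probability of observing a 1; a rotation gate pulls the angles
# toward the best bit string found so far
theta = np.full(len(grid), np.pi / 4)  # equal superposition
best_mask, best_fit = None, 0.0
for _ in range(300):
    pop = rng.random((20, len(grid))) < np.sin(theta) ** 2   # "observe"
    fits = np.array([fitness(ind) for ind in pop])
    i = int(fits.argmax())
    if fits[i] > best_fit:
        best_fit, best_mask = fits[i], pop[i]
    if best_mask is not None:
        theta += 0.05 * np.where(best_mask, 1.0, -1.0)       # rotation gate
        theta = np.clip(theta, 0.05, np.pi / 2 - 0.05)

print("selected times:", grid[best_mask])
```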
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
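The allocation rule, time in proportion to noise level, follows from minimizing the variance of the estimated difference under a fixed time budget: var = s1²/t1 + s2²/t2 with t1 + t2 = T is minimized at t_i proportional to s_i. A quick numerical check with made-up noise levels:

```python
import numpy as np

# Two sources with noise standard deviations s1, s2 and time budget T
s1, s2, T = 1.0, 2.0, 10.0
t1 = T * s1 / (s1 + s2)   # closed-form optimum: time proportional to noise sd
t2 = T - t1

# Brute-force check over a grid of allocations
g = np.linspace(0.1, T - 0.1, 9999)
var = s1**2 / g + s2**2 / (T - g)
print(t1, g[var.argmin()])   # both near 3.33
```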
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
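The two-point rule can be computed directly once a plasma model is available: take the earliest time where the concentration reaches the LOQ, then find the matching concentration on the declining limb. A sketch with a hypothetical one-compartment oral profile (SciPy's brentq does the root-finding):

```python
import numpy as np
from scipy.optimize import brentq

# One-compartment oral plasma profile; all parameter values are hypothetical
ka, ke, V, D, LOQ = 1.5, 0.2, 20.0, 100.0, 0.5

def C(t):
    return D * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t_peak = np.log(ka / ke) / (ka - ke)                  # time of Cmax
# early sample: earliest time with C(t) >= LOQ (on the rising limb)
t_early = brentq(lambda t: C(t) - LOQ, 1e-6, t_peak)
# late sample: the time on the declining limb with the same concentration
t_late = brentq(lambda t: C(t) - C(t_early), t_peak, 200.0)
print(f"early sample ~{t_early:.2f} h, late sample ~{t_late:.2f} h")
```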
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, it is necessary to apply multi-composition gradient elution for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity of samples can be effectively adjusted by using a ternary mobile phase. For the optimization of these parameters, the retention equation of each solute must first be obtained. Traditionally, several isocratic experiments are used to get the retention equation of a solute. However, this is time consuming, especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to the hierarchical chromatography response function, which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute. For a ternary mobile phase, only four linear gradient runs are needed to get the coefficients of the retention equation. Then the retention times of solutes under an arbitrary mobile phase composition can be predicted. The initial optimal mobile phase composition is obtained by resolution mapping for all of the solutes. A hierarchical chromatography response function is used to evaluate the separation efficiencies and search for the optimal elution conditions.
In subsequent optimization, the migration distance of each solute in the column is used to decide the mobile phase composition and duration of the later steps until all the solutes are eluted; thus the first stepwise gradient elution conditions are predicted. If the resolution of the samples under the predicted optimal separation conditions is satisfactory, the optimization procedure is stopped; otherwise, the coefficients of the retention equations are adjusted according to the experimental results under the previously predicted elution conditions, and new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. With the proposed method, satisfactory separation conditions can normally be found after only six experiments. In comparison with the traditional optimization method, the time needed to finish the optimization procedure can be greatly reduced. The method has been validated by its application to the separation of several sample types, such as amino acid derivatives and aromatic amines, for which satisfactory separations were obtained at the predicted resolution.
Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir
2016-07-15
Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins, ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions for the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It can be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
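The workflow, run a Box-Behnken design, fit a second-order polynomial, and solve for the stationary point, can be sketched in coded units. The synthetic response function and its optimum below are invented for illustration, not the paper's data:

```python
import numpy as np
from itertools import combinations

# Box-Behnken design for 3 coded factors: all +/-1 pairs on two factors with
# the third at 0, plus replicated center points
runs = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            p = [0.0, 0.0, 0.0]
            p[i], p[j] = a, b
            runs.append(p)
runs += [[0.0, 0.0, 0.0]] * 3
X = np.array(runs)                        # 15 runs

# Synthetic response with a known optimum at (0.3, -0.2, 0.5)
def y(x):
    return 5.0 - (x[0] - 0.3)**2 - 2.0 * (x[1] + 0.2)**2 - (x[2] - 0.5)**2
obs = np.array([y(x) for x in X])

# Fit the full second-order polynomial: intercept, linear, pure quadratic,
# and two-factor interaction terms
def model_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i]**2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(model_matrix(X), obs, rcond=None)
b_lin, b_quad, b_int = beta[1:4], beta[4:7], beta[7:10]
H = np.diag(b_quad)                       # symmetric quadratic-form matrix
for (i, j), bij in zip(combinations(range(3), 2), b_int):
    H[i, j] = H[j, i] = bij / 2.0
x_opt = np.linalg.solve(-2.0 * H, b_lin)  # stationary point: b_lin + 2Hx = 0
print("optimal coded settings:", np.round(x_opt, 3))
```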
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimizing the sampling scheme can reduce the number of sampling points without loss of accuracy in the investigated attribute, and may even improve it. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
The use of the bulk ECa gradient as an exhaustive variable, known at every node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
Optimization of integrated impeller mixer via radiotracer experiments.
Othman, N; Kamarudin, S K; Takriff, M S; Rosli, M I; Engku Chik, E M F; Adnan, M A K
2014-01-01
Radiotracer experiments were carried out to determine the mean residence time (MRT) as well as the percentage of dead zone, Vdead (%), in an integrated mixer consisting of Rushton and pitched blade turbines (PBT). Conventionally, optimization was performed by varying one factor at a time while holding the others constant (OFAT), which leads to an enormous number of experiments. Thus, in this study, a 4-factor 3-level Taguchi L9 orthogonal array was introduced to obtain an accurate optimization of mixing efficiency with a minimal number of experiments. This paper describes the optimal conditions of four process parameters, namely impeller speed, impeller clearance, type of impeller, and sampling time, in obtaining MRT and Vdead (%) using radiotracer experiments. The optimum conditions were 100 rpm impeller speed, 50 mm impeller clearance, the Type A mixer, and 900 s sampling time.
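A Taguchi main-effects analysis on an L9(3^4) array is simple to reproduce. The responses below are invented stand-ins (smaller is better, as for percentage dead volume), but the array and the pick-the-best-level-per-factor logic are the standard procedure:

```python
import numpy as np

# Standard Taguchi L9 (3^4) orthogonal array, levels coded 0-2;
# every level appears 3 times per column, and every level pair once
# for every pair of columns
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical measured responses for the 9 runs (smaller = better)
response = np.array([12.0, 10.5, 11.2, 9.8, 8.9, 10.1, 11.5, 9.2, 10.8])

# Main-effect analysis: mean response at each level of each factor,
# then pick the level with the smallest mean
best_levels = []
for f in range(4):
    means = [response[L9[:, f] == lev].mean() for lev in range(3)]
    best_levels.append(int(np.argmin(means)))
print("best level per factor:", best_levels)
```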
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
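Envelope-matched sampling as described, with sampling density proportional to the decaying signal envelope, reduces to weighted sampling without replacement on the Nyquist grid. Grid size, sample count, and decay constant below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Envelope-matched NUS: keep m of N Nyquist grid points with probability
# proportional to the signal envelope exp(-t/T2)
N, m, T2 = 256, 64, 50.0          # grid points, samples kept, decay (dwell units)
t = np.arange(N)
w = np.exp(-t / T2)               # matched weighting: the decay envelope
sched = np.sort(rng.choice(N, size=m, replace=False, p=w / w.sum()))
print("first schedule points:", sched[:10])
```

The resulting schedule samples densely at short evolution times, where the signal is strong, while still reaching long evolution times for resolution.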
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
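The infill step EGO relies on is usually scored by expected improvement. A sketch using made-up kriging predictions, with a crude neighborhood-suppression rule to return several infill points per batch (production batch-EGO variants use constant-liar or q-EI strategies instead):

```python
import numpy as np
from scipy.stats import norm

# Kriging predictions over a 1-D design grid (made-up stand-ins for a fitted
# model); f_best is the best (lowest) observed response so far
x = np.linspace(0.0, 1.0, 201)
mu = np.sin(6.0 * x)                           # predicted mean
sigma = 0.30 * np.sqrt(x * (1.0 - x)) + 0.01   # predicted std
f_best = -0.5

# Expected improvement for minimization
imp = f_best - mu
z = imp / sigma
EI = imp * norm.cdf(z) + sigma * norm.pdf(z)

# Batch infill: repeatedly take the EI maximizer, then zero out its
# neighborhood so the next pick lands elsewhere (crude spacing rule)
picks, ei = [], EI.copy()
for _ in range(3):
    i = int(ei.argmax())
    picks.append(x[i])
    ei[np.abs(x - x[i]) < 0.1] = 0.0
print("infill points:", np.round(picks, 3))
```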
Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina
2017-06-13
Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter(s) and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
Wang, Lei; Zhao, Pengyue; Zhang, Fengzu; Bai, Aijuan; Pan, Canping
2013-01-01
Ambient ionization direct analysis in real time (DART) coupled to single-quadrupole MS (DART-MS) was evaluated for rapid detection of caffeine in commercial samples without chromatographic separation or sample preparation. Four commercial samples were examined: tea, instant coffee, green tea beverage, and soft drink. The response-related parameters were optimized for the DART temperature and MS fragmentor. Under optimal conditions, the molecular ion (M+H)+ was the major ion for identification of caffeine. The results showed that DART-MS is a promising tool for the quick analysis of important marker molecules in commercial samples. Furthermore, this system has demonstrated significant potential for high sample throughput and real-time analysis.
Use of microwaves to improve nutritional value of soybeans for future space inhabitants
NASA Technical Reports Server (NTRS)
Singh, G.
1983-01-01
Whole soybeans from four different varieties at different moisture contents were microwaved for varying times to determine the conditions for maximum destruction of trypsin inhibitor and lipoxygenase activities, and optimal growth of chicks. Microwaving 150 g samples of soybeans (at 14 to 28% moisture) for 1.5 min was found optimal for reduction of trypsin inhibitor and lipoxygenase activities. Microwaving 1 kg samples of soybeans for 9 minutes destroyed 82% of the trypsin inhibitor activity and gave optimal chick growth. It should be pointed out that the microwaving time would vary according to the weight of the sample and the power of the microwave oven. The microwave oven used in the above experiments was rated at 650 W and 2450 MHz.
Optimal updating magnitude in adaptive flat-distribution sampling
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery
2017-11-01
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
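The single-bin updating scheme with the inverse-time schedule can be demonstrated on a toy discrete potential. The potential, run length, and the floor on the updating magnitude are arbitrary choices for a short run, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Wang-Landau-style flat-distribution sampling on a discrete double-well
# potential, with a floored inverse-time schedule for the update magnitude
x = np.arange(-20, 21)
U = (x**2 - 150.0)**2 / 10000.0           # arbitrary double-well potential
bias = np.zeros_like(U)                   # adaptive bias potential
hist = np.zeros(len(U), dtype=int)
idx = 20                                  # start at x = 0 (the barrier top)
for step in range(1, 200001):
    # nearest-neighbor move, Metropolis acceptance on U + bias
    j = min(max(idx + rng.choice((-1, 1)), 0), len(x) - 1)
    dE = (U[j] + bias[j]) - (U[idx] + bias[idx])
    if dE <= 0.0 or rng.random() < np.exp(-dE):
        idx = j
    lnf = max(1.0 / step, 1e-3)           # inverse-time updating magnitude
    bias[idx] += lnf                      # single-bin histogram update
    hist[idx] += 1
# at convergence U + bias is flat, so -bias recovers U up to a constant
print("bias range:", bias.max() - bias.min())
```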
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
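The Cramér-Rao reasoning can be reproduced for a stripped-down, single-parameter version of this problem. The model below assumes a known unit amplitude, no relaxation, and an arbitrary noise level; none of these values come from the paper:

```python
import numpy as np

def crlb_J(times, J=1.0, sigma=0.1):
    """Cramér-Rao lower bound on var(J_hat) for the one-parameter
    model s(t) = sin(pi*J*t) + Gaussian noise of standard deviation sigma."""
    t = np.asarray(times, dtype=float)
    dsdJ = np.pi * t * np.cos(np.pi * J * t)   # sensitivity ds/dJ
    fisher = np.sum(dsdJ ** 2) / sigma ** 2    # scalar Fisher information
    return 1.0 / fisher

# "linear sampling": 16 equally spaced points with uniform averaging
uniform = np.linspace(0.05, 2.0, 16)
# sparse alternative: all 16 acquisitions concentrated at the first node t = 1/J
node = np.full(16, 1.0)
bound_uniform = crlb_J(uniform)
bound_node = crlb_J(node)
```

At t = 1/J the signal itself passes through zero, but its sensitivity to J, |πt cos(πJt)|, is maximal over this window, so concentrating the averaging at the node yields the smaller bound.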
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
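For the logistic example, a D-optimal design over sampling times can be found by brute force on a small grid. The parameter values, grid, and two-point design below are illustrative assumptions, and the sensitivities are taken by finite differences rather than analytically:

```python
import itertools

import numpy as np

def logistic(t, K, r, x0=1.0):
    """Verhulst-Pearl logistic growth curve x(t)."""
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def fim_det(times, K=17.5, r=0.7, eps=1e-6):
    """Determinant of the Fisher information matrix for (K, r),
    with sensitivities from central finite differences and unit
    observation noise assumed."""
    t = np.asarray(times, dtype=float)
    s_K = (logistic(t, K + eps, r) - logistic(t, K - eps, r)) / (2 * eps)
    s_r = (logistic(t, K, r + eps) - logistic(t, K, r - eps)) / (2 * eps)
    S = np.column_stack([s_K, s_r])      # sensitivity matrix
    return float(np.linalg.det(S.T @ S))

# D-optimal choice of two sampling times from a coarse grid, exhaustive search
grid = np.linspace(0.5, 12.0, 24)
best = max(itertools.combinations(grid, 2), key=fim_det)
```

With two parameters and two observations, maximizing det(FIM) places the points where the sensitivities to r (mid-growth) and to K (near saturation) are large and linearly independent.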
Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara
2017-03-01
Furan, a possibly carcinogenic compound to humans, and furfural, a naturally occurring volatile contributing to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by closely related reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies while preserving the sensory properties of food. We have developed a reliable and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS trap extraction was optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria and multiple response optimization was performed based on a desirability function, yielding the following conditions: thermostatting temperature, 65 °C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample amount ratio (dry basis), 16; and total amount (water + sample, dry basis), 10 g. The performance of the optimized method was also assessed: repeatability (RSD ≤ 3.3% for furan and ≤ 2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng g⁻¹ for furan and 10.2 ng g⁻¹ for furfural, sample dry basis), and LOQ (0.99 ng g⁻¹ for furan and 41.1 ng g⁻¹ for furfural, sample dry basis). A matrix effect was observed mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and levels of analytes. Copyright © 2016. Published by Elsevier B.V.
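The desirability-function step can be illustrated with a minimal sketch. The response values, limits, and candidate settings below are hypothetical, not the paper's data; only the mechanics (per-response desirabilities combined by a geometric mean, in the standard Derringer-Suich style) are what the abstract describes:

```python
import math

def desirability(y, low, high, maximize=True):
    """Linear Derringer-Suich desirability in [0, 1] for one response."""
    d = min(1.0, max(0.0, (y - low) / (high - low)))
    return d if maximize else 1.0 - d

def overall(ds):
    """Geometric mean of individual desirabilities: a single
    unacceptable response (d = 0) zeroes the combined score."""
    return math.prod(ds) ** (1.0 / len(ds))

# hypothetical candidate settings: (label, furan recovery, furfural recovery,
# extraction time in minutes) -- not the paper's data
candidates = [
    ("A", 0.72, 0.95, 25.0),
    ("B", 0.88, 0.90, 35.0),
    ("C", 0.93, 0.70, 15.0),
]

def score(c):
    _, furan, furfural, minutes = c
    return overall([
        desirability(furan, 0.5, 1.0),                       # maximize recovery
        desirability(furfural, 0.5, 1.0),
        desirability(minutes, 10.0, 40.0, maximize=False),   # minimize time
    ])

best = max(candidates, key=score)
```

In practice each response would come from the fitted O-DOE model rather than a short list, but the ranking mechanism is the same.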
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the use of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random-walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random-walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
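The simulator's logic is easy to reproduce: exhaustively fit an integer parameter on one random walk, then test the winner on a second. The momentum rule and parameter range here are illustrative stand-ins for the tool's strategy, not its actual implementation:

```python
import random

def random_walk(n, seed):
    """Zero-drift Gaussian random walk of length n."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def strategy_pnl(prices, lookback):
    """Toy momentum rule: hold +1 if the price rose over the past
    `lookback` steps, else -1; returns cumulative profit."""
    pnl = 0.0
    for i in range(lookback, len(prices) - 1):
        pos = 1.0 if prices[i] > prices[i - lookback] else -1.0
        pnl += pos * (prices[i + 1] - prices[i])
    return pnl

in_sample = random_walk(2000, seed=1)
out_sample = random_walk(2000, seed=2)

# exhaustive search over the integer parameter always "finds" a winner in sample
best_lb = max(range(1, 51), key=lambda lb: strategy_pnl(in_sample, lb))
is_pnl = strategy_pnl(in_sample, best_lb)    # typically looks profitable...
oos_pnl = strategy_pnl(out_sample, best_lb)  # ...with no genuine edge here
```

Because the walk is unpredictable by construction, the in-sample winner carries no real edge; repeating the experiment with fresh seeds shows out-of-sample profits scattered around zero.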
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction.
Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Fryš, Ondřej; Česla, Petr; Bajerová, Petra; Adam, Martin; Ventura, Karel
2012-09-15
A method was developed for focused ultrasonic extraction of nitroglycerin, triphenylamine and acetyl tributyl citrate present in double-base propellant samples, followed by gas chromatography/mass spectrometry analysis. A face-centred central composite design of experiments and response surface modelling were used to optimize the extraction time, amplitude and sample amount. Dichloromethane was used as the extraction solvent. The optimal extraction conditions with respect to the maximum yield of the least abundant compound, triphenylamine, were found to be a 20 min extraction time, 35% amplitude of ultrasonic waves and 2.5 g of propellant sample. The results obtained under optimal conditions were compared with those achieved with a validated Soxhlet extraction method, which is typically used for isolation and pre-concentration of compounds from explosive samples. The extraction yields for acetyl tributyl citrate were comparable for both methods; however, the yields of ultrasonic extraction of nitroglycerin and triphenylamine were lower than with Soxhlet extraction. The possible sources of the different extraction yields are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with topographic factors as independent variables was built, and a multilayer perceptron model based on a neural network approach was implemented; the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme: the optimal configuration captured soil-landscape relationships accurately, with better accuracy than the original samples. This study designed a sampling configuration for studying soil attribute distributions by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and high efficiency.
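A minimal sketch of simulated annealing for sample-layout optimization follows. The coverage criterion (mean distance from every candidate location to its nearest chosen site), the cooling schedule, and the toy data are all assumptions for illustration, not the study's actual objective:

```python
import math
import random

def anneal_layout(candidates, k, iters=3000, seed=0):
    """Choose k sampling sites from candidate (x, y) locations by
    simulated annealing, minimizing the mean distance from each
    candidate to its nearest chosen site (a spatial-coverage proxy)."""
    rng = random.Random(seed)

    def cost(sel):
        return sum(min(math.dist(c, s) for s in sel)
                   for c in candidates) / len(candidates)

    current = rng.sample(candidates, k)
    cur_c = cost(current)
    best, best_c = list(current), cur_c
    for i in range(iters):
        temp = max(1e-3, 0.2 * (1 - i / iters))          # linear cooling
        prop = list(current)
        prop[rng.randrange(k)] = rng.choice(candidates)  # move one site
        prop_c = cost(prop)
        # accept improvements always, uphill moves with Boltzmann probability
        if prop_c < cur_c or rng.random() < math.exp((cur_c - prop_c) / temp):
            current, cur_c = prop, prop_c
            if cur_c < best_c:
                best, best_c = list(current), cur_c
    return best, best_c

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(100)]
sites, coverage = anneal_layout(pts, 8)
```

In the study the objective would instead encode prediction accuracy for soil organic matter given the road-network and terrain constraints, but the accept/reject machinery is the same.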
Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica
2018-01-01
Non-targeted metabolomics research on the human volatile urinary metabolome can be used to identify potential biomarkers associated with changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace solid-phase microextraction (HS-SPME) procedure were evaluated and optimized. The influence of incubation and extraction temperatures and times, fibre coating material and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values of 60 °C for temperature, 50 min for extraction time, and 35 min for incubation time. The proposed conditions were applied to investigate the stability of urine samples under different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4 °C, -20 °C, and -80 °C for up to six months showed a time-dependent decrease, although storage at -80 °C resulted in only a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two freeze/thaw cycles should be avoided whenever possible for samples stored for six months at -80 °C. Copyright © 2017 Elsevier B.V. All rights reserved.
High precision measurement of silicon in naphthas by ICP-OES using isooctane as diluent.
Gazulla, M F; Rodrigo, M; Orduña, M; Ventura, M J; Andreu, C
2017-03-01
An analytical protocol for the accurate and precise determination of Si in naphthas by ICP-OES is presented, optimized from sample preparation to measurement conditions, making it possible for the first time to analyze silicon contents below 100 µg kg⁻¹ in a relatively short time so that the method can be used for routine control. In the petrochemical industry, silicon can be present as a contaminant in different petroleum products such as gasoline, ethanol, or naphthas, forming silicon compounds during the treatment of these products that are irreversibly adsorbed onto catalyst surfaces, decreasing their lifetime. The complex nature of the organic naphtha sample, together with the low detection limits needed, makes the analysis of silicon quite difficult. The aim of this work is to optimize the measurement of silicon in naphthas by ICP-OES, introducing the use of isooctane as diluent as an improvement. The set-up was carried out by optimizing the measurement conditions (power, nebulizer flow, pump rate, read time, and viewing mode) and the sample preparation (type of diluent, cleaning process, blanks, and various dilution ratios depending on the sample characteristics). Copyright © 2016 Elsevier B.V. All rights reserved.
Selection of sampling rate for digital control of aircrafts
NASA Technical Reports Server (NTRS)
Katz, P.; Powell, J. D.
1974-01-01
The considerations in selecting the sample rates for digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to variations of parameters. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
Carro, N; García, I; Ignacio, M-C; Llompart, M; Yebra, M-C; Mouteira, A
2002-10-01
A sample-preparation procedure (extraction and saponification) using microwave energy is proposed for the determination of organochlorine pesticides in oyster samples. A Plackett-Burman factorial design was used to optimize the microwave-assisted extraction and mild saponification on a freeze-dried sample spiked with a mixture of aldrin, endrin, dieldrin, heptachlor, heptachlor epoxide, isodrin, trans-nonachlor, p,p'-DDE, and p,p'-DDD. Six variables were considered in the optimization process: solvent volume, extraction time, extraction temperature, amount of acetone (%) in the extraction solvent, amount of sample, and volume of NaOH solution. The results show that the amount of sample is statistically significant for dieldrin, aldrin, p,p'-DDE, heptachlor, and trans-nonachlor, and solvent volume for dieldrin, aldrin, and p,p'-DDE. The volume of NaOH solution is statistically significant only for aldrin and p,p'-DDE. Extraction temperature and extraction time seem to be the main factors determining the efficiency of the extraction process for isodrin and p,p'-DDE, respectively. The optimized procedure was compared with conventional Soxhlet extraction.
Beom Kim, Seon; Kim, CheongTaek; Liu, Qing; Hee Jo, Yang; Joo Choi, Hak; Hwang, Bang Yeon; Kyum Kim, Sang; Kyeong Lee, Mi
2016-08-01
Coumarin derivatives have been reported to inhibit melanin biosynthesis. The melanogenesis inhibitory activity of osthol, a major coumarin of the fruits of Cnidium monnieri Cusson (Umbelliferae), was investigated, and extraction conditions maximizing the yield of osthol from C. monnieri fruits were optimized. B16F10 melanoma cells were treated with osthol at concentrations of 1, 3, and 10 μM for 72 h, and the expression of melanogenesis genes, such as tyrosinase, TRP-1, and TRP-2, was assessed. For optimization, extraction factors such as extraction solvent, extraction time, and sample/solvent ratio were tested and optimized for maximum yield of osthol using response surface methodology with a Box-Behnken design (BBD). Osthol reduced melanin content in B16F10 melanoma cells with an IC50 value of 4.9 μM. The melanogenesis inhibitory activity of osthol was achieved not by direct inhibition of tyrosinase activity but by inhibiting the expression of melanogenic enzymes such as tyrosinase, TRP-1, and TRP-2. The optimal conditions were a sample/solvent ratio of 1500 mg/10 ml, an extraction time of 30.3 min, and a methanol concentration of 97.7%. The osthol yield under optimal conditions was found to be 15.0 mg/g dried sample, which matched well with the predicted value of 14.9 mg/g dried sample. These results provide useful information on optimized extraction conditions for the development of osthol as a cosmetic therapeutic to reduce skin hyperpigmentation.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
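A greedy variant of the sensitivity-based D-optimal time-point selection can be sketched as follows. The bi-exponential decay model, its parameter values, and the greedy (rather than exhaustive) search are illustrative assumptions standing in for the paper's quenched/unquenched donor-fraction model:

```python
import numpy as np

def sensitivities(t, a1=0.4, tau1=0.5, tau2=2.5):
    """Sensitivity matrix of the bi-exponential decay
    f(t) = a1*exp(-t/tau1) + (1 - a1)*exp(-t/tau2)
    with respect to the parameters (a1, tau1, tau2)."""
    e1, e2 = np.exp(-t / tau1), np.exp(-t / tau2)
    return np.column_stack([
        e1 - e2,                        # df/da1
        a1 * t / tau1 ** 2 * e1,        # df/dtau1
        (1 - a1) * t / tau2 ** 2 * e2,  # df/dtau2
    ])

def greedy_d_optimal(candidates, k):
    """Greedily add the time point that most increases det(S^T S)."""
    chosen = []
    for _ in range(k):
        def gain(tp):
            S = sensitivities(np.array(chosen + [tp]))
            # small ridge keeps the determinant defined before rank 3
            return np.linalg.det(S.T @ S + 1e-12 * np.eye(3))
        chosen.append(max((t for t in candidates if t not in chosen), key=gain))
    return sorted(chosen)

full = np.linspace(0.05, 10.0, 90)     # stand-in for the complete 90-point set
reduced = greedy_d_optimal(list(full), 6)
S_red = sensitivities(np.array(reduced))
d_red = float(np.linalg.det(S_red.T @ S_red))   # information of the reduced set
```

The greedy rule is a common cheap surrogate for exact D-optimal subset selection; the selected points cluster where the parameter sensitivities are large and mutually independent, mirroring the paper's reduction from 90 to ~10 points.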
Wlodarczyk, Dorota
2017-03-01
This study explored intervening effects in the links of optimism and hope with subjective health in the short term after myocardial infarction. A two-wave study design was used. The sample consisted of 222 myocardial infarction survivors. Under a cross-sectional design, optimism and hope predicted subjective health at Time 1 and Time 2. After controlling for baseline subjective health, they were no longer significant predictors of subjective health at Time 2. Parallel indirect effects of seeking social support and problem solving were significant for both optimism and hope. After controlling for the shared variance between optimism and hope, these effects remained significant only for optimism.
Petinataud, Dimitri; Berger, Sibel; Ferdynus, Cyril; Debourgogne, Anne; Contet-Audonneau, Nelly; Machouart, Marie
2016-05-01
Onychomycosis is a common nail disorder mainly due to dermatophytes, for which conventional diagnosis requires direct microscopic observation and culture of a biological sample. Nevertheless, antifungal treatments are commonly prescribed without a mycological examination having been performed, partly because of the slow growth of dermatophytes. Therefore, molecular biology has been applied to this pathology to support a quick and accurate distinction between onychomycosis and other nail damage. Commercial kits are now available from several companies for improving traditional microbiological diagnosis. In this paper, we present the first evaluation of the real-time PCR kit marketed by Bio Evolution for the diagnosis of dermatophytosis, and compare the efficacy of the kit on optimal and non-optimal samples. This study was conducted on 180 nail samples, processed by conventional methods and retrospectively analysed using this kit. According to our results, this molecular kit showed high specificity and sensitivity in detecting dermatophytes, regardless of sample quality. On the other hand, and as expected, optimal samples allowed the identification of a higher number of dermatophytes by conventional mycological diagnosis, compared to non-optimal samples. Finally, we suggest several strategies for the practical use of such a kit in a medical laboratory for quick pathogen detection. © 2016 Blackwell Verlag GmbH.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
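The prediction-correction idea can be demonstrated on a scalar time-varying quadratic, where the prediction step has a closed form. The objective, step sizes, and error measurement below are illustrative choices, not taken from the paper:

```python
import math

def track(h=0.05, gamma=0.8, steps=400, predict=True):
    """Track x*(t) = argmin_x 0.5*(x - sin t)^2 = sin t at sampling rate 1/h.

    The prediction step integrates the optimality-condition dynamics
    xdot = -f_xx^{-1} f_xt = cos t; the correction step is a single
    gradient step on the newly sampled objective.
    """
    x, worst = 0.0, 0.0
    for k in range(steps):
        t = k * h
        if predict:
            x += h * math.cos(t)                  # prediction
        t_next = (k + 1) * h
        x -= gamma * (x - math.sin(t_next))       # gradient correction
        if k > 50:                                # skip the initial transient
            worst = max(worst, abs(x - math.sin(t_next)))
    return worst

err_pc = track(predict=True)    # prediction-correction: error ~ O(h^2)
err_c = track(predict=False)    # correction only: error ~ O(h)
```

Halving h roughly quarters err_pc but only halves err_c, matching the $O(h^2)$ versus $O(h)$ bounds in the abstract.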
NASA Astrophysics Data System (ADS)
Wahid Nuryadin, Bebeh; Suryani, Yayu; Yuliani, Yuli; Setiadji, Soni; Yeti Nuryantini, Ade; Iskandar, Ferry
2018-04-01
The effect of sintering time on the transient nature and optimization of the red photoluminescence of manganese-doped boron carbon oxynitride (BCNO:Mn) phosphor was investigated. The BCNO:Mn samples were synthesized using a facile urea-assisted combustion route involving boric acid, citric acid, a manganese salt and urea. The optimized intensity of the dual-peak emission at 420 nm (blue emission) and 630 nm (red emission) in the photoluminescence (PL) spectrum could be achieved by controlling the sintering time of the BCNO:Mn. The highly crystalline BCNO:Mn samples were found to have cubic and hexagonal structures. Based on the PL analysis, it is suggested that the symmetric BCNO:Mn band at 630 nm can be attributed to the 4T1(4G)-6A1(6S) transition of Mn2+ ions incorporated into the hexagonal structure. Microstructure analysis showed an irregular and agglomerated shape of the BCNO:Mn sample.
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived from an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating, while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
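The benefit of interleaving can be illustrated with a toy simulation (all numbers are hypothetical: 1024 radial views, 16 interleaves, a sinusoidal respiratory waveform and a roughly 50% acceptance window, not the paper's model or scanner parameters). Sequential ordering leaves large angular gaps after gating, while interleaved ordering spreads the accepted views:

```python
import numpy as np

def gating_gaps(n_views=1024, n_leaves=16, period=100.0):
    """Largest gap in accepted view indices after retrospective gating,
    for sequential vs. interleaved acquisition orderings (toy model)."""
    t = np.arange(n_views)                        # acquisition time, one view per tick
    accept = np.sin(2 * np.pi * t / period) < 0   # gating window (~50% accepted)

    def max_gap(angles):
        a = np.sort(angles)
        return np.diff(np.r_[a, a[0] + n_views]).max()   # circular gap

    seq_angles = t[accept]                        # sequential: angle index == time index
    leaf_len = n_views // n_leaves
    # interleaf j acquires views j, j + n_leaves, j + 2*n_leaves, ... consecutively
    angles = (t // leaf_len) + (t % leaf_len) * n_leaves
    int_angles = angles[accept]
    return max_gap(seq_angles), max_gap(int_angles)
```

In this sketch the gated sequential ordering leaves gaps of dozens of consecutive missing view indices, whereas the interleaved ordering keeps the largest gap much smaller, which is the uniformity property the paper optimizes via the segmentation factor.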
Optimization of sintering conditions for cerium-doped yttrium aluminum garnet
NASA Astrophysics Data System (ADS)
Cranston, Robert Wesley McEachern
YAG:Ce phosphors have become widely used as blue/yellow light converters in camera projectors, white light emitting diodes (WLEDs) and general lighting applications. Many studies have been published on the production, characterization, and analysis of this optical ceramic, but few have been done on determining optimal synthesis conditions. In this work, YAG:Ce phosphors were synthesized through solid-state mixing and sintering. The synthesized powders and the highest quality commercially available powders were pressed and sintered to high densities and their photoluminescence (PL) intensity measured. The optimization process involved the sintering temperature, sintering time, annealing temperature and the level of Ce concentration. In addition to the PL intensity, samples were also characterized using particle size analysis, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The PL data were compared with data from a YAG:Ce phosphor sample provided by Christie Digital, and the peak intensities of the samples were converted to a relative percentage of this industry product. The highest intensity of the commercial powder was measured for a Ce concentration of 0.3 mole% with a sintering temperature of 1540°C and a sintering dwell time of 7 hours. The optimal processing parameters for the in-house synthesized powder were slightly different from those of the commercial powders: the optimal Ce concentration was 0.4 mole%, the sintering temperature was 1560°C and the sintering dwell time was 10 hours. These optimal conditions produced relative intensities of 94.20% and 95.28% for the in-house and commercial powders, respectively. Polishing of these samples resulted in an increase of 5% in the PL intensity.
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
A General Investigation of Optimized Atmospheric Sample Duration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Miley, Harry S.
2012-11-28
The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a non-decaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance, and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Almirall, Jose R.; Wang, Jing
1999-02-01
In this paper, we present data comparing a variety of different conditions for extracting ignitable liquid residues from simulated fire debris samples, in order to optimize the conditions for solid-phase microextraction (SPME). A simulated accelerant mixture containing 30 components, including those from light, medium and heavy petroleum distillates, was used to study the important variables controlling SPME recoveries. SPME is an inexpensive, rapid and sensitive method for the analysis of volatile residues, either from the headspace over solid debris samples in a container or directly from aqueous samples, followed by GC. The relative effects of controllable variables, including fiber chemistry, adsorption and desorption temperature, extraction time, and desorption time, have been optimized. The addition of water and ethanol to simulated debris samples in a can was shown to increase the sensitivity of headspace SPME extraction. The relative enhancement of sensitivity has been compared as a function of hydrocarbon chain length, sample temperature, time, and added ethanol concentration. The technique has also been optimized for the extraction of accelerants directly from water added to the fire debris samples. The optimum adsorption time for the low-molecular-weight components was found to be approximately 25 minutes. The high-molecular-weight components were found at higher concentrations the longer the fiber was exposed to the headspace (up to 1 h), and at higher concentrations in the headspace when water and/or ethanol was added to the debris.
Optimization and Control of Cyber-Physical Vehicle Systems
Bradley, Justin M.; Atkins, Ella M.
2015-01-01
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of a quality characteristic that follows a normal distribution with unknown mean and variance. The optimal plan parameters are obtained from a nonlinear optimization model that simultaneously satisfies the given producer's and consumer's risks while minimizing the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally, an example is given to illustrate the proposed plan.
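As a hedged sketch of how such a plan operates (the decision constants ka, kr and the history depth m below are invented, not the paper's optimized parameters), the MDS rule can be simulated to trace an operating characteristic curve: accept outright when the sample CV is small, reject when it is large, and otherwise fall back on the acceptance history of the preceding lots:

```python
import numpy as np

rng = np.random.default_rng(0)

def oc_mds(cv_true, n=20, ka=0.09, kr=0.12, m=3, n_lots=20000):
    """Monte Carlo operating characteristic of a hypothetical MDS plan:
    accept if sample CV <= ka; reject if CV > kr; otherwise accept only
    when the previous m lots were all unconditionally accepted."""
    history = [True] * m        # unconditional-acceptance flags of last m lots
    accepted = 0
    for _ in range(n_lots):
        x = rng.normal(1.0, cv_true, n)       # quality characteristic sample
        cv = x.std(ddof=1) / x.mean()
        if cv <= ka:
            ok, uncond = True, True
        elif cv > kr:
            ok, uncond = False, False
        else:
            ok, uncond = all(history), False  # conditional acceptance
        accepted += ok
        history = history[1:] + [uncond]
    return accepted / n_lots
```

Running this at a good process CV and a poor one traces the two ends of the OC curve, which is exactly the producer's-risk/consumer's-risk trade-off the nonlinear optimization balances.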
Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang
2010-01-01
A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
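The two similarity ingredients DISCO combines can be sketched in a few lines; the data layout (z-scored 2D retention times plus a fragment-intensity vector on a common m/z grid) and the thresholds below are illustrative assumptions, not ChromaTOF output:

```python
import numpy as np

def disco_distance(peak_a, peak_b):
    """Distance and spectral similarity between two peaks, each given as a
    (rt1_z, rt2_z, spectrum) tuple: z-scored first- and second-dimension
    retention times plus a fragment-ion intensity vector."""
    rt_a, rt_b = np.array(peak_a[:2]), np.array(peak_b[:2])
    d = np.linalg.norm(rt_a - rt_b)              # Euclidean retention distance
    sa = np.asarray(peak_a[2], float)
    sb = np.asarray(peak_b[2], float)
    r = np.corrcoef(sa, sb)[0, 1]                # Pearson spectral similarity
    return d, r

def is_landmark_pair(pa, pb, d_max=1.0, r_min=0.95):
    """Landmark rule (sketch): same metabolite when the retention distance
    is small and the fragment spectra are highly correlated."""
    d, r = disco_distance(pa, pb)
    return d <= d_max and r >= r_min
```

A pair with nearly identical spectra and close retention times qualifies as a landmark, while a peak far away in the z-scored retention plane does not, mirroring the paper's combined optimization of Euclidean distance and spectrum similarity.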
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. 
The final value of the cost function is reduced by up to 20%, with such reductions leading to small improvements in the dosimetric parameters characterizing the treatments: slightly more homogeneous target doses and better sparing of the organs at risk.
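The sequence families compared above can be generated with a few lines; the midpoint-insertion rule for doubling is an assumption about how "doubling" is realized (chosen so that existing control points are retained), and the start/end counts are illustrative:

```python
def doubling_sequence(n_start, n_final):
    """Multiresolution levels that roughly double the number of control
    points per level (RapidArc-like), doubling by inserting midpoints
    between existing control points: 11 -> 21 -> 41 -> ... -> n_final."""
    seq, n = [n_start], n_start
    while n < n_final:
        n = min(2 * n - 1, n_final)
        seq.append(n)
    return seq

def equilength_sequence(n_start, n_final, n_levels):
    """E-type sequence: n_levels multiresolution levels, each adding
    (approximately) the same number of control points."""
    step = (n_final - n_start) / (n_levels - 1)
    return [round(n_start + i * step) for i in range(n_levels)]
```

An Otto-like progressive sequence is simply the equi-length case with one new control point per level, so the E20/E30 schemes studied here sit between the doubling and one-at-a-time extremes.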
Gao, Chen-chen; Li, Feng-min; Lu, Lun; Sun, Yue
2015-10-01
For the determination of trace amounts of phthalic acid esters (PAEs) in a complex seawater matrix, a stir bar sorptive extraction gas chromatography mass spectrometry (SBSE-GC-MS) method was established. Dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), di(2-ethylhexyl) phthalate (DEHP) and dioctyl phthalate (DOP) were selected as study objects. The effects of extraction time, amount of methanol, amount of sodium chloride, desorption time and desorption solvent were optimized, and the SBSE-GC-MS method was validated through recoveries and relative standard deviations. The optimal extraction time was 2 h, the optimal methanol content 10%, the optimal sodium chloride content 5%, and the optimal desorption time 50 min; the optimal desorption solvent was a 4:1 (v/v) mixture of methanol and acetonitrile. The relationship between peak area and PAE concentration was linear, with correlation coefficients greater than 0.997. The detection limits were between 0.25 and 174.42 ng·L(-1). The recoveries at different concentrations were between 56.97% and 124.22%, and the relative standard deviations were between 0.41% and 14.39%. Using this method, several estuarine water samples from Jiaozhou Bay were analysed. DEP was detected in all samples, and the concentrations of BBP, DEHP and DOP were much higher than those of the rest.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
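The baseline prescription described above, biasing samples toward the high signal energy portions of the binarized geometry's k-space representation, can be sketched as follows (the exponent alpha and the small probability floor are illustrative knobs, not the paper's near-optimal design rule):

```python
import numpy as np

def geometry_weighted_mask(binary_geom, frac=0.25, alpha=1.0, rng=None):
    """Weighted random Cartesian sampling mask biased toward the k-space
    energy of a binarized image geometry (the baseline scheme; alpha tunes
    the bias, and a small floor keeps every location admissible)."""
    rng = rng or np.random.default_rng(0)
    k = np.fft.fftshift(np.fft.fft2(binary_geom))
    w = np.abs(k).ravel() ** alpha
    w = w + 1e-6 * w.max()              # floor: no zero-probability locations
    p = w / w.sum()
    n_keep = int(frac * p.size)
    idx = rng.choice(p.size, size=n_keep, replace=False, p=p)
    mask = np.zeros(p.size, bool)
    mask[idx] = True
    return mask.reshape(binary_geom.shape)

# a 1D-channel-like geometry: a vertical stripe in a 32 x 32 image
geom = np.zeros((32, 32))
geom[:, 14:18] = 1.0
mask = geometry_weighted_mask(geom)
```

For this stripe nearly all spectral energy lies in the single central ky row, so energy-weighted sampling piles samples onto that row; this is precisely the 1D-channel limiting case in which the text notes the baseline prescription can be far from optimal.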
Grodowska, Katarzyna; Parczewski, Andrzej
2013-01-01
The purpose of the present work was to find optimum conditions for the headspace gas chromatography (HS-GC) determination of residual solvents that commonly appear in pharmaceutical products. Two groups of solvents were examined: group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were considered in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion, considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in several ways simultaneously. The optimum HS conditions differed between the two groups of solvents, showing that the influence of the experimental factors depends on analyte properties. The optimization resulted in a significant signal increase (seven- to fifteen-fold).
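The variable-size sequential simplex is essentially the Nelder-Mead procedure; a hedged sketch on a mock response surface follows, where the Gaussian peak-area model and its optimum at 80 °C / 15 min are invented for illustration, not the study's fitted surface:

```python
import numpy as np
from scipy.optimize import minimize

def neg_peak_area(x):
    """Mock response surface standing in for the relative GC peak area of
    one analyte as a function of equilibrium temperature (deg C) and
    equilibration time (min); negated because we minimize."""
    temp, time_min = x
    return -np.exp(-((temp - 80.0) / 20.0) ** 2
                   - ((time_min - 15.0) / 5.0) ** 2)

# Nelder-Mead = simplex search: no gradients, just reflect/expand/contract
res = minimize(neg_peak_area, x0=[70.0, 10.0], method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-8})
best_temp, best_time = res.x
```

The simplex crawls uphill on the response surface without any derivative information, which is why this strategy suits experimental optimization where each "function evaluation" is a chromatographic run.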
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
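The counting argument (design points must outnumber free design variables) can be illustrated with a toy least-squares "design" problem; the polynomial drag proxy and the wiggly performance target below are stand-ins for the Euler-flow model, chosen only to show the mechanism:

```python
import numpy as np

def fit_errors(n_points, n_vars=6):
    """Point-optimization toy: choose n_vars 'design variables' (polynomial
    coefficients) to minimize squared error against a performance target at
    n_points sampled Mach numbers; report the worst error at the samples
    and over the whole Mach range."""
    target = lambda m: np.cos(20 * m)            # wiggly performance target
    machs = np.linspace(0.6, 0.9, n_points)
    V = np.vander(machs - 0.75, n_vars)          # centred for conditioning
    w, *_ = np.linalg.lstsq(V, target(machs), rcond=None)
    dense = np.linspace(0.6, 0.9, 500)
    at_samples = np.max(np.abs(V @ w - target(machs)))
    off_design = np.max(np.abs(np.vander(dense - 0.75, n_vars) @ w
                               - target(dense)))
    return at_samples, off_design

s_eq, o_eq = fit_errors(6)    # points == variables: exact at the samples
s_gt, o_gt = fit_errors(13)   # points > variables: a genuine compromise
```

With as many sample points as variables the optimizer matches the target exactly at the sampled Mach numbers (pure point-optimization) regardless of what happens in between; with more points than variables it is forced into a compromise fit over the whole range.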
Son, Na Ry; Seo, Dong Joo; Lee, Min Hwa; Seo, Sheungwoo; Wang, Xiaoyu; Lee, Bog-Hieu; Lee, Jeong-Su; Joo, In-Sun; Hwang, In-Gyun; Choi, Changsun
2014-09-01
The aim of this study was to develop an optimal technique for detecting hepatitis E virus (HEV) in swine livers. Here, three elution buffers and two concentration methods were compared with respect to enhancing recovery of HEV from swine liver samples. Real-time reverse transcription-polymerase chain reaction (RT-PCR) and nested RT-PCR were performed to detect HEV RNA. When phosphate-buffered saline (PBS, pH 7.4) was used to concentrate HEV in swine liver samples using ultrafiltration, real-time RT-PCR detected HEV in 6 of the 26 samples. When threonine buffer was used to concentrate HEV using polyethylene glycol (PEG) precipitation and ultrafiltration, real-time RT-PCR detected HEV in 1 and 3 of the 26 samples, respectively. When glycine buffer was used to concentrate HEV using ultrafiltration and PEG precipitation, real-time RT-PCR detected HEV in 1 and 3 samples of the 26 samples, respectively. When nested RT-PCR was used to detect HEV, all samples tested negative regardless of the type of elution buffer or concentration method used. Therefore, the combination of real-time RT-PCR and ultrafiltration with PBS buffer was the most sensitive and reliable method for detecting HEV in swine livers. Copyright © 2014 Elsevier B.V. All rights reserved.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
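The GIS simulation can be miniaturized in a few lines (plot size, clump spread, core area and densities below are invented for illustration, not the study's parameters): scatter items randomly or in clumps, draw circular cores at random locations, and compare the spread of the resulting density estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def core_survey(density=2000, core_area_cm2=20.0, n_cores=30, plot_m=3.0,
                clumped=False, n_reps=200):
    """Simulated benthic core survey: estimate item density (items/m^2)
    from n_cores circular cores in a plot_m x plot_m plot, for randomly
    scattered or clumped items; returns the mean estimate and the relative
    spread (precision proxy) of the estimates over n_reps surveys."""
    r = np.sqrt(core_area_cm2 / 1e4 / np.pi)       # core radius in metres
    n_items = int(density * plot_m ** 2)
    ests = []
    for _ in range(n_reps):
        if clumped:
            centers = rng.uniform(0, plot_m, (20, 2))   # 20 clump centres
            pts = centers[rng.integers(0, 20, n_items)]
            pts = (pts + rng.normal(0, 0.2, (n_items, 2))) % plot_m
        else:
            pts = rng.uniform(0, plot_m, (n_items, 2))
        cores = rng.uniform(0, plot_m, (n_cores, 2))
        count = sum(int((np.linalg.norm(pts - c, axis=1) < r).sum())
                    for c in cores)
        ests.append(count / (n_cores * np.pi * r ** 2))
    ests = np.asarray(ests)
    return ests.mean(), ests.std() / density
```

The random scatter yields nearly unbiased estimates with modest spread, while clumping inflates the variability of the estimates, matching the paper's finding that precision suffers under clumped distributions.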
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new magnetic resonance imaging (MRI) method that provides non-Gaussian information not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation, a process that is usually time-consuming; fewer b-values are therefore preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas whose sampled b-values were distributed toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Following a ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800 and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
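With only two nonzero b-values the standard DKI signal model can be inverted in closed form, which is what makes a 3b schema sufficient for a single direction; a sketch with representative brain-tissue values (the model form is the standard DKI expansion, the numbers are illustrative):

```python
import numpy as np

def fit_dki_3b(signals, bvals=(0.0, 1000.0, 2500.0)):
    """Closed-form DKI fit from a 3b schema. The model
    ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K has two unknowns once S0 is
    taken from the b = 0 image, so two nonzero b-values determine D and K."""
    s0, s1, s2 = signals
    b1, b2 = bvals[1], bvals[2]
    y1, y2 = np.log(s1 / s0), np.log(s2 / s0)
    # solve [[-b1, b1^2], [-b2, b2^2]] @ [D, c] = [y1, y2] with c = D^2 K / 6
    A = np.array([[-b1, b1 ** 2], [-b2, b2 ** 2]], float)
    D, c = np.linalg.solve(A, [y1, y2])
    return D, 6.0 * c / D ** 2

# round-trip check with typical values (D in mm^2/s, K unitless)
D_true, K_true = 1.0e-3, 0.9
bs = np.array([0.0, 1000.0, 2500.0])
S = 100.0 * np.exp(-bs * D_true + bs ** 2 * D_true ** 2 * K_true / 6)
D_fit, K_fit = fit_dki_3b(S, tuple(bs))
```

The exact inversion also makes clear why the optimized b-values sit at the two ends of the usable range: spreading b1 and b2 apart improves the conditioning of the 2x2 system in the presence of noise.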
Enhanced Thermoelectric Properties of Double-Filled CoSb3 via High-Pressure Regulating.
Wang, Libin; Deng, Le; Qin, Jieming; Jia, Xiaopeng
2018-05-24
The beneficial effect of synthesis pressure on thermoelectric properties has been discussed for a long time. In this paper, it is theoretically and experimentally demonstrated that appropriate synthesis pressures can increase the figure of merit (ZT) by optimizing both thermal and electronic transport properties. Indium- and barium-double-filled CoSb3 samples were prepared using a high-pressure, high-temperature technique for half an hour. X-ray diffraction and structural analysis were used to reveal the relationship between microstructure and thermoelectric properties. In0.15Ba0.35Co4Sb12 samples were synthesized at different pressures; the sample synthesized at 3 GPa had the best electrical transport properties, and the sample synthesized at 2.5 GPa had the lowest thermal conductivity. The maximum ZT value, reached by the sample synthesized at 3.0 GPa, was 1.18.
NASA Astrophysics Data System (ADS)
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, which are low-cost, timely and of moderate/low spatial resolution, were first used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP), namely the fraction of winter wheat planting area in each pixel, as a regionalized indicator variable (RIV) for spatial sampling. The RIV values were then spatially analyzed to characterize the spatial structure (spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing this a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, an adaptive analysis and decision strategy allowed the optimal local spatial prediction and the gridded extrapolation results to implement an adaptive reporting pattern for spatial sampling in accordance with report-covering units, satisfying the actual needs of sampling surveys.
Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo
2011-06-17
This study proposes a new approach to optimizing the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature, using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a plant mixture as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on elution temperature to better understand the influence of the extraction parameters on extraction efficiency for compounds of different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in the same procedure is proposed. The optimum conditions were extraction for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method gave better results in all cases, considering both peak area and the number of identified peaks as responses. The newly proposed optimization approach provides an excellent alternative for extracting analytes with quite different volatilities in a single procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Supian, Sudradjat; Wahyuni, Sri; Nahar, Julita; Subiyanto
2018-01-01
In this paper, the traveling time of workers from the Bandung central post office in delivering packages to destination locations was optimized using the Hungarian method. A sensitivity analysis against data changes that may occur was also conducted. The sampled data in this study are 10 workers to be assigned to deliver mail packages to 10 post office delivery centers in Bandung, namely Cikutra, Padalarang, Ujung Berung, Dayeuh Kolot, Asia-Africa, Soreang, Situ Saeur, Cimahi, Cipedes and Cikeruh. The result of this research is the optimal assignment of the 10 workers to the 10 destination locations, requiring a total traveling time of 387 minutes. Based on this result, the manager of the Bandung central post office can make optimal decisions when assigning tasks to workers.
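The assignment step can be sketched with SciPy's implementation of the Hungarian algorithm. The study's 10 × 10 travel-time matrix is not reproduced here, so a small hypothetical cost matrix stands in for it:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical travel-time matrix (minutes): rows are workers,
# columns are destination post offices. The real 10x10 data from
# the study are not reproduced in the abstract.
cost = np.array([
    [40, 25, 60, 35],
    [30, 45, 50, 20],
    [55, 30, 25, 40],
    [20, 50, 35, 45],
])

# Hungarian algorithm: one worker per destination, minimizing total time.
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
print(dict(zip(rows.tolist(), cols.tolist())), total)
```

Here every worker happens to get their individually fastest destination with no column clashes, so the optimum equals the sum of row minima; in general the algorithm trades off such conflicts globally.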
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.
2012-04-01
This work is driven by the needs of next-generation short-term optimization methodology for hydropower production. Stochastic optimization is about to be introduced, i.e., optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e. water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment perhaps days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e. to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of runoff forecasts as input; hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on a Box-Cox (power) transformation and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample runoff ensembles from this model, which inherits the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, and simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using CRPS for marginal predictive distributions and the energy score for the joint predictive distribution. 
It is tested against a deterministic runoff forecast, a climatology forecast and a persistence forecast, and is found to be the better probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
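A minimal sketch of the Box-Cox step, assuming hypothetical lognormal runoff values and, for simplicity, independent Gaussian draws in transformed space (the paper's temporal-spatial copula, which induces the catchment and lead-time dependencies, is omitted here):

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)

# Hypothetical positive, right-skewed runoff values (m^3/s).
runoff = rng.lognormal(mean=2.0, sigma=0.5, size=500)

# Box-Cox transform toward normality; lambda is estimated by MLE.
z, lam = boxcox(runoff)

# Fit a Gaussian error model in transformed space and draw an ensemble
# (independent draws; a copula would couple catchments and lead times).
mu, sd = z.mean(), z.std(ddof=1)
ensemble_z = rng.normal(mu, sd, size=(100, 10))   # 100 members, 10 lead times

# Back-transform to runoff space (inverse Box-Cox).
ensemble = inv_boxcox(ensemble_z, lam)
print(ensemble.shape)
```

Sampling in the transformed space and inverting guarantees positive, realistically skewed runoff ensembles, which is exactly the property stochastic optimization inputs need.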
Sequence Optimized Real-Time RT-PCR Assay for Detection of Crimean-Congo Hemorrhagic Fever Virus
2017-03-21
Real-time reverse-transcription PCR remains the gold standard for quantitative, sensitive, and specific detection of CCHFV; however, ... five-fold in two different series, and samples were run by real-time RT-PCR in triplicate. The preliminary LOD was the lowest RNA dilution where ...
González-Toledo, E; Prat, M D; Alpendurada, M F
2001-07-20
Solid-phase microextraction (SPME) coupled to high-performance liquid chromatography (HPLC) has been applied to the analysis of priority pollutant phenolic compounds in water samples. Two types of polar fibers [50 microm Carbowax-templated resin (CW-TPR) and 60 microm polydimethylsiloxane-divinylbenzene (PDMS-DVB)] were evaluated. The effects of equilibration time and ionic strength of samples on the adsorption step were studied. The parameters affecting the desorption process, such as desorption mode, solvent composition and desorption time, were optimized. The developed method was used to determine the phenols in spiked river water samples collected in the Douro River, Portugal. Detection limits of 1-10 microg l(-1) were achieved under the optimized conditions.
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although anatomic and positional accuracy of electroanatomic mapping (EAM) systems have been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasingly complex chamber geometry was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. 
Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
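The core idea, that re-interpolation error falls with sampling density and then plateaus, can be illustrated in one dimension with a hypothetical LAT profile (a planar wavefront plus a localized slow-conduction bump; all values are invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D LAT profile along a 4 cm line: linear wavefront
# (10 ms/cm) plus a localized 15 ms slow-conduction bump at x = 2 cm.
x_true = np.linspace(0.0, 4.0, 401)
lat_true = 10.0 * x_true + 15.0 * np.exp(-((x_true - 2.0) ** 2) / 0.1)

def interp_rmse(n_samples):
    """RMSE of a LAT map re-interpolated from n random sample points."""
    xs = np.sort(rng.uniform(0.0, 4.0, n_samples))
    # Pin the endpoints so linear interpolation covers the full line.
    xs = np.concatenate(([0.0], xs, [4.0]))
    ys = 10.0 * xs + 15.0 * np.exp(-((xs - 2.0) ** 2) / 0.1)
    lat_hat = np.interp(x_true, xs, ys)
    return float(np.sqrt(np.mean((lat_hat - lat_true) ** 2)))

errors = {n: interp_rmse(n) for n in (4, 16, 64, 256)}
print(errors)
```

Sparse sampling misses the localized feature entirely, while beyond some density the extra points buy little further error reduction, which is the "optimal sampling density" notion the study quantifies in 2D.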
Cascella, Raffaella; Stocchi, Laura; Strafella, Claudia; Mezzaroma, Ivano; Mannazzu, Marco; Vullo, Vincenzo; Montella, Francesco; Parruti, Giustino; Borgiani, Paola; Sangiuolo, Federica; Novelli, Giuseppe; Pirazzoli, Antonella; Zampatti, Stefania; Giardina, Emiliano
2015-01-01
Our work aimed to designate the optimal DNA source for pharmacogenetic assays, such as the screening for HLA-B*57:01 allele. A saliva and four buccal swab samples were taken from 104 patients. All the samples were stored at different time and temperature conditions and then genotyped for the HLA-B*57:01 allele by SSP-PCR and classical/capillary electrophoresis. The genotyping analysis reported different performance rates depending on the storage conditions of the samples. Given our results, the buccal swab demonstrated to be more resistant and stable in time with respect to the saliva. Our investigation designates the buccal swab as the optimal DNA source for pharmacogenetic assays in terms of resistance, low infectivity, low-invasiveness and easy sampling, and safe transport in centralized medical centers providing specialized pharmacogenetic tests.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
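A minimal D-optimality sketch for a two-parameter mono-exponential model, choosing the pair of sampling times that maximizes the determinant of the Fisher information matrix. The model, nominal parameter values and candidate grid are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from itertools import combinations

# Hypothetical model y(t) = A * exp(-k * t); we pick the two sampling
# times maximizing det(F), F = S^T S with S the sensitivity matrix
# (constant measurement noise assumed). Nominal values A = 100, k = 0.5.
A, k = 100.0, 0.5

def fim_det(times):
    t = np.asarray(times, dtype=float)
    S = np.column_stack([np.exp(-k * t),            # dy/dA
                         -A * t * np.exp(-k * t)])  # dy/dk
    return float(np.linalg.det(S.T @ S))

candidates = np.linspace(0.25, 10.0, 40)            # 0.25 h grid
best = max(combinations(candidates, 2), key=fim_det)
print(best)
```

For this model the criterion factors as A^2 (t2 - t1)^2 exp(-2k(t1 + t2)), so the optimum takes the earliest allowed time plus a second point 1/k later, which the grid search recovers; richer models require the numerical search in earnest.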
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. 
Most common GP models are stationary, e.g., the covariance relationships are time-invariant. In such cases, information gain is independent of previously collected data, and the optimal solution can always be computed in advance. Information-optimal sampling of a stationary GP time series thus reduces to even spacing, and such models are not appropriate for tracking localized anomalies. Additionally, GP model inference can be computationally expensive.
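The claim that information gain for a stationary GP is independent of the observed values can be checked directly: the GP posterior variance uses only the sample locations, never the measured y values. A small numpy sketch (the kernel and locations are illustrative):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Stationary squared-exponential covariance between 1D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def posterior_var(x_obs, x_star, noise=1e-6):
    """GP posterior variance at x_star: depends on x_obs only, not on y."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(x_obs, x_star)
    prior = rbf(x_star, x_star)
    return np.diag(prior - k_star.T @ np.linalg.solve(K, k_star))

x_obs = np.array([0.0, 1.0, 2.5])            # where we have sampled
x_star = np.linspace(0.0, 3.0, 7)            # candidate query points

# No y values appear anywhere: with a stationary kernel the variance
# reduction (information gain) of any future sample is known in advance,
# so the optimal schedule can be precomputed (and tends to even spacing).
v = posterior_var(x_obs, x_star)
print(np.round(v, 4))
```

Making the kernel hyperparameters depend on the data (the nonstationary case) breaks this independence, which is why adaptive, anomaly-tracking sampling requires the nonstationary models the work advocates.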
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimize a response by tuning those factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methods: DoE identifies significant factors and then optimizes a response with respect to them during method development. In this work, a headspace solid-phase micro-extraction (HS-SPME) methodology combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TPhT), has been optimized using a statistical design of experiments. The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivatized compounds followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time and desorption time in the injector. The main factors (excitation voltage, excitation time, ion source temperature, isolation time and electron energy) affecting the GC-IT-MS/MS response were also optimized using the same statistical design of experiments. The proposed method showed good linearity (coefficient of determination R(2)>0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. 
Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time-consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and these differences varied between the sample sets. The optimal wavenumbers selected by the second-derivative spectra were similar for the two sample sets, indicating the efficacy of the selection. Chemometric methods, including partial least squares-discriminant analysis, support vector machines and extreme learning machines, were further used to quantitatively detect oilseed rape leaves infected by SSR. The discriminant models using either the full spectra or the optimal wavenumbers were effective, with classification accuracies over 80% for both sample sets; the results varied between the sets owing to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods to detect SSR on oilseed rape leaves, and the similarity of the selected optimal wavenumbers across sample sets makes it feasible to simplify the models and build practical ones. Mid-infrared spectroscopy is thus a reliable and promising technique for SSR control, and this study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.
Optimization of extended propulsion time nuclear-electric propulsion trajectories
NASA Technical Reports Server (NTRS)
Sauer, C. G., Jr.
1981-01-01
This paper presents the methodology used in optimizing extended propulsion time NEP missions considering realistic thruster lifetime constraints. These missions consist of a powered spiral escape from a 700-km circular orbit at the earth, followed by a powered heliocentric transfer with an optimized coast phase, and terminating in a spiral capture phase at the target planet. This analysis is most applicable to those missions with very high energy requirements such as outer planet orbiter missions or sample return missions where the total propulsion time could greatly exceed the expected lifetime of an individual thruster. This methodology has been applied to the investigation of NEP missions to the outer planets where examples are presented of both constrained and optimized trajectories.
Filipiak, Wojciech; Filipiak, Anna; Ager, Clemens; Wiesenhofer, Helmut; Amann, Anton
2012-06-01
An approach for breath VOC collection and preconcentration using needle traps was developed and optimized. Alveolar air was collected from only a few exhalations under visual control of expired CO(2) into a large gas-tight glass syringe and then warmed to 45 °C for a short time to avoid condensation. Subsequently, a specially constructed sampling device equipped with Bronkhorst® electronic flow controllers was used for automated adsorption. This sampling device allows time-saving collection of expired/inspired air in parallel onto three different needle traps, as well as improved sensitivity and reproducibility of NT-GC-MS analysis through collection of a relatively large (up to 150 ml) volume of exhaled breath. It was shown that collecting alveolar air from only a few exhalations into a large syringe followed by automated adsorption on needle traps yields better results than manual sorption by up/down cycles with a 1 ml syringe, mostly owing to avoided condensation and an electronically controlled, stable sample flow rate. The optimal needle-trap profile and composition consists of 2 cm Carbopack X and 1 cm Carboxen 1000, allowing highly efficient VOC enrichment, while injection by a fast expansive flow technique requires no modification of instrumentation, and fully automated GC-MS analysis can be performed with a commercially available autosampler. This optimized analytical procedure considerably facilitates the collection and enrichment of alveolar air and is therefore suitable for application at the bedside of critically ill patients in an intensive care unit. Owing to its simplicity, it can replace the time-consuming sampling of sufficient breath volume by numerous up/down cycles with a 1 ml syringe.
NASA Astrophysics Data System (ADS)
Zawadowicz, M. A.; Del Negro, L. A.
2010-12-01
Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.
NASA Astrophysics Data System (ADS)
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. 
However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S
2013-07-01
Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with a known underlying distribution, using as data points either the lower, mid or upper bounds of the sampling intervals, or the cumulative distribution of observed values (with either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fitting to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull) and was more robust to variations in sample size and sampling time-intervals. The distributions estimated this way deviated from the original distributions, from which propagule retention times were simulated, by no more than 0.045 in cumulative probability (Kolmogorov-Smirnov statistic), supporting the overall accuracy of this fitting method. In contrast, fitting to the sampling-interval bounds produced deviations of 0.058 to 0.273 in cumulative probability, which may introduce considerable biases in parameter estimates.
We recommend fitting parametric probability distributions to propagule retention time via the cumulative probabilities, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should include at least 500 recovered propagules and sampling time-intervals no longer than the time of peak propagule retrieval, except in the tail of the distribution, where broader sampling time-intervals may also produce accurate fits.
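The recommended approach, maximum likelihood on the interval-censored (cumulative) data, can be sketched in a few lines. This is an illustration, not the authors' code: the lognormal parameters, 2 h sampling intervals and 500 simulated propagules are assumed for demonstration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
true_shape, true_scale = 0.5, 10.0            # lognormal parameters (median 10 h)
times = stats.lognorm.rvs(true_shape, scale=true_scale, size=500, random_state=rng)

edges = np.arange(0.0, 61.0, 2.0)             # 2 h sampling time-intervals
counts, _ = np.histogram(times, bins=edges)   # interval-censored observations

def neg_log_lik(params):
    """Multinomial log-likelihood of the interval counts under a lognormal."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    p = np.diff(stats.lognorm.cdf(edges, shape, scale=scale))
    return -(counts * np.log(np.clip(p, 1e-300, None))).sum()

fit = optimize.minimize(neg_log_lik, x0=[1.0, 5.0], method="Nelder-Mead")
shape_hat, scale_hat = fit.x                  # should recover roughly 0.5 and 10
```

Fitting the interval probabilities (differences of the CDF at the interval bounds) uses all the information in the discretized data, which is why it outperforms treating the lower, mid or upper bounds as exact observations.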
Hamedi, Raheleh; Hadjmohammadi, Mohammad Reza
2017-09-01
A novel design of hollow-fiber liquid-phase microextraction containing multiwalled carbon nanotubes as a solid sorbent, which is immobilized in the pore and lumen of hollow fiber by the sol-gel technique, was developed for the pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. The proposed method utilized both solid- and liquid-phase microextraction media. Parameters that affect the extraction of polycyclic aromatic hydrocarbons were optimized in two successive steps as follows. Firstly, a methodology based on a quarter factorial design was used to choose the significant variables. Then, these significant factors were optimized utilizing central composite design. Under the optimized condition (extraction time = 25 min, amount of multiwalled carbon nanotubes = 78 mg, sample volume = 8 mL, and desorption time = 5 min), the calibration curves showed high linearity (R² = 0.99) in the range of 0.01-500 ng/mL and the limits of detection were in the range of 0.007-1.47 ng/mL. The obtained extraction recoveries for 10 ng/mL of polycyclic aromatic hydrocarbons standard solution were in the range of 85-92%. Replicating the experiment under these conditions five times gave relative standard deviations lower than 6%. Finally, the method was successfully applied for pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
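A central composite design of the kind used in the second optimization step can be generated directly. The sketch below is illustrative (coded units, with the study's four factors assumed); it builds the standard factorial, axial and center points.

```python
import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=1):
    """Coded points of a k-factor central composite design:
    2**k factorial corners, 2k axial (star) points at +/-alpha, center runs."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25        # rotatability criterion
    corners = np.array(list(itertools.product((-1.0, 1.0), repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack((corners, axial, center))

# Four factors: extraction time, sorbent amount, sample volume, desorption time
design = central_composite(4)
```

For k = 4 factors the rotatable design has 16 corner points, 8 axial points at ±(2⁴)^(1/4) = ±2, plus the center run(s); each coded row is then mapped back to the physical factor ranges.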
Calvano, C D; Aresta, A; Iacovone, M; De Benedetto, G E; Zambonin, C G; Battaglia, M; Ditonno, P; Rutigliano, M; Bettocchi, C
2010-03-11
Protein analysis in biological fluids, such as urine, by means of mass spectrometry (MS) still suffers from insufficient standardization of protocols for sample collection, storage and preparation. In this work, the influence of these variables on the protein profiling of urine from healthy human donors by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was studied. A screening of various urine sample pre-treatment procedures and different sample deposition approaches on the MALDI target was performed. The influence of urine sample storage time and temperature on spectral profiles was evaluated by means of principal component analysis (PCA). The whole optimized procedure was eventually applied to the MALDI-TOF-MS analysis of urine samples taken from prostate cancer patients. The best results in terms of the number and abundance of ions detected in the MS spectra were obtained by using home-made microcolumns packed with hydrophilic-lipophilic balance (HLB) resin as the sample pre-treatment method; this procedure was also less expensive and suitable for high-throughput analyses. Afterwards, the spin-coating approach for sample deposition on the MALDI target plate was optimized, yielding homogeneous and reproducible spots. PCA then indicated that low storage temperatures of acidified and centrifuged samples, together with short handling times, allowed reproducible profiles to be obtained without artifact contributions from the experimental conditions. Finally, interesting differences were found by comparing the MALDI-TOF-MS protein profiles of pooled urine samples from healthy donors and prostate cancer patients. The results showed that analytical and pre-analytical variables are crucial to the success of urine analysis and to obtaining meaningful and reproducible data, even though intra-patient variability is very difficult to avoid.
Pooled urine samples proved to be a convenient way of simplifying the comparison between healthy and pathological samples and of identifying possible differences in protein expression between the two sets of samples. Copyright 2009 Elsevier B.V. All rights reserved.
Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.
Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza
2017-10-01
An interesting mode of chromatography for the preparation of pure enantiomers from racemic samples is stacked injection, a pseudo-continuous procedure. Maximum throughput and minimal production costs can be achieved by exploiting the total chiral column length in this mode of chromatography. Sample loading is usually maximized to the point at which the bands of the two enantiomers just touch. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for anyone seeking the highest throughput is "How should the different factors, including selectivity, resolution, run time, and sample loading, be optimized to save time without losing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethylphenyl carbamate as two pure racemic mixtures with low and high solubility in the mobile phase, respectively. The mobile phase consisted of n-hexane with an alcohol modifier and diethylamine as additive. A response surface methodology based on central composite design was used to optimize the separation factors against the main responses. According to the stacked-injection properties, two processes were investigated for maximizing throughput: one with a poorly soluble and one with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. Resolution proved to be a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures with low solubility in the mobile phase. © 2017 Wiley Periodicals, Inc.
Mridula, D; Gupta, R K; Bhadwal, Sheetal; Khaira, Harjot; Tyagi, S K
2016-04-01
The present study was undertaken to optimize the levels of three food materials, namely groundnut meal, beetroot juice and refined wheat flour, for the development of a nutritious pasta using response surface methodology. A Box-Behnken design of experiments was used to generate experimental combinations spanning 10 to 20 g groundnut meal, 6 to 18 mL beetroot juice and 80 to 90 g refined wheat flour. Quality attributes such as protein content, antioxidant activity, colour, cooking quality (solid loss, rehydration ratio and cooking time) and sensory acceptability of the pasta samples were the dependent variables. The results revealed that pasta samples with higher levels of groundnut meal and beetroot juice were higher in antioxidant activity and overall sensory acceptability. Samples with a higher content of groundnut meal had higher protein contents. On the other hand, samples with a higher beetroot juice content showed a higher rehydration ratio and shorter cooking time, along with lower solid loss in the cooking water. The levels of the studied food materials significantly affected the colour quality of the pasta samples. The optimized combination for the nutritious pasta consisted of 20 g groundnut meal, 18 mL beetroot juice and 83.49 g refined wheat flour, with an overall desirability of 0.905. This pasta required 5.5 min to cook and showed 1.37 % solid loss and a rehydration ratio of 6.28. Pasta prepared following the optimized formulation provided 19.56 % protein content, 23.95 % antioxidant activity and 125.89 mg/100 g total phenols, with an overall sensory acceptability score of 8.71.
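The overall desirability of 0.905 reported above is obtained from Derringer-type desirability functions combined as a geometric mean; a minimal sketch follows (the response ranges used here are hypothetical, not those fitted in the study).

```python
def desirability_max(y, low, high):
    """Derringer-Suich 'larger is better': 0 at/below low, 1 at/above high."""
    return min(1.0, max(0.0, (y - low) / (high - low)))

def desirability_min(y, low, high):
    """'Smaller is better': 1 at/below low, 0 at/above high."""
    return min(1.0, max(0.0, (high - y) / (high - low)))

def overall_desirability(*ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

The optimizer then searches the factor space (here, the Box-Behnken model predictions) for the combination maximizing the overall desirability, e.g. maximizing protein and antioxidant activity while minimizing solid loss.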
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are unnecessary. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Optimizing a dynamical decoupling protocol for solid-state electronic spin ensembles in diamond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farfurnik, D.; Jarmola, A.; Pham, L. M.
2015-08-24
In this study, we demonstrate significant improvements in the spin coherence time of a dense ensemble of nitrogen-vacancy (NV) centers in diamond through optimized dynamical decoupling (DD). Cooling the sample to 77 K suppresses longitudinal spin relaxation (T1) effects, and DD microwave pulses are used to increase the transverse coherence time T2 from ~0.7 ms up to ~30 ms. Furthermore, we extend previous work on single-axis (Carr-Purcell-Meiboom-Gill) DD towards the preservation of arbitrary spin states. Following a theoretical and experimental characterization of pulse and detuning errors, we compare the performance of various DD protocols. We identify that the optimal control scheme for preserving an arbitrary spin state is a recursive protocol, the concatenated version of the XY8 pulse sequence. The improved spin coherence should have an immediate impact on the sensitivity of AC magnetometry. Moreover, the protocol can be used on denser diamond samples to increase coherence times up to NV-NV interaction time scales, a major step towards the creation of quantum collective NV spin states.
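The XY8 building block referred to above is a fixed pattern of π-pulse phases; the sketch below is schematic (pulse timings are omitted, and the concatenation rule is a generic illustration, not necessarily the exact recursion used in the paper).

```python
def xy8_phases(repeats):
    """Phases (degrees) of the pi pulses in the XY8 cycle X-Y-X-Y-Y-X-Y-X."""
    X, Y = 0.0, 90.0
    return [X, Y, X, Y, Y, X, Y, X] * repeats

def concatenate(inner, base):
    """Schematic one-level concatenation: the lower-level sequence is played
    before every pulse of the base cycle (illustrative CDD construction)."""
    out = []
    for pulse in base:
        out.extend(inner)
        out.append(pulse)
    return out
```

Alternating X and Y pulse axes is what makes the sequence robust to pulse errors for arbitrary initial states, unlike single-axis CPMG, which protects only one spin component.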
NASA Astrophysics Data System (ADS)
Senba, Y.; Nagasono, M.; Koyama, T.; Yumoto, H.; Ohashi, H.; Tono, K.; Togashi, T.; Inubushi, Y.; Sato, T.; Yabashi, M.; Ishikawa, T.
2013-03-01
Optimization of focusing conditions is important in free-electron laser applications. A time-of-flight mass analyzer has been designed and constructed for this purpose. The time-of-flight spectra of ionic species evolved from laser ablation of gold were measured. The yields of ionic species showed strong correlations with the free-electron-laser intensity. This method conveniently allows direct estimation of the laser intensity on the sample and determination of the focusing position.
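The mass-to-flight-time relation underlying any linear TOF analyzer is t = L·sqrt(m/(2qU)); a generic sketch follows (the accelerating voltage and drift length are placeholders, not the specifications of this instrument).

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
AMU = 1.66053906660e-27       # atomic mass unit, kg

def tof_seconds(mass_amu, charge_states, accel_volts, drift_m):
    """Ideal linear TOF: qU = m v^2 / 2, hence t = L * sqrt(m / (2 q U))."""
    m = mass_amu * AMU
    q = charge_states * E_CHARGE
    return drift_m * math.sqrt(m / (2.0 * q * accel_volts))
```

Since t is proportional to sqrt(m/z), Au⁺ (m/z ≈ 197) arrives later than lighter fragment ions, which is how species in the ablation plume are separated in the recorded spectra.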
Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai
2017-01-01
While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. 
Depressive mood partially explained the effect of sleep quality on optimism, whereas anxiety and stress symptoms were mechanisms bridging optimism to sleep quality. This was the first study examining the complex relationships among sleep quality, optimism, and mood symptoms altogether longitudinally in a student sample. Implications on prevention and intervention for sleep problems and mood disorders are discussed.
Instrument for Real-Time Digital Nucleic Acid Amplification on Custom Microfluidic Devices
Selck, David A.
2016-01-01
Nucleic acid amplification tests that are coupled with a digital readout enable the absolute quantification of single molecules, even at ultralow concentrations. Digital methods are robust, versatile and compatible with many amplification chemistries including isothermal amplification, making them particularly invaluable to assays that require sensitive detection, such as the quantification of viral load in occult infections or detection of sparse amounts of DNA from forensic samples. A number of microfluidic platforms are being developed for carrying out digital amplification. However, the mechanistic investigation and optimization of digital assays has been limited by the lack of real-time kinetic information about which factors affect the digital efficiency and analytical sensitivity of a reaction. Commercially available instruments that are capable of tracking digital reactions in real-time are restricted to only a small number of device types and sample-preparation strategies. Thus, most researchers who wish to develop, study, or optimize digital assays rely on the rate of the amplification reaction when performed in a bulk experiment, which is now recognized as an unreliable predictor of digital efficiency. To expand our ability to study how digital reactions proceed in real-time and enable us to optimize both the digital efficiency and analytical sensitivity of digital assays, we built a custom large-format digital real-time amplification instrument that can accommodate a wide variety of devices, amplification chemistries and sample-handling conditions. Herein, we validate this instrument, we provide detailed schematics that will enable others to build their own custom instruments, and we include a complete custom software suite to collect and analyze the data retrieved from the instrument. 
We believe assay optimizations enabled by this instrument will improve the current limits of nucleic acid detection and quantification, improving our fundamental understanding of single-molecule reactions and providing advancements in practical applications such as medical diagnostics, forensics and environmental sampling. PMID:27760148
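Absolute quantification in digital amplification rests on Poisson statistics over the partitions: the mean number of copies per partition is λ = -ln(1 - p), with p the fraction of positive partitions. A minimal sketch (the partition volume is a placeholder):

```python
import math

def mean_copies_per_partition(n_positive, n_total):
    """Poisson estimator for digital amplification: lambda = -ln(1 - p)."""
    p = n_positive / n_total
    if p >= 1.0:
        raise ValueError("all partitions positive: concentration not quantifiable")
    return -math.log(1.0 - p)

def concentration_per_ul(n_positive, n_total, partition_nl):
    """Copies per microliter, given the partition volume in nanoliters."""
    return mean_copies_per_partition(n_positive, n_total) / (partition_nl * 1e-3)
```

The correction accounts for partitions that received more than one molecule, which is why a simple count of positive partitions underestimates concentration at high occupancy.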
Optimizing direct amplification of forensic commercial kits for STR determination.
Caputo, M; Bobillo, M C; Sala, A; Corach, D
2017-04-01
Direct DNA amplification in forensic genotyping reduces analytical time when large sample sets are being analyzed. The amplification success depends mainly upon two factors: on one hand, the PCR chemistry and, on the other, the type of solid substrate where the samples are deposited. We developed a workflow strategy aiming to optimize times and cost when starting from blood samples spotted onto diverse absorbent substrates. A set of 770 blood samples spotted onto Blood cards, Whatman ® 3 MM paper, FTA™ Classic cards, and Whatman ® Grade 1 was analyzed by a unified working strategy including a low-cost pre-treatment, a PCR amplification volume scale-down, and the use of the 3500 Genetic Analyzer as the analytical platform. Samples were analyzed using three different commercial multiplex STR direct amplification kits. The efficiency of the strategy was evidenced by a higher percentage of high-quality profiles obtained (over 94%), a reduced number of re-injections (average 3.2%), and a reduced amplification failure rate (lower than 5%). Average peak height ratio among different commercial kits was 0.91, and the intra-locus balance showed values ranging from 0.92 to 0.94. A comparison with previously reported results was performed demonstrating the efficiency of the proposed modifications. The protocol described herein showed high performance, producing optimal quality profiles, and being both time and cost effective. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
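The average peak height ratio of 0.91 reported above is the standard heterozygote-balance metric, the smaller allele peak divided by the larger; a one-line sketch (peak heights are hypothetical RFU values):

```python
def peak_height_ratio(h_a, h_b):
    """Heterozygote balance: smaller allele peak over larger (1.0 = balanced)."""
    lo, hi = sorted((float(h_a), float(h_b)))
    return lo / hi

def mean_phr(pairs):
    """Average peak height ratio across heterozygous loci."""
    return sum(peak_height_ratio(a, b) for a, b in pairs) / len(pairs)
```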
Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki
The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. Through quantitative evaluation, the paper demonstrates that hybrid multithreading achieves compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
Optimal measurement counting time and statistics in gamma spectrometry analysis: The time balance
Joel, Guembou Shouop Cebastien; Penabei, Samafou; Maurice, Ndontchueng Moyo; Gregoire, Chene; Jilbert, Nguelem Mekontso Eric; Didier, Takoukam Serge; Werner, Volker; David, Strivay
2017-01-01
The optimal measurement counting time for gamma-ray spectrometry analysis using HPGe detectors was determined in our laboratory by comparing a twelve-hour counting measurement made during the day with a twelve-hour counting measurement made at night. The day spectrum did not fully reproduce the night spectrum for the same sample; the perturbation was attributed to sunlight. Several investigations made the conclusion clear: to remove all effects of radiation external to the system (from the earth, the sun, and the cosmos), the background must be measured for 24, 48 or 72 hours. In the same way, samples must be measured for 24, 48 or 72 hours so that day and night contributions are balanced and the measurement is free of this perturbation. Likewise, a background measured in winter should not be used in summer. Whatever the energy of the radionuclide sought, it is clear that the most important steps of a gamma spectrometry measurement are the preparation of the sample and the calibration of the detector.
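The "time balance" between sample and background counting follows from classical counting statistics: for a fixed total time, the variance of the net count rate is minimized when the times are split as t_g/t_b = sqrt(r_g/r_b), with r_g and r_b the gross and background count rates. A sketch (count rates are hypothetical):

```python
import math

def split_counting_time(total_time, gross_rate, background_rate):
    """Optimal division of a fixed total counting time between the gross
    (sample + background) count and the background count: the variance of
    the net rate r_g - r_b is minimized when t_g / t_b = sqrt(r_g / r_b)."""
    ratio = math.sqrt(gross_rate / background_rate)
    t_background = total_time / (1.0 + ratio)
    return total_time - t_background, t_background
```

With a strong source most of the time goes to the sample spectrum; with near-background activities the split approaches 50/50, consistent with the long, equal background and sample runs recommended above.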
Fong, Erika J.; Huang, Chao; Hamilton, Julie
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Qi, Shengqi; Hou, Deyi; Luo, Jian
2017-09-01
This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.
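For the well-volume sampling method discussed above, the required purging time scales with the volume of the standing water column; a minimal sketch (the well geometry, number of well volumes and purge rate are hypothetical, not values from the study):

```python
import math

def purge_time_minutes(well_diameter_m, water_column_m, n_volumes, rate_l_min):
    """Time to purge n standing well volumes at a given pumping rate
    (well-volume method); the well is treated as a simple cylinder."""
    volume_l = math.pi * (well_diameter_m / 2.0) ** 2 * water_column_m * 1000.0
    return n_volumes * volume_l / rate_l_min
```

This makes concrete why low-flow purging can be much faster than the well-volume method in long-screened wells, provided indicators such as DO confirm that fresh formation water is being drawn.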
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
The communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on optimizing the number and distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions, and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized by means of a Principal Component Analysis. The number and positions of the field samples are then determined by optimizing the behaviour of the singular values of the relevant operator.
Optimal CCD readout by digital correlated double sampling
Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.
2016-01-01
Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
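At its core, digital CDS averages the oversampled reset and signal pedestals and differences the means; the sketch below is deliberately minimal (a plain boxcar digital filter, ignoring the signal-conditioning stage, ADC quantization and 1/f-noise weighting analysed in the paper).

```python
def dcds(reset_samples, signal_samples):
    """Digital CDS with a boxcar filter: average each pedestal, then
    difference the means, cancelling the common reset (kTC) offset."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(signal_samples) - mean(reset_samples)
```

Averaging N white-noise samples per pedestal reduces the variance of each mean by a factor of N; the trade-offs against readout time and 1/f noise are exactly what the closed-form SNR expression in the paper captures.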
Montesdeoca-Esponda, Sarah; Sosa-Ferrera, Zoraida; Kabir, Abuzar; Furton, Kenneth G; Santana-Rodríguez, José Juan
2015-10-01
A fast and sensitive sample preparation strategy using fabric phase sorptive extraction followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection has been developed to analyse benzotriazole UV stabilizer compounds in aqueous samples. Benzotriazole UV stabilizer compounds are a group of compounds added to sunscreens and other personal care products which may present detrimental effects to aquatic ecosystems. Fabric phase sorptive extraction is a novel solvent minimized sample preparation approach that integrates the advantages of sol-gel derived hybrid inorganic-organic nanocomposite sorbents and the flexible, permeable and hydrophobic surface chemistry of polyester fabric. It is a highly sensitive, fast, efficient and inexpensive device that can be reused and does not suffer from coating damage, unlike SPME fibres or stir bars. In this paper, we optimized the extraction of seven benzotriazole UV filters evaluating the majority of the parameters involved in the extraction process, such as sorbent chemistry selection, extraction time, back-extraction solvent, back-extraction time and the impact of ionic strength. Under the optimized conditions, fabric phase sorptive extraction allows enrichment factors of 10 times with detection limits ranging from 6.01 to 60.7 ng L(-1) and intra- and inter-day % RSDs lower than 11 and 30 % for all compounds, respectively. The optimized sample preparation technique followed by ultra-high-performance liquid chromatography and tandem mass spectrometry detection was applied to determine the target analytes in sewage samples from wastewater treatment plants with different purification processes of Gran Canaria Island (Spain). Two UV stabilizer compounds were measured in ranges 17.0-60.5 ng mL(-1) (UV 328) and 69.3-99.2 ng mL(-1) (UV 360) in the three sewage water samples analysed.
Heller, Melina; Vitali, Luciano; Oliveira, Marcone Augusto Leal; Costa, Ana Carolina O; Micke, Gustavo Amadeu
2011-07-13
The present study aimed to develop a capillary electrophoresis methodology for the determination of sinapaldehyde, syringaldehyde, coniferaldehyde, and vanillin in whiskey samples. The main objective was to obtain a screening method to differentiate authentic samples from seized samples suspected of being counterfeit, using the phenolic aldehydes as chemical markers. The optimized background electrolyte was composed of 20 mmol L(-1) sodium tetraborate with 10% MeOH at pH 9.3. The study examined two kinds of sample stacking using the long-end injection mode: normal sample stacking (NSM) and sample stacking with matrix removal (SWMR). In SWMR, the optimized sample injection time was 42 s (SWMR42); at this time, no matrix effects were observed. Values of r were >0.99 for both methods. The LOD and LOQ were better than 100 and 330 mg mL(-1) for NSM and better than 22 and 73 mg L(-1) for SWMR. The reliability of CE-UV for aldehyde analysis in real samples was compared statistically with an LC-MS/MS methodology, and no significant differences were found within a 95% confidence interval between the methodologies.
Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.
2017-03-01
Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code, and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account and be extensively validated.
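The thermal dose formula mentioned above is the Sapareto-Dewey cumulative equivalent minutes at 43 °C, CEM43 = Σ R^(43−T)·Δt with R = 0.5 above the 43 °C breakpoint and R = 0.25 below; a direct sketch:

```python
def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dewey) for a sampled
    temperature history: CEM43 = sum R**(43 - T) * dt, in minutes."""
    total_min = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total_min += (r ** (43.0 - t)) * dt_s / 60.0
    return total_min
```

A dose of 240 CEM43 is a commonly used threshold for thermal necrosis, which is why uniform dose deposition along the optimized trajectory matters: underdosed regions survive while overdosed ones risk boiling.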
Pozzi, P; Wilding, D; Soloviev, O; Verstraete, H; Bliek, L; Vdovin, G; Verhaegen, M
2017-01-23
The quality of fluorescence microscopy images is often impaired by the presence of sample-induced optical aberrations. Adaptive optical elements such as deformable mirrors or spatial light modulators can be used to correct aberrations. However, previously reported techniques either require special sample preparation or time-consuming optimization procedures for the correction of static aberrations. This paper reports a technique for optical-sectioning fluorescence microscopy capable of correcting dynamic aberrations in any fluorescent sample during the acquisition. This is achieved by implementing adaptive optics in a non-conventional confocal microscopy setup with multiple programmable confocal apertures, in which out-of-focus light can be separately detected and used to optimize the correction performance at a sampling frequency an order of magnitude faster than the imaging rate of the system. The paper reports results comparing the correction performance to traditional image optimization algorithms, and demonstrates how the system can compensate for dynamic changes in the aberrations, such as those introduced during a focal stack acquisition through a thick sample.
Evaluation of the volatile profile of Tuber liyuanum by HS-SPME with GC-MS.
Liu, Changjiao; Li, Yu
2017-04-01
The volatile components of Tuber liyuanum were determined by HS-SPME with GC-MS for the first time. The effects of different fibre coatings, extraction times, extraction temperatures and sample amounts were studied to find the optimal extraction conditions. The optimal conditions were a Carboxen/PDMS SPME fibre, an extraction time of 40 min, an extraction temperature of 80 °C and a sample amount of 2 g. Under these conditions, 57 compounds in the volatile profile of T. liyuanum were detected with a resemblance percentage above 80%. Aldehydes and aromatics were the main chemical families identified. The contributions of 3-octanone (11.67%), phenylethyl alcohol (10.60%), isopentana (9.29%) and methylbutana (8.06%) to the total volatile profile were more significant in T. liyuanum than those of other compounds.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 μs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the signal sensitivity improvement is discussed.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim of filling this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that a short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. Under optimal parameter settings, MZmine 2 detected 75-100% of the peaks found by manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
X-Ray Imaging Applied to Problems in Planetary Materials
NASA Technical Reports Server (NTRS)
Jurewicz, A. J. G.; Mih, D. T.; Jones, S. M.; Connolly, H.
2000-01-01
Real-time radiography (X-ray imaging) can be a useful tool for tasks such as (1) the non-destructive, preliminary examination of opaque samples and (2) optimizing how to section opaque samples for more traditional microscopy and chemical analysis.
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
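The particle swarm optimizer mentioned above can be sketched in a few lines. This is a generic minimizer run on a toy function, not the paper's D-optimality criterion; the inertia and acceleration constants (w, c1, c2) are common textbook defaults, not values from the study.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: minimize f over box bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the update to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy design criterion: a shifted sphere with its optimum at (1, -2).
random.seed(0)
best, val = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                [(-5.0, 5.0), (-5.0, 5.0)])
```

In the optimal design setting, f would instead evaluate the (negative log) determinant of the Fisher information matrix for a candidate set of doses and allocations.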
NASA Astrophysics Data System (ADS)
Saffari, Hamid; Sohrabi, Beheshteh; Noori, Mohammad Reza; Bahrami, Hamid Reza Talesh
2018-03-01
A single-step anodizing process is used to produce micro-nano structures on aluminum (1050) substrates with sulfuric acid as the electrolyte. The surface energy of the anodized layer is then reduced by stearic acid modification. The effects of different parameters, including anodizing time, electrical current, and type and concentration of electrolyte, on the final contact angle are systematically studied and optimized. Results show that an anodizing current of 0.41 A, an electrolyte (sulfuric acid) concentration of 15 wt.% and an anodizing time of 90 min are the optimal conditions, giving a contact angle as high as 159.2° and a sliding angle lower than 5°. Moreover, the study reveals that adding oxalic acid to the sulfuric acid cannot enhance the superhydrophobicity of the samples. Scanning electron microscopy images of the samples show that irregular (bird's nest) structures are present on the surface instead of the highly ordered honeycomb structures expected from a normal anodizing process. Additionally, X-ray diffraction analysis of the samples shows that only amorphous structures are present on the surface. The Brunauer-Emmett-Teller (BET) specific surface area of the anodized layer is 2.55 m2 g-1 under the optimal conditions. The surface keeps its hydrophobicity in air and in deionized water (DIW) after one week and 12 weeks, respectively.
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on real-time physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
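A minimal one-dimensional Bayesian optimization loop of the kind described above can be sketched with a tiny Gaussian-process surrogate and a lower-confidence-bound acquisition rule. The cost function, kernel length scale and acquisition constant here are illustrative assumptions, not the study's metabolic-cost pipeline.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def bayes_opt(f, lo, hi, n_init=3, n_iter=10, noise=1e-6, seed=0):
    """Minimize f on [lo, hi] with a GP surrogate and a lower-confidence bound."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)                 # GP posterior mean
        v = np.linalg.solve(K, Ks.T)
        var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 1e-12, None)
        lcb = mu - 2.0 * np.sqrt(var)                   # exploit low mean, explore high variance
        x_next = grid[np.argmin(lcb)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))                     # "run the trial" at x_next
    i = int(np.argmin(y))
    return X[i], y[i]

# Toy stand-in for a metabolic-cost curve with its minimum at step frequency 0.6.
x_best, y_best = bayes_opt(lambda x: (x - 0.6) ** 2, 0.0, 1.0)
```

In the experiments described above, each call to f would be a noisy metabolic measurement, which is exactly why a noise-tolerant, sample-efficient acquisition loop pays off.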
NASA Astrophysics Data System (ADS)
Cao, Jin; Jiang, Zhibin; Wang, Kangzhou
2017-07-01
Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with an optimized least squares support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
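Phase space reconstruction as used above is typically a Takens delay embedding: each scalar observation is replaced by a vector of lagged values. A minimal sketch follows; the embedding dimension and delay are illustrative choices, since the paper's values are not given.

```python
import numpy as np

def phase_space_reconstruct(series, dim=3, tau=2):
    """Takens delay embedding: map a scalar series to dim-dimensional state vectors.

    Row k is (x[k], x[k+tau], ..., x[k+(dim-1)*tau]).
    """
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Ten demand observations embedded with dim=3, tau=2 yield 6 state vectors,
# which would then be fed to the LSSVM as enriched training samples.
demand = [5, 7, 6, 9, 8, 11, 10, 13, 12, 15]
X = phase_space_reconstruct(demand, dim=3, tau=2)  # shape (6, 3)
```

Each row of X pairs a past state with the value to be predicted, which is how the limited demand history is expanded into a richer training set.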
A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)
NASA Astrophysics Data System (ADS)
Li, Minghui; Hayward, Gordon
2017-02-01
The matched filter was demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse-grained materials, provided that the filter is properly designed and optimized. In the literature, the design utilized the real excitation signals in order to accurately approximate the defect echoes, which made it time-consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals; the control parameters are chosen and optimized based on the real scenario of the array transducer, the transmitter-receiver system response, and the test sample, so that the filter response is optimized and depends on the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.
Salahinejad, Maryam; Aflaki, Fereydoon
2011-06-01
Dispersive liquid-liquid microextraction followed by inductively coupled plasma-optical emission spectrometry has been investigated for the determination of Cd(II) ions in water samples. Ammonium pyrrolidine dithiocarbamate was used as the chelating agent. Several factors influencing the microextraction efficiency of Cd(II) ions, such as extracting and dispersing solvent type and their volumes, pH, sample volume, and salting effect, were optimized. The optimization was performed both via one-variable-at-a-time and central composite design methods, and the optimum conditions were selected. Both optimization methods showed nearly the same results: sample size 5 mL; dispersive solvent ethanol; dispersive solvent volume 2 mL; extracting solvent chloroform; extracting solvent volume 200 μL; pH and salt amount do not significantly affect the microextraction efficiency. The limits of detection and quantification were 0.8 and 2.5 ng/L, respectively. The relative standard deviation for five replicate measurements of 0.50 mg/L of Cd(II) was 4.4%. The recoveries for spiked real samples from tap, mineral, river, dam, and sea waters ranged from 92.2% to 104.5%.
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters with the ultra-low background liquid scintillation spectrometer Quantulus 1220, the sample/scintillant ratio, the choice of scintillation cocktail (comparing efficiency, background and minimal detectable activity, MDA), the effect of chemi- and photoluminescence, and the scintillant/vial combination were studied. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last sample preparation, however, a serious quench effect in the sample count rates was noticed, which could be a consequence of contamination by DMSO. The goal of this paper is to demonstrate the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method, while we deal with the problem of neutralizing the DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, the tritium level was determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
IPO: a tool for automated optimization of XCMS parameters.
Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph
2015-04-16
Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, free of labeling steps, and applicable to data from different kinds of samples, from different methods of liquid chromatography-high resolution mass spectrometry and from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable 13C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are generated by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and a test set. IPO resulted in an increase of reliable groups (146%-361%), a decrease of non-reliable groups (3%-8%) and a reduction of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices.
We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
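The core idea, that the variance of a time-averaged estimate depends on the number of effectively independent samples relative to the integral time scale, can be sketched as follows. This is a simplified illustration of the general principle, not the paper's exact ADCP error equation.

```python
def mean_variance(sigma2, integral_time, sampling_interval, n_samples):
    """Approximate variance of a time-averaged flow estimate.

    If consecutive samples are separated by more than the integral time scale
    of the sampled flow field, they are treated as uncorrelated and the usual
    sigma^2 / N applies; otherwise only about one sample per integral time
    scale is effectively independent, inflating the variance of the mean.
    """
    if sampling_interval >= integral_time:
        n_eff = n_samples
    else:
        n_eff = max(1.0, n_samples * sampling_interval / integral_time)
    return sigma2 / n_eff

# 100 uncorrelated samples: the variance of the mean drops by a factor of 100.
print(mean_variance(4.0, integral_time=1.0, sampling_interval=2.0, n_samples=100))  # → 0.04
```

Choosing an exposure time then amounts to picking n_samples (at a given sampling interval) large enough to bring this variance below a target error.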
Xu, Wei; Chu, Kedan; Li, Huang; Zhang, Yuqin; Zheng, Haiyin; Chen, Ruilan; Chen, Lidian
2012-12-03
An ionic liquid (IL)-based microwave-assisted approach for the extraction and determination of flavonoids from Bauhinia championii (Benth.) Benth. was proposed for the first time. Several ILs with different cations and anions and the microwave-assisted extraction (MAE) conditions, including sample particle size, extraction time and liquid-to-solid ratio, were investigated. A 2 M 1-butyl-3-methylimidazolium bromide ([bmim]Br) solution with 0.80 M HCl was selected as the optimal solvent. The optimized conditions were a liquid-to-solid ratio of 30:1 and extraction for 10 min at 70 °C. Compared with conventional heat-reflux extraction (CHRE) and regular MAE, IL-MAE exhibited a higher extraction yield and a shorter extraction time (from 1.5 h to 10 min). The optimized extraction samples were analysed by LC-MS/MS. IL extracts of Bauhinia championii (Benth.) Benth. consisted mainly of flavonoids, among which myricetin, quercetin, kaempferol, β-sitosterol, triacontane and hexacontane were identified. The study indicated that IL-MAE is an efficient and rapid method with simple sample preparation. LC-MS/MS was also used to determine the chemical composition of the ethyl acetate/MAE extract of Bauhinia championii (Benth.) Benth., and it may become a rapid method for determining the composition of new plant extracts.
Vázquez Blanco, E; López Mahía, P; Muniategui Lorenzo, S; Prada Rodríguez, D; Fernández Fernández, E
2000-02-01
Microwave energy was applied to extract polycyclic aromatic hydrocarbons (PAHs) and linear aliphatic hydrocarbons (LAHs) from marine sediments. The influence of experimental conditions, such as different extracting solvents and mixtures, microwave power, irradiation time and the number of samples extracted per run, has been tested using real marine sediment samples; the volume of the solvent, sample quantity and matrix effects were also evaluated. The yield of extracted compounds obtained by microwave irradiation was compared with that obtained using traditional Soxhlet extraction. The best results were achieved with a mixture of acetone and hexane (1:1), and recoveries ranged from 92 to 106%. The extraction time depends on the irradiation power and the number of samples extracted per run: when the irradiation power was set to 500 W, the extraction times varied from 6 min for 1 sample to 18 min for 8 samples. Analytical determinations were carried out by high-performance liquid chromatography (HPLC) with an ultraviolet-visible photodiode-array detector for PAHs and by gas chromatography (GC) with flame ionization detection (FID) for LAHs. To test the accuracy of the microwave-assisted extraction (MAE) technique, the optimized methodology was applied to the analysis of a standard reference material (SRM 1941), obtaining acceptable results.
New Grandparents' Mental Health: The Protective Role of Optimism, Self-Mastery, and Social Support
ERIC Educational Resources Information Center
Ben Shlomo, Shirley; Taubman - Ben-Ari, Orit
2012-01-01
The current study examines the contribution of optimism, self-mastery, perceived social support, and background variables (age, physical health, economic status) to mental health following the transition to grandparenthood. The sample consisted of 257 first-time Israeli grandparents (grandmothers and grandfathers, maternal and paternal) who were…
Hamedi, Raheleh; Hadjmohammadi, Mohammad Reza
2016-12-01
A sensitive and rapid method based on alcohol-assisted dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for the determination of fluoxetine in human plasma and urine samples was developed. The effects of six parameters on the extraction recovery were investigated and optimized utilizing a Plackett-Burman design and a Box-Behnken design, respectively. According to the Plackett-Burman design results, the volume of disperser solvent, extraction time, and stirring speed had no effect on the recovery of fluoxetine. The optimized conditions were a mixture of 172 μL of 1-octanol as extraction solvent and 400 μL of methanol as disperser solvent, a pH of 11.3 and 0% w/v of salt in the sample solution. Replicating the experiment five times under the optimized conditions gave an average extraction recovery of 90.15%. The detection limit of fluoxetine in human plasma was 3 ng/mL, and the linearity was in the range of 10-1200 ng/mL. The corresponding values for human urine were 4.2 ng/mL with a linear range from 10 to 2000 ng/mL. Relative standard deviations for intra- and inter-day extraction of fluoxetine were less than 7% in five measurements. The developed method was successfully applied to the determination of fluoxetine in human plasma and urine samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays.
Aranda-Escolástico, Ernesto; Salt, Julián; Guinaldo, María; Chacón, Jesús; Dormido, Sebastián
2018-05-09
In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n -input signals between two measures (periodic sampling) and their times of application. The optimization algorithm is extended for time-varying delays in order to make possible its implementation in networked control systems. We provide experimental results in an air levitation system to verify the validity of the algorithm in a real plant.
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial-time algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and to a previously described importance sampling (IS) technique, and find that on high-divergence data sets MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique while avoiding the bias inherent in the IS technique.
Helicopter TEM parameters analysis and system optimization based on time constant
NASA Astrophysics Data System (ADS)
Xiao, Pan; Wu, Xin; Shi, Zongyang; Li, Jutao; Liu, Lihua; Fang, Guangyou
2018-03-01
The helicopter transient electromagnetic (TEM) method is a common geophysical prospecting method, widely used in mineral detection, underground water exploration and environmental investigation. In order to develop an efficient helicopter TEM system, it is necessary to analyze and optimize the system parameters. In this paper, a simple and quantitative method is proposed to analyze the system parameters, such as waveform, power, base frequency, measured field and sampling time. A wire loop model is used to define a comprehensive 'time constant domain' that represents a range of time constants, analogous to a range of conductances, after which the characteristics of the system parameters in this domain are obtained. It is found that the distortion caused by the transmitting base frequency is less than 5% when the ratio of the transmitting period to the target time constant is greater than 6. When the sampling time window is less than the target time constant, the distortion caused by the sampling time window is less than 5%. Based on this method, a helicopter TEM system, called CASHTEM, was designed, and a flight test has been carried out in a known mining area. The test results show that the system has good detection performance, verifying the effectiveness of the method.
Miniaturized and direct spectrophotometric multi-sample analysis of trace metals in natural waters.
Albendín, Gemma; López-López, José A; Pinto, Juan J
2016-03-15
Trends in the analysis of trace metals in natural waters are mainly based on the development of sample treatment methods to isolate and pre-concentrate the metal from the matrix in a simpler extract for further instrumental analysis. However, direct analysis is often possible using more accessible techniques such as spectrophotometry. In this case a proper ligand is required to form a complex that absorbs radiation in the ultraviolet-visible (UV-Vis) spectrum. In this sense, the hydrazone derivative di-2-pyridylketone benzoylhydrazone (dPKBH) forms complexes with copper (Cu) and vanadium (V) that absorb light at 370 and 395 nm, respectively. Although spectrophotometric methods are considered time- and reagent-consuming, this work focused on their miniaturization by reducing the volume of sample as well as the time and cost of analysis. In both methods, a micro-amount of sample is placed into a microplate reader with a capacity for 96 samples, which can be analyzed in times ranging from 5 to 10 min. The proposed methods have been optimized using a Box-Behnken design of experiments. For Cu determination, the concentrations of the phosphate buffer solution at pH 8.33, the masking agents (ammonium fluoride and sodium citrate), and dPKBH were optimized. For V analysis, the sample pH (4.5) was set using an acetic acid/sodium acetate buffer, and the masking agents were ammonium fluoride and 1,2-cyclohexanediaminetetraacetic acid. Under optimal conditions, both methods were applied to the analysis of the certified reference materials TMDA-62 (lake water), LGC-6016 (estuarine water), and LGC-6019 (river water). In all cases, the results proved the accuracy of the method. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza
2018-03-01
This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing the results of a calibrated numerical water quality simulation model. In the VOI framework, the water quality at every checkpoint has a specific prior probability that varies in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, in the next step the results are combined using a weighting method. Finally, the optimal sampling interval and locations of the WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in the southwestern part of Iran.
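The Bayes update and VOI computation described above can be sketched for a discrete decision problem. The two-state example (clean vs. polluted), the likelihoods and the payoffs below are hypothetical, chosen only to make the mechanics concrete; the paper's states come from a calibrated water quality model.

```python
def posterior(prior, likelihood):
    """Bayes' theorem over a discrete set of water-quality states."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def expected_value_of_information(prior, likelihoods_by_outcome, payoff):
    """EVOI = expected best payoff after sampling minus best payoff on the prior.

    prior[s] = P(state s); likelihoods_by_outcome[o][s] = P(observation o | s);
    payoff[a][s] = utility of taking action a when the true state is s.
    """
    # best expected payoff acting on the prior alone (no sample taken)
    ev_prior = max(sum(p * u for p, u in zip(prior, row)) for row in payoff)
    # expected best payoff after observing the sample outcome
    ev_post = 0.0
    for lik in likelihoods_by_outcome:
        p_obs = sum(p * l for p, l in zip(prior, lik))
        if p_obs == 0.0:
            continue
        post = posterior(prior, lik)
        ev_post += p_obs * max(sum(q * u for q, u in zip(post, row)) for row in payoff)
    return ev_post - ev_prior

# A perfectly informative sample on two equally likely states:
prior = [0.5, 0.5]                  # P(clean), P(polluted)
lik = [[1.0, 0.0], [0.0, 1.0]]      # observation reveals the state exactly
payoff = [[1.0, -1.0], [0.0, 0.0]]  # actions: use water as-is vs. treat it
voi = expected_value_of_information(prior, lik, payoff)  # → 0.5
```

Ranking candidate stations by such a VOI score, interval by interval, is the selection step the methodology describes.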
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2013-01-01
Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities, which has prompted the development of various extraction techniques and strategies for their better utilisation. The objective was to develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially a Plackett-Burman design matrix was applied to identify the most significant extraction variables among microwave power, irradiation time, particle size, solvent-to-sample ratio, solvent strength and pre-leaching time. Among the six variables tested, microwave power, irradiation time and solvent-to-sample ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were a microwave power of 65.67% of 700 W, an extraction time of 4.27 min and a solvent-to-sample ratio of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM-predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to fit the experimental data well. MAE proved to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
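The Box-Behnken step above amounts to fitting a quadratic response surface to the design points and solving for its stationary point. A minimal two-factor sketch with synthetic data (not the paper's lupeol measurements) looks like this:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a full quadratic surface in two factors:
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Solve grad(y) = 0: the candidate optimum of the fitted surface."""
    b0, b1, b2, b12, b11, b22 = coef
    H = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])  # Hessian of the fit
    return np.linalg.solve(H, -np.array([b1, b2]))

# Synthetic yield surface with its maximum at (power, time) = (2, 3).
g1, g2 = np.meshgrid(np.arange(5.0), np.arange(1.0, 6.0))
X = np.column_stack([g1.ravel(), g2.ravel()])
y = 10.0 - (X[:, 0] - 2.0) ** 2 - (X[:, 1] - 3.0) ** 2
opt = stationary_point(fit_quadratic_rsm(X, y))  # ≈ [2., 3.]
```

Checking that the Hessian is negative definite confirms the stationary point is a maximum, which corresponds to the paper's operability region with maximum yield.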
Optimization Study of Pulsed DC Nitrogen-Hydrogen Plasma in the Presence of an Active Screen Cage
NASA Astrophysics Data System (ADS)
Saeed, A.; W. Khan, A.; F., Jan; U. Shah, H.; Abrar, M.; Zaka-Ul-Islam, M.; Khalid, M.; Zakaullah, M.
2014-05-01
A glow discharge plasma nitriding reactor with an active screen cage was optimized in terms of current density, filling pressure and hydrogen concentration using optical emission spectroscopy (OES). Samples of AISI 304 were nitrided for different treatment times under the optimum conditions. The treated samples were analyzed by X-ray diffraction (XRD) to explore the changes induced in the crystallographic structure. The XRD patterns confirmed the formation of iron and chromium nitrides arising from incorporation of nitrogen as an interstitial solid solution in the iron lattice. A Vickers microhardness tester was used to evaluate the surface hardness as a function of treatment time (h). The results showed clear evidence of improved surface hardness and a substantial decrease in treatment time compared with previous work.
Poormohammadi, Ali; Bahrami, Abdulrahman; Farhadian, Maryam; Ghorbani Shahna, Farshid; Ghiasvand, Alireza
2017-12-08
Carbotrap B, a highly pure surface sorbent with excellent adsorption/desorption properties, was packed into a stainless steel needle to develop a new needle trap device (NTD). The performance of the prepared NTD was investigated for sampling, pre-concentration and injection of benzene, toluene, ethyl benzene, o-xylene and p-xylene (BTEX) into the column of a gas chromatography-mass spectrometry (GC-MS) system. Response surface methodology (RSM) with central composite design (CCD) was employed in two separate consecutive steps to optimize the sampling and device parameters. First, the sampling parameters, such as sampling temperature and relative humidity, were optimized. Afterwards, RSM was used to optimize the desorption parameters, including desorption temperature and time. The results indicated that the peak area responses of the analytes of interest decreased with increasing sampling temperature and relative humidity. The optimum desorption temperature was in the range 265-273°C, and the optimum desorption time in the range 3.4-3.8 min. The limits of detection (LODs) and limits of quantitation (LOQs) of the studied analytes were found over the ranges of 0.03-0.04 ng/mL and 0.1-0.13 ng/mL, respectively. These results demonstrated that the NTD packed with Carbotrap B offers a highly sensitive procedure for sampling and analysis of BTEX in the concentration range of 0.03-25 ng/mL in air. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Optimizing read-out of the NECTAr front-end electronics
NASA Astrophysics Data System (ADS)
Vorobiov, S.; Feinstein, F.; Bolmont, J.; Corona, P.; Delagnes, E.; Falvard, A.; Gascón, D.; Glicenstein, J.-F.; Naumann, C. L.; Nayman, P.; Ribo, M.; Sanuy, A.; Tavernet, J.-P.; Toussenel, F.; Vincent, P.
2012-12-01
We describe the optimization of the read-out specifications of the NECTAr front-end electronics for the Cherenkov Telescope Array (CTA). The NECTAr project aims at building and testing a demonstrator module of a new front-end electronics design, which takes advantage of the know-how acquired while building the cameras of the CAT, H.E.S.S.-I and H.E.S.S.-II experiments. The goal of the optimization work is to define the specifications of the digitizing electronics of a CTA camera, in particular the integration time window, sampling rate and analog bandwidth, using physics simulations. For this work we employed real photomultiplier pulses, sampled at 100 ps with a 600 MHz bandwidth oscilloscope. The individual pulses are drawn randomly at the times at which the photo-electrons, originating from atmospheric showers, arrive at the focal planes of imaging atmospheric Cherenkov telescopes. The timing information is extracted from the existing CTA simulations on the GRID and organized in a local database, together with all the relevant physical parameters (energy, primary particle type, zenith angle, distance from the shower axis, pixel offset from the optical axis, night-sky background level, etc.) and detector configurations (telescope types, camera/mirror configurations, etc.). While investigating the parameter space, an optimal pixel charge integration time window, which minimizes the relative error in the measured charge, has been determined. This will make it possible to gain sensitivity and to lower the energy threshold of the CTA telescopes. We present results of our optimizations and first measurements obtained using the NECTAr demonstrator module.
Determination of semi-volatile additives in wines using SPME and GC-MS.
Sagandykova, Gulyaim N; Alimzhanova, Mereke B; Nurzhanova, Yenglik T; Kenessov, Bulat
2017-04-01
Parameters of headspace solid-phase microextraction, such as fiber coating (85 μm CAR/PDMS), extraction time (2 min for white and 3 min for red wines), temperature (85°C) and pre-incubation time (15 min), were optimized for identification and quantification of semi-volatile additives (propylene glycol, sorbic and benzoic acids) in wines. To overcome problems in their determination, evaporation of the wine matrix was performed. Using the optimized method, screening of 25 wine samples was performed, and the presence of propylene glycol, sorbic and benzoic acids was found in 22, 20 and 6 samples, respectively. Analysis of different wines using a standard addition approach showed good linearity in the concentration ranges 0-250, 0-125 and 0-250 mg/L for propylene glycol, sorbic and benzoic acids, respectively. The proposed method can be recommended for quality control of wine and disclosing adulterated samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gas chromatographic column for the storage of sample profiles
NASA Technical Reports Server (NTRS)
Dimandja, J. M.; Valentin, J. R.; Phillips, J. B.
1994-01-01
The concept of a sample retention column that preserves the true time profile of an analyte of interest is studied. This storage system allows detection to be done at convenient times, as opposed to the nearly continuous monitoring that is required by other systems to preserve a sample time profile. The sample storage column is essentially a gas chromatography column, although its purpose is not the separation of sample components. The functions of the storage column are the selective isolation of the component of interest from the rest of the components present in the sample and the storage of this component as a function of time. Using octane as a test substance, the sample storage system was optimized with respect to such parameters as storage and readout temperature, flow rate through the storage column, column efficiency and storage time. A 3-h sample profile was collected and stored at 30 degrees C for 20 h. The profile was then retrieved, essentially intact, in 5 min at 130 degrees C.
NASA Astrophysics Data System (ADS)
Elsabawy, Khaled M.; Fallatah, Ahmed M.; Alharthi, Salman S.
2018-07-01
For the first time, a high-energy helium-silver (He-Ag) laser, which belongs to the category of metal-vapor lasers, was applied as a microstructure promoter for an optimally Ir-doped MgB2 sample. The optimally Ir-doped Mg0.94Ir0.06B2 superconducting sample was selected from an article previously published by one of the authors. The samples were irradiated with three different doses (1, 2 and 3 h) from an ultrahigh-energy He-Ag laser with an average power of 103 W/cm2 at a distance of 3 cm. Superconducting measurements and microstructural features were investigated as a function of He-Ag laser irradiation dose. The results indicated that irradiation with the ultrahigh-energy He-Ag laser reduced the grain size, and consequently the measured Jc values were enhanced. Furthermore, the Tc-offsets of all irradiated samples are better than that of the non-irradiated Mg0.94Ir0.06B2.
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
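A minimal sketch of the Bayesian machinery this abstract builds on — random-walk Metropolis MCMC for a one-compartment first-order pharmacokinetic model with a log-normal prior on clearance — is given below. The dose, volume, sampling times, noise level and prior settings are all assumed for illustration and are not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
D, V, sigma = 100.0, 10.0, 0.2          # assumed dose, volume, residual SD
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # assumed sampling times (h)
CL_true = 2.0
# One-compartment IV bolus model: C(t) = (D/V) exp(-(CL/V) t), plus noise
y = (D / V) * np.exp(-(CL_true / V) * t) + rng.normal(0, sigma, t.size)

def log_post(log_cl):
    """Log posterior of log(CL): Gaussian likelihood + log-normal prior on CL."""
    cl = np.exp(log_cl)
    pred = (D / V) * np.exp(-(cl / V) * t)
    loglik = -0.5 * np.sum(((y - pred) / sigma) ** 2)
    logprior = -0.5 * ((log_cl - np.log(2.0)) / 0.5) ** 2
    return loglik + logprior

# Random-walk Metropolis on log CL
chain, cur = [], np.log(1.0)
cur_lp = log_post(cur)
for _ in range(5000):
    prop = cur + rng.normal(0, 0.1)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

cl_hat = np.exp(np.mean(chain[1000:]))   # posterior-mean estimate of CL
```

In the adaptive-design setting, draws like these would feed the choice of the next sampling window conditional on the samples already taken.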
Nakata, Toshihiko; Ninomiya, Takanori
2006-10-10
A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
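The frequency arithmetic behind undersampling conversion can be demonstrated numerically: a component sampled below its Nyquist rate folds to a predictable downconverted frequency, which is exactly what the sensor's integration-and-sampling effect exploits. The frequencies below are illustrative only, not the instrument's actual beat and frame rates.

```python
import numpy as np

f_sig = 1.33e6          # illustrative heterodyne beat component, 1.33 MHz
fs = 100e3              # illustrative sensor frame (sampling) rate, 100 kHz

def alias(f, fs):
    """Frequency to which a tone at f folds when sampled at rate fs."""
    r = f % fs
    return min(r, fs - r)

f_alias = alias(f_sig, fs)    # predicted downconverted frequency

# Verify numerically: sample the tone at fs and locate the spectral peak.
n = 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_sig * t)
spec = np.abs(np.fft.rfft(x * np.hanning(n)))   # Hann window limits leakage
f_peak = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
```

Choosing the beat, modulation and gate-pulse frequencies so that each folded component lands on a distinct, orthogonal frequency is the optimization the abstract describes.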
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
A major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15-1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
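The two-time-point approach described above can be sketched with a toy one-pool model: soil carbon follows d(soil)/dt = k (atm - soil), so two measurements a decade apart pin down the turnover rate k and hence the residence time 1/k. The atmospheric "bomb spike" curve, the 25-year residence time and the measurement years below are all synthetic stand-ins, not the real record.

```python
import numpy as np

years = np.arange(1955, 2010)
# Crude synthetic atmospheric fraction-modern curve with a 1964 bomb peak
atm = 1.0 + 0.8 * np.exp(-((years - 1964) / 8.0) ** 2)

def soil_series(k):
    """Discrete one-pool update, 1-year steps: soil relaxes toward atm at rate k."""
    s, out = 1.0, []
    for a in atm:
        s += k * (a - s)
        out.append(s)
    return np.array(out)

k_true = 1 / 25.0               # assumed 25-year residence time
obs_years = (1990, 2000)        # two measurements, 10 years apart
idx = [int(y - years[0]) for y in obs_years]
obs = soil_series(k_true)[idx]  # synthetic "measured" soil 14C values

# Grid search: find the turnover rate whose trajectory passes through both points
ks = np.linspace(1 / 200, 1 / 2, 2000)
err = [np.sum((soil_series(k)[idx] - obs) ** 2) for k in ks]
residence_time = 1 / ks[int(np.argmin(err))]
```

With real archived samples, the same fit would be done against the measured atmospheric 14C record, and robustness to the passive-fraction and lag-time assumptions would need checking as the abstract cautions.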
Mariño-Repizo, Leonardo; Goicoechea, Hector; Raba, Julio; Cerutti, Soledad
2018-06-07
A novel, simple, easy and cheap sample treatment strategy based on salting-out assisted liquid-liquid extraction (SALLE) was developed for ochratoxin A (OTA) ultra-trace analysis in beer samples using ultra-high performance liquid chromatography-tandem mass spectrometry. The factors involved in the efficiency of the pretreatment were studied employing a factorial design in the screening phase, and the optimal conditions of the significant variables on the analytical response were evaluated using a central composite face-centred design (CCF). Consequently, the amount of salt ((NH4)2SO4), together with the volumes of sample, hydrophilic (acetone) and nonpolar (toluene) solvents, and the vortexing and centrifugation times, were optimized. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 0.02 µg/l and 0.08 µg/l, respectively. OTA extraction recovery by SALLE was approximately 90% (at 0.2 µg/l). Furthermore, the methodology was in agreement with EU Directive requirements and was successfully applied for analysis of beer samples.
Nejad, Mina Ghasemi; Faraji, Hakim; Moghimi, Ali
2017-04-01
In this study, air-assisted dispersive liquid-liquid microextraction (AA-DLLME) combined with UV-Vis spectrophotometry was developed for pre-concentration, microextraction and determination of lead in aqueous samples. Optimization of the independent variables was carried out using chemometric methods in three steps. According to the screening and optimization study, 86 μL of 1-undecanol (extracting solvent), 12 aspiration/dispersion cycles of the syringe pump, pH 2.0, no added salt (0.00%) and 0.1% DDTP (chelating agent) were chosen as the optimum conditions for microextraction and determination of lead. Under the optimized conditions, R = 0.9994 and the linear range was 0.01-100 µg/mL. The LOD and LOQ were 3.4 and 11.6 ng/mL, respectively. The method was applied to the analysis of real water samples, such as tap, mineral, river and waste water.
Setting the magic angle for fast magic-angle spinning probes.
Penzel, Susanne; Smith, Albert A; Ernst, Matthias; Meier, Beat H
2018-06-15
Fast magic-angle spinning, coupled with 1H detection, is a powerful method to improve spectral resolution and signal-to-noise in solid-state NMR spectra. Commercial probes now provide spinning frequencies in excess of 100 kHz. One then has sufficient resolution in the 1H dimension to directly detect protons, which have a gyromagnetic ratio approximately four times larger than that of 13C spins. However, the gains in sensitivity can quickly be lost if the rotation angle is not set precisely. The most common method of magic-angle calibration is to optimize the number of rotary echoes, or sideband intensity, observed on a sample of KBr. However, this typically uses relatively low spinning frequencies, where the spinning of fast-MAS probes is often unstable, and detection on the 13C channel, for which fast-MAS probes are typically not optimized. Therefore, we compare the KBr-based optimization of the magic angle with two alternative approaches: optimization of the splitting observed on the carbonyl of 13C-labeled glycine ethyl ester due to the Cα-C' J-coupling, or optimization of the H-N J-coupling spin echo in the protein sample itself. The latter method has the particular advantage that no separate sample is necessary for the magic-angle optimization. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
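The fix-and-resample idea described above can be sketched on a small random Ising problem, using a basic single-spin-flip simulated annealer as a stand-in for the paper's low-energy samplers: collect several independent low-energy samples, clamp the spins on which the samples agree, and re-solve the smaller residual problem. Sizes, thresholds and schedules below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
J = rng.normal(0, 1, (n, n))
J = np.triu(J, 1)
J = J + J.T                     # symmetric couplings, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

def anneal(s, free, sweeps=200):
    """Single-spin-flip simulated annealing, updating only the `free` spins."""
    s = s.copy()
    for beta in np.linspace(0.1, 3.0, sweeps):
        for i in free:
            dE = 2 * s[i] * (J[i] @ s)          # energy cost of flipping spin i
            if dE < 0 or rng.uniform() < np.exp(-beta * dE):
                s[i] = -s[i]
    return s

# Multistart: collect low-energy samples from independent random starts
samples = [anneal(rng.choice([-1, 1], n), range(n)) for _ in range(8)]
best = min(samples, key=energy)

# Fix spins that agree across most samples; only the rest stay variable
agree = np.abs(np.mean(samples, axis=0))        # 1.0 = all runs agree
free = [i for i in range(n) if agree[i] < 0.9]

# Re-anneal the reduced problem with agreed spins clamped; keep the better result
refined = min(anneal(best, free), best, key=energy)
```

The reduced problem is smaller and less connected, which is what lets the underlying sampler (here a toy annealer; in the paper, quantum annealing, PIQMC, or parallel tempering) find better low-energy states.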
Zheng, Hong; Clausen, Morten Rahr; Dalsgaard, Trine Kastrup; Mortensen, Grith; Bertram, Hanne Christine
2013-08-06
We describe a time-saving protocol for the processing of LC-MS-based metabolomics data by optimizing parameter settings in XCMS and threshold settings for removing noisy and low-intensity peaks using design of experiment (DoE) approaches, including Plackett-Burman design (PBD) for screening and central composite design (CCD) for optimization. A reliability index, which is based on evaluation of the linear response to a dilution series, was used as a parameter for the assessment of data quality. After identifying the significant parameters in the XCMS software by PBD, CCD was applied to determine their values by maximizing the reliability and group indexes. Optimal settings by DoE resulted in improvements of 19.4% and 54.7% in the reliability index for a standard mixture and human urine, respectively, as compared with the default settings, and a total of 38 h was required to complete the optimization. Moreover, threshold settings were optimized using CCD for further improvement. The approach combining optimal parameter settings and the threshold method improved the reliability index about 9.5 times for a standard mixture and 14.5 times for human urine data, which required a total of 41 h. Validation results also showed improvements in the reliability index of about 5-7 times, even for urine samples from different subjects. It is concluded that the proposed methodology can be used as a time-saving approach for improving the processing of LC-MS-based metabolomics data.
System for sensing droplet formation time delay in a flow cytometer
Van den Engh, Ger; Esposito, Richard J.
1997-01-01
A droplet flow cytometer system, which includes a subsystem to optimize the droplet formation time delay based on conditions actually experienced, includes an automatic droplet sampler which rapidly moves a plurality of containers stepwise through the droplet stream while simultaneously adjusting the droplet time delay. Through this system, sampling of the actual substance to be processed can be used to minimize the effect of the substance's variations on the determination of which time delay is optimal. Analysis such as cell counting and the like may be conducted manually or automatically and input to a time delay adjustment, which may then act with analysis equipment to revise the time delay estimate actually applied during processing. The automatic sampler can be controlled through a microprocessor and appropriate programming to bracket an initial droplet formation time delay estimate. When a maximum in counts, determined through volume, weight, or other types of analysis, is found in the containers, the increment may then be reduced for a more accurate ultimate setting. This may be accomplished while actually processing the sample without interruption.
NASA Astrophysics Data System (ADS)
Tang, Gao; Jiang, FanHuag; Li, JunFeng
2015-11-01
Near-Earth asteroids have gained a lot of interest, and developments in low-thrust propulsion technology make complex deep space exploration missions possible. A mission from low-Earth orbit using a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid and bring a sample back is investigated. The complex mission is divided into five segments, each solved separately, and different methods are used to find optimal trajectories for every segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumption needed to escape from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method to parameterize the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods of finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.
Deshmukh, Yogita; Khare, Puja; Patra, D D; Nadaf, Altafhusain B
2014-01-01
A rapid micro-scale solid-phase microextraction (SPME) procedure coupled with gas chromatography with flame ionization detection (GC-FID) was used to extract parts-per-billion levels of the principal basmati aroma compound 2-acetyl-1-pyrroline (2-AP) from bacterial samples. In the present investigation, the optimization parameters of bacterial incubation period, sample weight, pre-incubation time, adsorption time and temperature, and precursors and their concentrations were studied. Under the optimized conditions, the detection of 2-AP produced by Bacillus cereus ATCC10702 using only 0.5 g of sample was 85 μg/kg. Along with 2-AP, 15 other compounds produced by B. cereus were also reported, of which 14 were reported for the first time, consisting mainly of (E)-2-hexenal, pentadecanal, 4-hydroxy-2-butanone, n-hexanal, 2,6-nonadienal, 3-methoxy-2(5H)-furanone, 2-acetyl-1-pyridine and octanal. High recovery of 2-AP (87%) from a very small amount of B. cereus sample was observed. The method is reproducible and fast and can be used for detection of 2-AP production by B. cereus. © 2014 American Institute of Chemical Engineers.
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
2018-01-01
Hyperspectral image classification with a limited number of training samples and without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods. PMID:29304512
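The selection idea above — prefer pool samples that sit near the estimated class boundaries and have high fuzziness — can be sketched as follows. The details here are assumptions for illustration (a 5-nearest-neighbour membership estimate, the simple fuzziness measure 1 - |2u - 1|, and 2-D Gaussian toy data); the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two overlapping 2-D Gaussian classes as an unlabelled pool
X0 = rng.normal([-1, 0], 1.0, (100, 2))
X1 = rng.normal([+1, 0], 1.0, (100, 2))
pool = np.vstack([X0, X1])
labels = np.array([0] * 100 + [1] * 100)

# Small initially labelled training set
train_idx = rng.choice(200, 10, replace=False)

def membership(x, k=5):
    """Class-1 membership: fraction of the k nearest labelled samples in class 1."""
    d = np.linalg.norm(pool[train_idx] - x, axis=1)
    nearest = train_idx[np.argsort(d)[:k]]
    return labels[nearest].mean()

u = np.array([membership(x) for x in pool])
fuzziness = 1 - np.abs(2 * u - 1)        # 1 at u=0.5 (ambiguous), 0 at u in {0,1}

# Pick the most ambiguous pool samples as the next labelling candidates
candidates = np.argsort(-fuzziness)[:10]
```

Adding the selected candidates to the training set and retraining is one active-learning iteration; the abstract's framework repeats this to build a small but informative training set.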
NASA Technical Reports Server (NTRS)
Sauer, Carl G., Jr.
1989-01-01
A patched conic trajectory optimization program MIDAS is described that was developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories. MIDAS includes the capability of optimizing trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates and is able to both add and delete deep space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity assist trajectories of intermediate bodies can be handled and capability is included to optimize trajectories having a rendezvous with an intermediate body such as for a sample return mission. Capability is included in the optimization process to constrain launch energy and launch vehicle parking orbit parameters.
Song, Yuqiao; Liao, Jie; Dong, Junxing; Chen, Li
2015-09-01
The seeds of grapevine (Vitis vinifera) are a byproduct of wine production. To examine the potential value of grape seeds, grape seeds from seven sources were subjected to fingerprinting using direct analysis in real time coupled with time-of-flight mass spectrometry combined with chemometrics. Firstly, we listed all reported components (56 components) from grape seeds and calculated the precise m/z values of the deprotonated ions [M-H](-) . Secondly, the experimental conditions were systematically optimized based on the peak areas of total ion chromatograms of the samples. Thirdly, the seven grape seed samples were examined using the optimized method. Information about 20 grape seed components was utilized to represent characteristic fingerprints. Finally, hierarchical clustering analysis and principal component analysis were performed to analyze the data. Grape seeds from seven different sources were classified into two clusters; hierarchical clustering analysis and principal component analysis yielded similar results. The results of this study lay the foundation for appropriate utilization and exploitation of grape seed samples. Due to the absence of complicated sample preparation methods and chromatographic separation, the method developed in this study represents one of the simplest and least time-consuming methods for grape seed fingerprinting. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
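The chemometrics step described above — dimensionality reduction on a samples-by-components fingerprint matrix, with the samples falling into two clusters — can be sketched with PCA via SVD. The 7 × 20 fingerprint matrix below is synthetic (rows 0-3 and 4-6 are drawn around two different "source" profiles by construction), not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
profile_a = rng.uniform(0, 1, 20)                 # "source A" peak-area profile
profile_b = profile_a + rng.normal(0, 0.8, 20)    # distinct "source B" profile
fingerprints = np.vstack(
    [profile_a + rng.normal(0, 0.05, 20) for _ in range(4)] +
    [profile_b + rng.normal(0, 0.05, 20) for _ in range(3)])

# PCA via SVD on the mean-centred matrix
Xc = fingerprints - fingerprints.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                      # sample coordinates in principal-component space

# With two well-separated groups, the sign of PC1 recovers the grouping
cluster = (scores[:, 0] > 0).astype(int)
two_groups = (len(set(cluster[:4])) == 1 and len(set(cluster[4:])) == 1
              and cluster[0] != cluster[4])
```

Hierarchical clustering on the same score matrix would, as in the abstract, be expected to reproduce this two-cluster structure.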
Single step optimization of manipulator maneuvers with variable structure control
NASA Technical Reports Server (NTRS)
Chen, N.; Dwyer, T. A. W., III
1987-01-01
One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.
Unidentified Organic Compounds. For target analytes, standards are purchased, extraction and clean-up procedures are optimized, and mass spectra and retention times for the chromatographic separation are obtained for comparison to the target compounds in environmental sample ...
In situ semi-quantitative analysis of polluted soils by laser-induced breakdown spectroscopy (LIBS).
Ismaël, Amina; Bousquet, Bruno; Michel-Le Pierrès, Karine; Travaillé, Grégoire; Canioni, Lionel; Roy, Stéphane
2011-05-01
Time-saving, low-cost analyses of soil contamination are required to ensure fast and efficient pollution removal and remedial operations. In this work, laser-induced breakdown spectroscopy (LIBS) has been successfully applied to in situ analyses of polluted soils, providing direct semi-quantitative information about the extent of pollution. A field campaign has been carried out in Brittany (France) on a site presenting high levels of heavy metal concentrations. Results on iron as a major component as well as on lead and copper as minor components are reported. Soil samples were dried and prepared as pressed pellets to minimize the effects of moisture and density on the results. LIBS analyses were performed with a Nd:YAG laser operating at 1064 nm, 60 mJ per 10 ns pulse, at a repetition rate of 10 Hz with a diameter of 500 μm on the sample surface. Good correlations were obtained between the LIBS signals and the values of concentrations deduced from inductively coupled plasma atomic emission spectroscopy (ICP-AES). This result proves that LIBS is an efficient method for optimizing sampling operations. Indeed, "LIBS maps" were established directly on-site, providing valuable assistance in optimizing the selection of the most relevant samples for future expensive and time-consuming laboratory analysis and avoiding useless analyses of very similar samples. Finally, it is emphasized that in situ LIBS is not described here as an alternative quantitative analytical method to the usual laboratory measurements but simply as an efficient time-saving tool to optimize sampling operations and to drastically reduce the number of soil samples to be analyzed, thus reducing costs. The detection limits of 200 ppm for lead and 80 ppm for copper reported here are compatible with the thresholds of toxicity; thus, this in situ LIBS campaign was fully validated for these two elements. 
Consequently, further experiments are planned to extend this study to other chemical elements and other matrices of soils.
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.
2009-01-01
A unique, digital time pick-off method, known as sample-interpolation timing (SIT) is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s-1. Events arising from the 7Li(p, n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
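The idea of refining a time pick-off below the sampling period can be sketched with plain linear interpolation between the two samples that bracket the threshold. This generic helper is an illustration of the interpolation principle, not the published SIT algorithm.

```python
def interpolated_crossing_time(samples, dt, threshold):
    """Return the time at which a sampled pulse first crosses `threshold`,
    using linear interpolation between the two bracketing samples.

    This refines the pick-off below the sampling period dt; returns None
    if the pulse never reaches the threshold.
    """
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            # sub-sample crossing instant by linear interpolation
            frac = (threshold - lo) / (hi - lo)
            return (i - 1 + frac) * dt
    return None
```

Any of the replicated analogue methods (leading-edge, crossover, constant-fraction) can feed such an interpolation step, which is why the technique composes with them.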
A practical and sensitive method to assess volatile organic compounds (VOCs) from JP-8 jet fuel in human whole blood was developed by modifying previously established liquid-liquid extraction procedures, optimizing extraction times, solvent volume, specific sample processing te...
Amid, Mehrnoush; Murshid, Fara Syazana; Manap, Mohd Yazid; Islam Sarker, Zaidul
2016-01-01
This study aimed to investigate the effects of the ultrasound-assisted extraction conditions on the yield, specific activity, temperature stability, and storage stability of the pectinase enzyme from guava peel. The ultrasound variables studied were sonication time (10-30 min), ultrasound temperature (30-50 °C), pH (2.0-8.0), and solvent-to-sample ratio (2:1 mL/g to 6:1 mL/g). The main goal was to optimize the ultrasound-assisted extraction conditions to maximize the recovery of pectinase from guava peel with the most desirable enzyme-specific activity and stability. Under the optimum conditions, a high yield (96.2%), good specific activity (18.2 U/mg), temperature stability (88.3%), and storage stability (90.3%) of the extracted enzyme were achieved. The optimal conditions were 20 min sonication time, 40 °C, pH 5.0, and a 4:1 mL/g solvent-to-sample ratio. The study demonstrated that optimization of the ultrasound-assisted process conditions for enzyme extraction could improve the enzymatic characteristics and yield of the enzyme.
ERIC Educational Resources Information Center
Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael
2016-01-01
Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…
Adventures in Modern Time Series Analysis: From the Sun to the Crab Nebula and Beyond
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey
2014-01-01
With the generation of long, precise, and finely sampled time series, the Age of Digital Astronomy is uncovering and elucidating energetic dynamical processes throughout the Universe. Fulfilling these opportunities requires effective data analysis techniques that rapidly and automatically implement advanced concepts. The Time Series Explorer, under development in collaboration with Tom Loredo, provides tools ranging from simple but optimal histograms to time- and frequency-domain analysis for arbitrary data modes with any time sampling. Much of this development owes its existence to Joe Bredekamp and the encouragement he provided over several decades. Sample results for solar chromospheric activity, gamma-ray activity in the Crab Nebula, active galactic nuclei and gamma-ray bursts will be displayed.
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
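The standard-curve arithmetic underlying absolute quantification can be sketched as follows; `fit_standard_curve` and `quantify` are hypothetical helpers, and the parameter values in the test of the idea are illustrative, not from any particular assay.

```python
import math

def fit_standard_curve(copies, cq):
    """Least-squares fit of Cq = slope * log10(copies) + intercept.

    Returns (slope, intercept, efficiency), with amplification
    efficiency derived from the slope as E = 10**(-1/slope) - 1.
    """
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx = sum(x) / n
    my = sum(cq) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, cq))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def quantify(cq_unknown, slope, intercept):
    """Estimate the copy number of an unknown sample from its Cq."""
    return 10.0 ** ((cq_unknown - intercept) / slope)
```

For a perfectly efficient assay the slope approaches about -3.32 cycles per decade (E near 1), which is the usual sanity check on a run's standard curve.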
Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A
2013-05-01
Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse-phase high-performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. The two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
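A two-compartment oral-absorption model of the kind evaluated above can be sketched with a simple forward-Euler simulation. The rate constants and dose below are illustrative assumptions, and the enterohepatic-recycling extension of the published model is omitted.

```python
def simulate_2cmt_oral(dose, ka, ke, k12, k21, tlag=0.5,
                       dt=0.01, t_end=24.0):
    """Forward-Euler simulation of a two-compartment oral-absorption
    model with an absorption lag time.

    Compartments: gut (dose depot), central, peripheral. Returns the
    sampling times, central-compartment amounts, time of the maximum,
    and AUC(0-t_end) by the trapezoidal rule.
    """
    gut, central, periph = dose, 0.0, 0.0
    times, amounts = [], []
    t = 0.0
    while t <= t_end:
        times.append(t)
        amounts.append(central)
        absorbed = ka * gut if t >= tlag else 0.0   # first-order absorption after lag
        d_central = absorbed - (ke + k12) * central + k21 * periph
        d_periph = k12 * central - k21 * periph
        gut -= absorbed * dt
        central += d_central * dt
        periph += d_periph * dt
        t += dt
    auc = sum((amounts[i] + amounts[i + 1]) * dt / 2.0
              for i in range(len(amounts) - 1))
    tmax = times[amounts.index(max(amounts))]
    return times, amounts, tmax, auc
```

Optimal-design software such as WinPOPT then asks at which of these simulated times a handful of samples best constrains the model parameters.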
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya
2016-11-01
This work considers the problem of the efficient minimization of the infinite-time average of a stationary ergodic process over a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments that are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first optimization algorithm designed specifically for problems of this class. The algorithm markedly reduces the overall cost of the optimization process for such problems. Further, under certain well-defined conditions, rigorous proof of convergence to the global minimum of the problem considered is established.
Dias, Adriana Neves; da Silva, Ana Cristine; Simão, Vanessa; Merib, Josias; Carasek, Eduardo
2015-08-12
This study describes the use of cork as a new coating for bar adsorptive microextraction (BAμE) and its application in determining benzophenone, triclocarban and parabens in aqueous samples by HPLC-DAD. In this study, bars 7.5 and 15 mm in length were used. The extraction and liquid desorption steps for BAμE were optimized employing multivariate and univariate procedures. The desorption time and the solvent used for liquid desorption were optimized by univariate and multivariate studies, respectively. For the extraction step, the sample pH was optimized by univariate experiments, while extraction time and ionic strength were evaluated using a Doehlert design. The optimum extraction conditions were sample pH 5.5, NaCl concentration 25% and extraction time 90 min. Liquid desorption was carried out for 30 min with 250 μL (bar length of 15 mm) or 100 μL (bar length of 7.5 mm) of ACN:MeOH (50:50, v/v). The quantification limits varied between 1.6 and 20 μg L(-1) (bar length of 15 mm) and 0.64 and 8 μg L(-1) (bar length of 7.5 mm). The linear correlation coefficients were higher than 0.98 for both bars. The method with 7.5 mm bar length showed recovery values between 65 and 123%. The bar-to-bar reproducibility and the repeatability were lower than 13% (n = 2) and 14% (n = 3), respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-01
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as the template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. Further optimization was performed using a central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). Under optimal conditions, the calibration curve showed linearity over the concentration range of 10(-8)-10(-7) M with a correlation coefficient (R2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10(-9) M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements to the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum-fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite-dimensional convex optimization problem, specifically as a finite-dimensional second-order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) presenting initial results on an onboard table-lookup method to obtain almost fuel-optimal solutions in real time.
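Improvement (i), choosing the optimal time-of-flight, reduces to a one-dimensional search over a cost-versus-tf curve that is typically unimodal. A golden-section sketch, with a stand-in analytic cost in place of the per-tf SOCP solve, might look like this (the quadratic cost is purely illustrative):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [a, b].

    In the guidance setting, f(tf) would be the optimal fuel cost
    returned by the SOCP solver for a fixed time-of-flight tf; any
    unimodal stand-in works for illustration.
    """
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

Each iteration shrinks the bracket by the golden ratio, so only a few dozen SOCP solves would bracket the optimal time-of-flight tightly.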
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable-density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions, like polynomials of different order, to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets, and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the presented method is comparable to that of an established model-based strategy when the model parameter is optimized, and superior to the model-based strategy when it is not. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
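The core idea, turning a reference power spectrum into a sampling density, can be sketched in one dimension; `pattern_from_spectrum` is a hypothetical helper, and the paper applies the idea to 2D k-space with full reconstructions.

```python
import random

def pattern_from_spectrum(power, n_samples, seed=0):
    """Draw a variable-density sampling pattern whose density follows a
    reference power spectrum: normalize the spectrum to a probability
    density, then draw k-space locations without replacement
    (duplicates are rejected and redrawn).
    """
    rng = random.Random(seed)
    total = sum(power)
    weights = [p / total for p in power]
    chosen = set()
    while len(chosen) < n_samples:
        r = rng.random()
        acc = 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen.add(idx)
                break
    return sorted(chosen)
```

Because most energy of typical images sits at low spatial frequencies, the drawn pattern automatically samples the spectrum's center densely, with no polynomial density model or parameter tuning.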
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geosciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD can be combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability also reduces exponentially with the number of iterations.
R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
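The annealing recipe summarized above (shrinking perturbation distance, decaying acceptance probability, MSSD criterion) can be sketched in miniature in Python; this toy is only an illustration of the idea, not the spsann implementation, whose criteria and schedules are richer.

```python
import math
import random

def mssd(sample, grid):
    """Mean squared shortest distance from each prediction node to the
    nearest sample point (the MSSD criterion for spatial interpolation)."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in sample)
               for gx, gy in grid) / len(grid)

def ssa_optimize(n_points, iters=2000, t0=0.05, seed=1):
    """Toy spatial simulated annealing in the unit square: the maximum
    perturbation distance shrinks linearly and the temperature decays
    exponentially with the iteration count; worse states are kept with
    the Metropolis probability. Returns the best pattern found."""
    rng = random.Random(seed)
    grid = [(i / 9.0, j / 9.0) for i in range(10) for j in range(10)]
    sample = [(rng.random(), rng.random()) for _ in range(n_points)]
    energy = mssd(sample, grid)
    best, best_energy = list(sample), energy
    for k in range(iters):
        temp = t0 * math.exp(-5.0 * k / iters)        # exponential cooling
        step = 0.3 * (1.0 - k / iters) + 0.01         # shrinking moves
        i = rng.randrange(n_points)
        old = sample[i]
        sample[i] = (min(1.0, max(0.0, old[0] + rng.uniform(-step, step))),
                     min(1.0, max(0.0, old[1] + rng.uniform(-step, step))))
        new_energy = mssd(sample, grid)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy                       # accept the move
            if energy < best_energy:
                best, best_energy = list(sample), energy
        else:
            sample[i] = old                           # reject the move
    return best, best_energy
```

Practitioners should use the spsann package itself; the sketch only shows why the method is robust against local optima (occasional uphill moves early on) and easy to implement.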
Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.
Andrew B. Carey; Scott P. Horton; Janice A. Reid
1989-01-01
Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...
Seawell, Asani H.; Cutrona, Carolyn E.; Russell, Daniel W.
2012-01-01
The present longitudinal study examined the role of general and tailored social support in mitigating the deleterious impact of racial discrimination on depressive symptoms and optimism in a large sample of African American women. Participants were 590 African American women who completed measures assessing racial discrimination, general social support, tailored social support for racial discrimination, depressive symptoms, and optimism at two time points (2001–2002 and 2003–2004). Our results indicated that higher levels of general and tailored social support predicted optimism one year later; changes in both types of support also predicted changes in optimism over time. Although initial levels of neither measure of social support predicted depressive symptoms over time, changes in tailored support predicted changes in depressive symptoms. We also sought to determine whether general and tailored social support “buffer” or diminish the negative effects of racial discrimination on depressive symptoms and optimism. Our results revealed a classic buffering effect of tailored social support, but not general support on depressive symptoms for women experiencing high levels of discrimination. PMID:24443614
Harte, Philip T.
2017-01-01
A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling capture primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low-hydraulic-conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time, until inflow from the high-hydraulic-conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by the in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low- and high-hydraulic-conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computed time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field-parameter stabilization. If field-parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
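The order of magnitude of the in-well travel-time argument can be illustrated with back-of-the-envelope arithmetic; `purge_duration` is a hypothetical helper, not the paper's analytical model, and it ignores drawdown and partial mixing.

```python
import math

def purge_duration(well_radius_m, inflow_depth_m, pump_rate_lpm):
    """Rough in-well vertical travel-time estimate for low-flow purging.

    If formation water enters mainly `inflow_depth_m` above the pump
    intake (the high-conductivity part of the screen), the preexisting
    screen water between that depth and the intake must be displaced
    before formation water arrives. Returns minutes of pumping.
    """
    area_m2 = math.pi * well_radius_m ** 2
    stored_litres = area_m2 * inflow_depth_m * 1000.0   # screen storage
    return stored_litres / pump_rate_lpm                # minutes
```

For a 2-inch well (radius about 0.025 m) pumped at 0.3 L/min with inflow 1 m above the intake, this already implies several minutes of purging before formation water reaches the intake, consistent with the idea that field-parameter stabilization tracks in-well vertical travel time.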
Feng, Zufei; Xu, Yuehong; Wei, Shuguang; Zhang, Bao; Guan, Fanglin; Li, Shengbin
2015-01-01
A magnetic carbon nanomaterial, Fe3O4-modified hydroxylated multi-walled carbon nanotubes (Fe3O4-MWCNTs-OH), was prepared by the aggregating effect of Fe3O4 nanoparticles on MWCNTs-OH, and this material was combined with high-performance liquid chromatography (HPLC) with photodiode array detection (PAD) to determine strychnine in human serum samples. Some important parameters that could influence the extraction efficiency of strychnine were optimized, including the extraction time, amount of Fe3O4-MWCNTs-OH, pH of the sample solution, desorption solvent and desorption time. Under optimal conditions, the recoveries of spiked serum samples were between 98.3 and 102.7%, and the relative standard deviations (RSDs) ranged from 0.9 to 5.3%. The correlation coefficient was 0.9997. The LODs and LOQs of strychnine were 6.2 and 20.5 ng mL(-1), at signal-to-noise ratios of 3 and 10, respectively. These experimental results showed that the proposed method is feasible for the analysis of strychnine in serum samples.
Kogovšek, P; Hodgetts, J; Hall, J; Prezelj, N; Nikolić, P; Mehle, N; Lenarčič, R; Rotter, A; Dickinson, M; Boonham, N; Dermastia, M; Ravnikar, M
2015-01-01
In Europe the most devastating phytoplasma associated with grapevine yellows (GY) diseases is a quarantine pest, flavescence dorée (FDp), from the 16SrV taxonomic group. The on-site detection of FDp with an affordable device would contribute to faster and more efficient decisions on the control measures for FDp. Therefore, a real-time isothermal LAMP assay for detection of FDp was validated according to the EPPO standards and MIQE guidelines. The LAMP assay was shown to be specific and extremely sensitive, because it detected FDp in all leaf samples that were determined to be FDp infected using quantitative real-time PCR. The whole procedure of sample preparation and testing was designed and optimized for on-site detection and can be completed in one hour. The homogenization procedure of the grapevine samples (leaf vein, flower or berry) was optimized to allow direct testing of crude homogenates with the LAMP assay, without the need for DNA extraction, and was shown to be extremely sensitive. PMID:26146413
Rousset, Nassim; Monet, Frédéric; Gervais, Thomas
2017-03-21
This work focuses on modelling the design and operation of "microfluidic sample traps" (MSTs). MSTs comprise a widely used class of microdevices that incorporate wells, recesses or chambers adjacent to a channel to individually trap, culture and/or release submicroliter 3D tissue samples, ranging from simple cell aggregates and spheroids to ex vivo tissue samples and other submillimetre-scale tissue models. Numerous MST designs employing various trapping mechanisms have been proposed in the literature, spurring the development of 3D tissue models for drug discovery and personalized medicine. Yet there lacks a general framework to optimize trapping stability, trapping time, shear stress, and sample metabolism. Herein, the effects of hydrodynamics and diffusion-reaction on tissue viability and device operation are investigated using analytical and finite element methods, with systematic parametric sweeps over independent design variables chosen to correspond to the four design degrees of freedom. Combining the different results, we show that, for a spherical tissue of diameter d < 500 μm, the simplest near-optimal trap shape is a cube of side w equal to twice the tissue diameter: w = 2d. Furthermore, to sustain tissues without perfusion, the available medium volume per trap needs to be 100× the tissue volume to ensure optimal metabolism for at least 24 hours.
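The two design rules quoted above (a cubic trap of side w = 2d, and 100× the tissue volume of medium for unperfused operation) can be captured in a small helper; the function name and unit choices are illustrative assumptions.

```python
import math

def mst_trap_design(tissue_diameter_um):
    """Apply the paper's rules of thumb for a spherical tissue of
    diameter d < 500 um: cubic trap of side w = 2d, and unperfused
    medium volume of 100x the tissue volume (sphere volume pi/6 * d^3).
    Returns dimensions in um and volumes in uL.
    """
    d_cm = tissue_diameter_um * 1e-4                      # um -> cm
    tissue_vol_ul = (math.pi / 6.0) * d_cm ** 3 * 1000.0  # cm^3 -> uL
    return {
        "trap_side_um": 2.0 * tissue_diameter_um,
        "tissue_volume_ul": tissue_vol_ul,
        "medium_volume_ul": 100.0 * tissue_vol_ul,
    }
```

For a 300 μm spheroid this gives a 600 μm trap and on the order of a microliter of medium per trap, which shows why unperfused MST arrays need generous reservoir volumes.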
High frequency resolution terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Sangala, Bagvanth Reddy
2013-12-01
A new method for high-frequency-resolution terahertz time-domain spectroscopy is developed based on the characteristic matrix method. The method is useful for studying planar samples or stacks of planar samples. The terahertz radiation was generated by optical rectification in a ZnTe crystal and detected with another ZnTe crystal via the electro-optic sampling method. In this characteristic-matrix-based method, the spectra of the sample and reference waveforms are modeled using characteristic matrices. We applied the new method to measure the optical constants of air. The terahertz transmission through the layered systems air-Teflon-air-Quartz-air and Nitrogen gas-Teflon-Nitrogen gas-Quartz-Nitrogen gas was modeled by the characteristic matrix method. A transmission coefficient derived from these models was optimized to fit the experimental transmission coefficient and thereby extract the optical constants of air. The optimization of an error function involving the experimental and theoretical complex transmission coefficients was performed using the patternsearch algorithm of MATLAB. Because this method takes account of the echo waveforms due to reflections in the layered samples, it allows analysis of longer time-domain waveforms, giving rise to very high frequency resolution in the frequency domain. We present the high-frequency-resolution terahertz time-domain spectroscopy of air and compare the results with literature values. We also fitted the complex susceptibility of air with Lorentzian and Gaussian functions to extract the linewidths.
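The characteristic matrix calculation at the heart of this approach can be sketched for normal incidence and nonmagnetic layers; the helper below computes the complex transmission coefficient of a planar stack and is a simplified illustration, not the paper's full model.

```python
import cmath
import math

def transmission(layers, wavelength_um, n_in=1.0, n_out=1.0):
    """Complex transmission coefficient of a planar layer stack at
    normal incidence via the characteristic (transfer) matrix method.

    `layers` is a list of (refractive_index, thickness_um) pairs; each
    layer contributes M = [[cos d, i sin d / n], [i n sin d, cos d]]
    with phase thickness d = 2*pi*n*t/lambda.
    """
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0          # identity matrix
    for n, thick in layers:
        delta = 2.0 * math.pi * n * thick / wavelength_um
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    b = m11 + m12 * n_out
    c_out = m21 + m22 * n_out
    return 2.0 * n_in / (n_in * b + c_out)
```

An index-matched layer transmits with |t| = 1, while a quarter-wave layer of index 2 between unit-index media transmits |t| = 0.8 (transmittance 0.64); fitting such a model's t to the measured complex transmission is what yields the optical constants.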
[Optimization of extraction techniques of total saponins from Pulsatilla cernua].
Li, Hai-Yan; Hao, Ning; Xu, Yong-Nan; Piao, Zhong-Yun
2010-04-01
The extraction conditions for total saponins from Pulsatilla cernua by ultrasonic wave were optimized using single-factor and orthogonal experiments. The absorption maximum of the saponins was determined to be 470 nm by a wavelength-scan method with pulchinenoside B4 as the control sample, and a linear relationship was observed between absorbance and saponin content in the range of 0-0.040 mg/mL. The optimal extraction conditions were as follows: 80% alcohol concentration, 40 min ultrasonic time, 1:20 solid-to-liquid ratio, 80 W ultrasonic power and a single extraction. Among these factors, alcohol concentration had the most significant effect on the extraction of total saponins. The content of total saponins in Pulsatilla cernua was 4.32% under the optimal conditions. The method developed here is efficient, stable, accurate and repeatable.
Optimism and Pessimism in Social Context: An Interpersonal Perspective on Resilience and Risk
Smith, Timothy W.; Ruiz, John M.; Cundiff, Jenny M.; Baron, Kelly G.; Nealey-Moore, Jill B.
2016-01-01
Using the interpersonal perspective, we examined social correlates of dispositional optimism. In Study 1, optimism and pessimism were associated with warm-dominant and hostile-submissive interpersonal styles, respectively, across four samples, and had expected associations with social support and interpersonal stressors. In 300 married couples, Study 2 replicated these findings regarding interpersonal styles, using self-reports and spouse ratings. Optimism-pessimism also had significant actor and partner associations with marital quality. In Study 3 (120 couples), husbands’ and wives’ optimism predicted increases in their own marital adjustment over time, and husbands’ optimism predicted increases in wives’ marital adjustment. Thus, the interpersonal perspective is a useful integrative framework for examining social processes that could contribute to associations of optimism-pessimism with physical health and emotional adjustment. PMID:27840458
Physiologically Relevant Changes in Serotonin Resolved by Fast Microdialysis
2013-01-01
Online microdialysis is a sampling and detection method that enables continuous interrogation of extracellular molecules in freely moving subjects under behaviorally relevant conditions. A majority of recent publications using brain microdialysis in rodents report sample collection times of 20–30 min. These long sampling times are due, in part, to limitations in the detection sensitivity of high performance liquid chromatography (HPLC). By optimizing separation and detection conditions, we decreased the retention time of serotonin to 2.5 min and the detection threshold to 0.8 fmol. Sampling times were consequently reduced from 20 to 3 min per sample for online detection of serotonin (and dopamine) in brain dialysates using a commercial HPLC system. We developed a strategy to collect and to analyze dialysate samples continuously from two animals in tandem using the same instrument. Improvements in temporal resolution enabled elucidation of rapid changes in extracellular serotonin levels associated with mild stress and circadian rhythms. These dynamics would be difficult or impossible to differentiate using conventional microdialysis sampling rates. PMID:23614776
Biochemical surface modification of Co-Cr-Mo.
Puleo, D A
1996-01-01
Because of the limited mechanical properties of tissue substitutes formed by culturing cells on polymeric scaffolds, other approaches to tissue engineering must be explored for applications that require complete and immediate ability to bear weight, e.g. total joint replacements. Biochemical surface modification offers a way to partially regulate events at the bone-implant interface to obtain preferred tissue responses. Tresyl chloride, gamma-aminopropyltriethoxysilane (APS) and p-nitrophenyl chloroformate (p-NPC) immobilization schemes were used to couple a model enzyme, trypsin, on bulk samples of Co-Cr-Mo. For comparison, samples were simply adsorbed with protein. The three derivatization schemes resulted in different patterns and levels of activity. Tresyl chloride was not effective in immobilizing active enzyme on Co-Cr-Mo. Aqueous silanization with 12.5% APS resulted in optimal immobilized activity. Activity on samples derivatized with 0.65 mg p-NPC cm(-2) was four to five times greater than that on samples simply adsorbed with enzyme or optimally derivatized with APS, and was about eight times that on tresylated samples. This work demonstrates that, although different methods have different effectiveness, chemical derivatization can be used to alter the amount and/or stability of biomolecules immobilized on the surface of Co-Cr-Mo.
Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin
2016-11-01
A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. First, the Taguchi design was used for preliminary screening of the parameters, and then a fractional factorial design was used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime indicated good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Circulating tumor cells lack circadian rhythm in hospitalized metastatic breast cancer patients.
García-Sáenz, José Angel; Martín, Miguel; Maestro, Marisa; Vidaurreta, Marta; Veganzones, Silvia; Villalobos, Laura; Rodríguez-Lajusticia, Laura; Rafael, Sara; Sanz-Casla, María Teresa; Casado, Antonio; Sastre, Javier; Arroyo, Manuel; Díaz-Rubio, Eduardo
2006-11-01
The relationship between breast cancer and circadian rhythm variation has been extensively studied. Increased breast tumorigenesis has been reported in melatonin-suppressed experimental models and in observational studies. Knowledge of a circadian rhythm in circulating tumor cells (CTC) might be used to optimize the timing of therapies. This is a prospective experimental study to ascertain daytime and night-time CTC levels in hospitalized metastatic breast cancer (MBC) patients. CTC were isolated and enumerated from 08:00 AM and 08:00 PM blood collections. Twenty-three MBC patients and 23 healthy volunteers entered the study, and 69 samples were collected (23 samples at 08:00 AM and 23 samples at 08:00 PM from MBC patients; 23 samples from healthy volunteers). Results from two patients were rejected due to sample-processing errors. No CTC were isolated from healthy volunteers. No differences between daytime and night-time CTC were observed. Therefore, we could not ascertain a CTC circadian rhythm in hospitalized metastatic breast cancer patients.
A quantitative evaluation of two methods for preserving hair samples
Roon, David A.; Waits, L.P.; Kendall, K.C.
2003-01-01
Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20°C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.
Simulator for multilevel optimization research
NASA Technical Reports Server (NTRS)
Padula, S. L.; Young, K. C.
1986-01-01
A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.
Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.
2017-01-01
Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
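The core of the VEPART pipeline — amplifying a sample population into many bootstrap "virtual populations" and scoring a protocol across them — can be sketched in a few lines. This is a toy illustration, not the authors' code: the subject data, the scoring rule, and the 0.5 efficacy threshold below are invented for the example.

```python
import random

def virtual_populations(sample, n_pops, seed=0):
    """Create virtual populations by nonparametric bootstrap:
    resample subjects with replacement from the original sample."""
    rng = random.Random(seed)
    return [[rng.choice(sample) for _ in sample] for _ in range(n_pops)]

def robustness(protocol_score, sample, n_pops=200):
    """Fraction of virtual populations in which a protocol's mean
    score clears a (hypothetical) efficacy threshold of 0.5."""
    pops = virtual_populations(sample, n_pops)
    wins = sum(
        1 for pop in pops
        if sum(protocol_score(s) for s in pop) / len(pop) > 0.5
    )
    return wins / n_pops

# Toy example: each "subject" is a per-subject treatment response in [0, 1].
sample = [0.2, 0.9, 0.7, 0.4, 0.8, 0.6]
print(round(robustness(lambda x: x, sample), 2))
```

A protocol that wins in nearly all virtual populations would be called robust; one that wins only in some would be fragile in the paper's sense.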
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other nonlinear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as the platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
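The ITD idea can be illustrated with a small D-optimal-design sketch: treat each virtual detector as contributing an information matrix, then iteratively reweight imaging times with a multiplicative (block-iterative-style) update so that the determinant of the total Fisher information is maximized. The 2×2 toy matrices below are invented stand-ins, not the paper's system model.

```python
import numpy as np

def optimal_itd(A, total_time=1.0, iters=500):
    """Imaging-time distribution t maximizing log det(sum_i t_i * A_i),
    a D-optimality criterion, via a standard multiplicative update."""
    n, p = len(A), A[0].shape[0]
    t = np.full(n, total_time / n)                  # start uniform
    for _ in range(iters):
        F = sum(ti * Ai for ti, Ai in zip(t, A))    # total Fisher information
        Finv = np.linalg.inv(F)
        # Detectors whose information is under-represented in F
        # receive proportionally more imaging time.
        w = np.array([np.trace(Finv @ Ai) for Ai in A])
        t = t * w / p
        t *= total_time / t.sum()
    return t

# Toy system: two complementary "detectors" plus one redundant one.
A = [np.diag([1.0, 0.1]), np.diag([0.1, 1.0]), np.diag([0.5, 0.5])]
t = optimal_itd(A)
print(np.round(t, 3))
```

In this toy case the redundant third detector is driven to zero imaging time, which is exactly the "relative effectiveness" reading of the optimum ITD described above.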
Classifier-Guided Sampling for Complex Energy System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
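The flavor of classifier-guided sampling can be conveyed with a toy sketch: rank the designs evaluated so far, fit a simple per-variable Bernoulli ("naive Bayes") model of the good class, and spend expensive evaluations only on candidates the model scores as promising. Everything below (population sizes, the bit-string objective, the quartile cutoff) is invented for illustration and is not Sandia's implementation.

```python
import random

def cgs_optimize(objective, n_vars, pop=40, gens=15, keep=0.25, seed=1):
    rng = random.Random(seed)
    evaluated = {}

    def evaluate(x):
        if x not in evaluated:          # cache: expensive calls happen once
            evaluated[x] = objective(x)
        return evaluated[x]

    for _ in range(pop):                # initial random population
        evaluate(tuple(rng.randint(0, 1) for _ in range(n_vars)))
    for _ in range(gens):
        ranked = sorted(evaluated, key=evaluated.get)
        good = ranked[: max(2, int(keep * len(ranked)))]
        # Per-variable Bernoulli rate of the "good" class, with
        # Laplace smoothing (a one-node-per-variable naive Bayes model).
        p_good = [(sum(x[i] for x in good) + 1) / (len(good) + 2)
                  for i in range(n_vars)]
        candidates = [tuple(int(rng.random() < p) for p in p_good)
                      for _ in range(pop)]

        def score(x):                   # likelihood under the good-class model
            s = 1.0
            for xi, p in zip(x, p_good):
                s *= p if xi else (1 - p)
            return s

        # Classifier-guided filter: evaluate only the most promising quarter.
        for x in sorted(candidates, key=score, reverse=True)[: pop // 4]:
            evaluate(x)
    best = min(evaluated, key=evaluated.get)
    return best, evaluated[best], len(evaluated)

# Toy "expensive" objective: bits wrong relative to an unknown target design.
target = (1, 0, 1, 1, 0, 1, 0, 0)
best, val, n_evals = cgs_optimize(
    lambda x: sum(a != b for a, b in zip(x, target)), len(target))
print(best, val, n_evals)
```

The point of the filter is visible in the evaluation count: far fewer unique designs are evaluated than the full discrete space contains.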
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
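The band-subset idea — rank variables by importance, keep roughly half, and check that classification accuracy survives — can be mimicked on synthetic data. The sketch below substitutes a crude standardized mean-difference importance score and a nearest-centroid classifier for the Random Forests machinery, so the data and scores are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-date imagery: 200 pixels, 12 "composite
# dates", of which only the first 6 actually carry class signal.
n, d, informative = 200, 12, 6
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :informative] += y[:, None] * 1.5

def importance(X, y):
    """Crude per-variable importance: standardized difference of class
    means (a stand-in for Random Forests' importance scores)."""
    gap = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    return gap / X.std(axis=0)

def nearest_centroid_acc(X, y):
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred.astype(int) == y).mean()

top = np.argsort(importance(X, y))[::-1][: d // 2]    # keep half the dates
acc_full = nearest_centroid_acc(X, y)
acc_half = nearest_centroid_acc(X[:, top], y)
print(sorted(top.tolist()), round(acc_full, 2), round(acc_half, 2))
```

As in the abstract, roughly half the variables recover essentially the same accuracy as the full set, because the discarded half carries little class information.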
Todd Trench, Elaine C.
2004-01-01
A time-series analysis approach developed by the U.S. Geological Survey was used to analyze trends in total phosphorus and evaluate optimal sampling designs for future trend detection, using long-term data for two water-quality monitoring stations on the Quinebaug River in eastern Connecticut. Trend-analysis results for selected periods of record during 1971–2001 indicate that concentrations of total phosphorus in the Quinebaug River have varied over time, but have decreased significantly since the 1970s and 1980s. Total phosphorus concentrations at both stations increased in the late 1990s and early 2000s, but were still substantially lower than historical levels. Drainage areas for both stations are primarily forested, but water quality at both stations is affected by point discharges from municipal wastewater-treatment facilities. Various designs with sampling frequencies ranging from 4 to 11 samples per year were compared to the trend-detection power of the monthly (12-sample) design to determine the most efficient configuration of months to sample for a given annual sampling frequency. Results from this evaluation indicate that the current (2004) 8-sample schedule for the two Quinebaug stations, with monthly sampling from May to September and bimonthly sampling for the remainder of the year, is not the most efficient 8-sample design for future detection of trends in total phosphorus. Optimal sampling schedules for the two stations differ, but in both cases, trend-detection power generally is greater among 8-sample designs that include monthly sampling in fall and winter. Sampling designs with fewer than 8 samples per year generally provide a low level of probability for detection of trends in total phosphorus. Managers may determine an acceptable level of probability for trend detection within the context of the multiple objectives of the state's water-quality management program and the scientific understanding of the watersheds in question.
Managers may identify a threshold of probability for trend detection that is high enough to justify the agency's investment in the water-quality sampling program. Results from an analysis of optimal sampling designs can provide an important component of information for the decision-making process in which sampling schedules are periodically reviewed and revised. Results from the study described in this report and previous studies indicate that optimal sampling schedules for trend detection may differ substantially for different stations and constituents. A more comprehensive statewide evaluation of sampling schedules for key stations and constituents could provide useful information for any redesign of the schedule for water-quality monitoring in the Quinebaug River Basin and elsewhere in the state.
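The kind of sampling-design comparison described above can be approximated by Monte Carlo: simulate a concentration series with trend, seasonality, and noise, subsample it under a candidate schedule, and count how often a slope test detects the trend. The slope, noise level, and schedules below are invented for illustration, not the Quinebaug data.

```python
import numpy as np

rng = np.random.default_rng(42)

def trend_power(months, years=10, slope=0.01, noise=0.15, n_sim=400):
    """Monte Carlo power of a one-sided regression trend test for a
    given set of sampling months (toy surrogate for the USGS analysis)."""
    t = np.array([y + (m - 0.5) / 12.0 for y in range(years) for m in months])
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    hits = 0
    for _ in range(n_sim):
        conc = slope * t + 0.3 * np.sin(2 * np.pi * t) \
            + rng.normal(0, noise, t.size)
        beta, *_ = np.linalg.lstsq(X, conc, rcond=None)
        resid = conc - X @ beta
        s2 = resid @ resid / (t.size - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += beta[1] / se > 1.645          # one-sided 5% level
    return hits / n_sim

monthly = trend_power(range(1, 13))           # 12 samples per year
sparse = trend_power([1, 3, 5, 7, 9, 11])     # 6 samples per year
print(round(monthly, 2), round(sparse, 2))
```

Comparing power across candidate month sets in this way is how one would reproduce the finding that some reduced schedules retain most of the monthly design's trend-detection power while others do not.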
Human blood metabolite timetable indicates internal body time
Kasukawa, Takeya; Sugimoto, Masahiro; Hida, Akiko; Minami, Yoichi; Mori, Masayo; Honma, Sato; Honma, Ken-ichi; Mishima, Kazuo; Soga, Tomoyoshi; Ueda, Hiroki R.
2012-01-01
A convenient way to estimate internal body time (BT) is essential for chronotherapy and time-restricted feeding, both of which use body-time information to maximize potency and minimize toxicity during drug administration and feeding, respectively. Previously, we proposed a molecular timetable based on circadian-oscillating substances in multiple mouse organs or blood to estimate internal body time from samples taken at only a few time points. Here we applied this molecular-timetable concept to estimate and evaluate internal body time in humans. We constructed a 1.5-d reference timetable of oscillating metabolites in human blood samples with 2-h sampling frequency while simultaneously controlling for the confounding effects of activity level, light, temperature, sleep, and food intake. By using this metabolite timetable as a reference, we accurately determined internal body time within 3 h from just two anti-phase blood samples. Our minimally invasive, molecular-timetable method with human blood enables highly optimized and personalized medicine. PMID:22927403
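A minimal version of the molecular-timetable estimate — match two anti-phase samples against cosine templates from a reference timetable — might look like the sketch below. The 20 metabolites, unit amplitudes, and noise level are hypothetical stand-ins for the paper's measured panel.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reference timetable: peak time (h) of 20 oscillating
# metabolites over a 24-h cycle, all with unit amplitude for simplicity.
peaks = rng.uniform(0, 24, 20)

def metabolite_levels(body_time):
    return np.cos(2 * np.pi * (body_time - peaks) / 24.0)

def estimate_body_time(sample_a, sample_b):
    """Grid-search the body time whose reference template (and its
    12-h anti-phase partner) best matches the two measured samples."""
    grid = np.arange(0, 24, 0.1)
    errs = [np.sum((metabolite_levels(t) - sample_a) ** 2)
            + np.sum((metabolite_levels(t + 12) - sample_b) ** 2)
            for t in grid]
    return float(grid[int(np.argmin(errs))])

# Simulate two noisy anti-phase blood samples from a subject at BT 7.3 h.
true_bt = 7.3
a = metabolite_levels(true_bt) + rng.normal(0, 0.2, peaks.size)
b = metabolite_levels(true_bt + 12) + rng.normal(0, 0.2, peaks.size)
bt_est = estimate_body_time(a, b)
print(round(bt_est, 1))
```

Averaging over many oscillating metabolites is what makes the estimate precise even though each individual metabolite is noisy, which is the intuition behind the reported 3-h accuracy.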
Danhelova, Hana; Hradecky, Jaromir; Prinosilova, Sarka; Cajka, Tomas; Riddellova, Katerina; Vaclavik, Lukas; Hajslova, Jana
2012-07-01
The development and use of a fast method employing a direct analysis in real time (DART) ion source coupled to high-resolution time-of-flight mass spectrometry (TOFMS) for the quantitative analysis of caffeine in various coffee samples has been demonstrated in this study. A simple sample extraction procedure employing hot water was followed by direct, high-throughput (<1 min per run) examination of the extracts spread on a glass rod under optimized conditions of ambient mass spectrometry, without any prior chromatographic separation. For quantification of caffeine using DART-TOFMS, an external calibration was used. Isotopically labeled caffeine was used to compensate for the variations of the ion intensities of the caffeine signal. Recoveries of the DART-TOFMS method were 97% for instant coffee at the spiking levels of 20 and 60 mg/g, while for roasted ground coffee, the obtained values were 106% and 107% at the spiking levels of 10 and 30 mg/g, respectively. The repeatability of the whole analytical procedure (expressed as relative standard deviation, RSD, %) was <5% for all tested spiking levels and matrices. Since the linearity range of the method was relatively narrow (two orders of magnitude), an optimization of sample dilution prior to the DART-TOFMS measurement was needed to avoid saturation of the detector.
Two-step adaptive management for choosing between two management actions
Moore, Alana L.; Walker, Leila; Runge, Michael C.; McDonald-Madden, Eve; McCarthy, Michael A
2017-01-01
Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. 
Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period.
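The learning-versus-implementation trade-off can be made concrete with a toy preposterior calculation: sweep the fraction of budget spent on trialing the two actions and compare expected total benefit. The budget, action means, and sampling noise below are invented; the point is only the shape of the trade-off, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_return(f, budget=100.0, means=(1.0, 1.2), sd=1.0, n_sim=4000):
    """Expected total benefit when fraction f of the budget funds an
    equal-split trial of both actions and the remainder implements
    whichever action looked better in the trial."""
    if f == 0:                       # no learning: implement action 1 blindly
        return budget * means[0]
    n = max(1, int(f * budget / 2))  # trial effort per action
    est1 = rng.normal(means[0], sd / np.sqrt(n), n_sim)
    est2 = rng.normal(means[1], sd / np.sqrt(n), n_sim)
    chosen = np.where(est1 > est2, means[0], means[1])
    learn_payoff = f * budget * (means[0] + means[1]) / 2
    return learn_payoff + (1 - f) * budget * chosen.mean()

fracs = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8]
vals = [expected_return(f) for f in fracs]
best = fracs[int(np.argmax(vals))]
print(best, [round(v, 1) for v in vals])
```

Varying the sampling variance or the gap between the action means in this sketch is the computational analogue of the threshold analysis in the paper: past some point, additional learning stops paying for itself.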
Nojavan, Saeed; Bidarmanesh, Tina; Memarzadeh, Farkhondeh; Chalavi, Soheila
2014-09-01
A simple electromembrane extraction (EME) procedure combined with ion chromatography (IC) was developed to quantify inorganic anions in different pure water samples and water miscible organic solvents. The parameters affecting extraction performance, such as supported liquid membrane (SLM) solvent, extraction time, pH of donor and acceptor solutions, and extraction voltage were optimized. The optimized EME conditions were as follows: 1-heptanol was used as the SLM solvent, the extraction time was 10 min, pHs of the acceptor and donor solutions were 10 and 7, respectively, and the extraction voltage was 15 V. The mobile phase used for IC was a combination of 1.8 mM sodium carbonate and 1.7 mM sodium bicarbonate. Under these optimized conditions, all anions had enrichment factors ranging from 67 to 117 with RSDs between 7.3 and 13.5% (n = 5). Good linearity values ranging from 2 to 1200 ng/mL with coefficients of determination (R(2)) between 0.987 and 0.999 were obtained. The LODs of the EME-IC method ranged from 0.6 to 7.5 ng/mL. The developed method was applied to different samples to evaluate the feasibility of the method for real applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-06-08
This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems, using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy, neural networks are employed at each subsystem to reconstruct the unknown optimal value function online in a forward-in-time manner. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic, resulting in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.
Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R
2018-05-01
The aim of this study was to verify, with a large dataset of 1394 (51)Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m(2), the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m(2).
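For reference, the four-sample slope-intercept calculation itself is straightforward under a single-compartment assumption: fit a mono-exponential to the late plasma samples and take clearance as dose × k / C0. The patient parameters below are synthetic, and the sketch omits the body-surface-area normalization and empirical corrections used clinically.

```python
import numpy as np

def slope_intercept_gfr(times_min, conc, dose):
    """Slope-intercept GFR: fit a mono-exponential to late plasma
    samples; clearance = dose * k / C0 (single-compartment
    approximation, before any body-surface-area scaling)."""
    t = np.asarray(times_min, float)
    k, lnc0 = np.polyfit(t, np.log(conc), 1)
    k = -k                              # decay constant, 1/min
    c0 = np.exp(lnc0)
    return dose * k / c0                # volume/min, e.g. ml/min

# Synthetic patient: V = 14 L, CL = 70 ml/min (k = 0.005 /min),
# sampled at the study's four time points.
dose = 3.7e6                            # arbitrary tracer units
times = [90, 150, 210, 270]
conc = [dose / 14000 * np.exp(-0.005 * t) for t in times]
print(round(slope_intercept_gfr(times, conc, dose), 1))   # → 70.0
```

A single-sample estimate, by contrast, relies on an empirical relation between one late concentration and clearance, which is why the study's percentage-difference comparison against this four-sample fit is informative.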
Optimal trading strategies—a time series approach
NASA Astrophysics Data System (ADS)
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
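In its simplest form, the time-domain mean-variance problem reduces to the classic constrained quadratic program: minimize the exposure w'Cw of the position vector w subject to a target expected return. A closed-form sketch, with an invented AR(1)-style auto-covariance standing in for one estimated from data:

```python
import numpy as np

def min_variance_weights(C, r, target):
    """Positions w minimizing exposure w'Cw subject to an expected
    return w'r = target (single linear constraint, closed form)."""
    Cinv_r = np.linalg.solve(C, r)
    return target * Cinv_r / (r @ Cinv_r)

# Toy auto-covariance of price increments with AR(1)-style decay.
T = 8
C = np.array([[0.5 ** abs(i - j) for j in range(T)] for i in range(T)])
r = np.full(T, 0.01)                 # constant expected drift per step
w = min_variance_weights(C, r, target=1.0)
print(np.round(w, 2))
```

With an estimated auto-covariance matrix in place of this toy one, the cleaning strategies discussed in the paper matter because small-sample noise in C propagates directly through the matrix inverse into the positions.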
Le Roux, Delphine; Root, Brian E; Reedy, Carmen R; Hickey, Jeffrey A; Scott, Orion N; Bienvenue, Joan M; Landers, James P; Chassagne, Luc; de Mazancourt, Philippe
2014-08-19
A system that automatically performs the PCR amplification and microchip electrophoretic (ME) separation for rapid forensic short tandem repeat (STR) profiling in a single disposable plastic chip is demonstrated. The microchip subassays were optimized to deliver results comparable to conventional benchtop methods. The microchip process was accomplished in under 90 min compared with >2.5 h for the conventional approach. An infrared laser with a noncontact temperature sensing system was optimized for a 45 min PCR compared with the conventional 90 min amplification time. The separation conditions were optimized using LPA-co-dihexylacrylamide block copolymers specifically designed for microchip separations to achieve accurate DNA size calling in an effective length of 7 cm in a plastic microchip. This effective separation length is less than half of other reports for integrated STR analysis and allows a compact, inexpensive microchip design. This separation quality was maintained when integrated with microchip PCR. Thirty samples were analyzed conventionally and then compared with data generated by the microfluidic chip system. The microfluidic system allele calling was 100% concordant with the conventional process. This study also investigated allelic ladder consistency over time. The PCR-ME genetic profiles were analyzed using binning palettes generated from two sets of allelic ladders run three and six months apart. Using these binning palettes, no allele calling errors were detected in the 30 samples, demonstrating that a microfluidic platform can be highly consistent over long periods of time.
NASA Technical Reports Server (NTRS)
Mahaffy, P. R.; Cabane, M.; Webster, C. R.; Archer, P. D.; Atreya, S. K.; Benna, M.; Brinckerhoff, W. B.; Brunner, A. E.; Buch, A.; Coll, P.;
2013-01-01
During the first 120 sols of Curiosity's landed mission on Mars (8/6/2012 to 12/7/2012), SAM sampled the atmosphere 9 times and an eolian bedform named Rocknest 4 times. The atmospheric experiments utilized SAM's quadrupole mass spectrometer (QMS) and tunable laser spectrometer (TLS), while the solid sample experiments also utilized the gas chromatograph (GC). Although a number of core experiments were pre-programmed and stored in EEProm, a high-level SAM scripting language enabled the team to optimize experiments based on prior runs.
Vidal, Lorena; Chisvert, Alberto; Canals, Antonio; Salvador, Amparo
2010-04-15
A user-friendly and inexpensive ionic liquid-based single-drop microextraction (IL-SDME) procedure has been developed to preconcentrate trace amounts of six typical UV filters extensively used in cosmetic products (i.e., 2-hydroxy-4-methoxybenzophenone, isoamyl 4-methoxycinnamate, 3-(4'-methylbenzylidene)camphor, 2-ethylhexyl 2-cyano-3,3-diphenylacrylate, 2-ethylhexyl 4-dimethylaminobenzoate and 2-ethylhexyl 4-methoxycinnamate) from surface water samples prior to analysis by liquid chromatography-ultraviolet spectrophotometry detection (LC-UV). A two-stage multivariate optimization approach was developed by means of a Plackett-Burman design for screening and selecting the significant variables involved in the SDME procedure, which were later optimized by means of a circumscribed central composite design. The studied variables were drop volume, sample volume, agitation speed, ionic strength, extraction time and ethanol quantity. Owing to particularities, ionic liquid type and pH of the sample were optimized separately. Under optimized experimental conditions (i.e., 10 microL of 1-hexyl-3-methylimidazolium hexafluorophosphate, 20 mL of sample containing 1% (v/v) ethanol and NaCl free adjusted to pH 2, 37 min extraction time and 1300 rpm agitation speed) enrichment factors up to ca. 100-fold were obtained depending on the target analyte. The method gave good levels of repeatability with relative standard deviations varying between 2.8 and 8.8% (n=6). Limits of detection were found in the low microg L(-1) range, varying between 0.06 and 3.0 microg L(-1) depending on the target analyte. Recovery studies from different types of surface water samples collected during the winter period, which were analysed and confirmed free of all target analytes, ranged between 92 and 115%, showing that the matrix had a negligible effect upon extraction. 
Finally, the proposed method was applied to the analysis of different water samples (taken from two beaches, two swimming pools and a river) collected during the summer period. (c) 2009 Elsevier B.V. All rights reserved.
Demerouti, Evangelia; Sanz-Vergel, Ana Isabel; Petrou, Paraskevas; van den Heuvel, Machteld
2016-10-01
Although work and family are undoubtedly important life domains, individuals are also active in other life roles that are important to them (e.g., pursuing personal interests). Building on identity theory and the resource perspective on the work-home interface, we examined whether there is an indirect effect of work-self conflict/facilitation on exhaustion and task performance over time through personal resources (i.e., self-efficacy and optimism). The sample was composed of 368 Dutch police officers. Results of the 3-wave longitudinal study confirmed that work-self conflict was related to lower levels of self-efficacy, whereas work-self facilitation was related to improved optimism over time. In turn, self-efficacy was related to higher task performance, whereas optimism was related to diminished levels of exhaustion over time. Further analysis supported the negative, indirect effect of work-self facilitation on exhaustion through optimism over time, and only a few reversed causal effects emerged. The study contributes to the literature on interrole management by showing the role of personal resources in the process of conflict or facilitation over time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Solid phase microextraction Arrow for the sampling of volatile amines in wastewater and atmosphere.
Helin, Aku; Rönkkö, Tuukka; Parshintsev, Jevgeni; Hartonen, Kari; Schilling, Beat; Läubli, Thomas; Riekkola, Marja-Liisa
2015-12-24
A new method is introduced for the sampling of volatile low-molecular-weight alkylamines in ambient air and wastewater by utilizing a novel SPME Arrow system, which contains a larger volume of sorbent compared to a standard SPME fiber. Parameters affecting the extraction, such as coating material, need for preconcentration, sample volume, pH, stirring rate, salt addition, extraction time and temperature, were carefully optimized. In addition, analysis conditions, including desorption temperature and time as well as gas chromatographic parameters, were optimized. Compared to a conventional SPME fiber, the SPME Arrow had better robustness and sensitivity. Average intermediate reproducibility of the method, expressed as relative standard deviation, was 12% for dimethylamine and 14% for trimethylamine, and their limits of quantification were 10 μg/L and 0.13 μg/L, respectively. The working range extended from the limits of quantification to 500 μg/L for dimethylamine and to 130 μg/L for trimethylamine. Several alkylamines were qualitatively analyzed in real samples, while the target compounds dimethyl- and trimethylamine were quantified. The concentrations in influent and effluent wastewater samples were almost the same (∼80 μg/L for dimethylamine, 120 μg/L for trimethylamine), meaning that the amines pass through the water purification process unchanged or are produced at the same rate as they are removed. For the air samples, preconcentration with a phosphoric acid-coated denuder was required, and the concentration of trimethylamine was found to be around 1 ng/m(3). The developed method was compared with an optimized method based on conventional SPME, and the advantages and disadvantages of both approaches are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Sample treatment optimization for fish stool metabolomics.
Hano, Takeshi; Ito, Mana; Ito, Katsutoshi; Uchida, Motoharu
2018-06-07
Gut microbiota play an essential role in an organism's health. Fecal metabolite profiles reflect these microbiota-mediated physiological changes in various organisms, including fish. Therefore, metabolomics analysis of fish feces should provide insight into the dynamics linking physiology and gut microbiota. However, metabolites are often unstable in aquatic environments, making fecal metabolites difficult to examine in fish. In this study, a novel method using gas chromatography-mass spectrometry (GC-MS) was developed and optimized for the preparation of metabolomics samples from the feces of a marine fish, the red sea bream (Pagrus major). The preparation methodology was optimized with a focus on rinsing frequency and rinsing solvent. Feces (collected within 4 h of excretion) were rinsed three times with sterilized 2.5% NaCl solution or 3.0% artificial seawater (ASW). Among the 86 metabolites identified in the NaCl-rinsed samples, 57 showed superior recovery to that in ASW-rinsed samples, indicating that NaCl is the better rinsing solvent, particularly for amino acids, organic acids, and fatty acids. To evaluate rinsing frequency, fecal samples were rinsed with NaCl solution 0, 1, 3, or 5 times. The results indicate that three or more rinses enabled robust and stable detection of metabolites encapsulated within the solid fecal residue. Furthermore, these data suggest that rinsing is unnecessary when studying sugars, amino acids, and sterols, again highlighting the need for an appropriate rinsing solvent and frequency. This study provides further insight into the use of fecal samples to evaluate and promote fish health during farming and supports the application of this and similar analyses to study the effects of environmental fluctuations and/or contamination. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao
Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone, which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate, in as little as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2-0.3 ms(-1) depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%-80%, which is significantly less than the >90% water fraction in brain.
High-quality, bone-enhanced images can be generated using a reduced-sampling UTE sequence with no visible compromise in image quality, and bone-to-air contrast was preserved with as low as a 25% sampling rate. Conclusions: This UTE strategy with angular undersampling preserves the image quality and contrast of cortical bone while reducing the total scanning time by as much as 75%. The quantitative results for R2* and the water fraction of skull, based on Dixon analysis of UTE images acquired at multiple echo times, provide guidance for the clinical adoption and further parameter optimization of the UTE sequence when used for radiation therapy and MR-based PET attenuation correction.
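The R2* values above come from fitting a mono-exponential decay, S(TE) = S0·exp(−R2*·TE), to magnitudes acquired at multiple echo times. A minimal sketch of the log-linear least-squares fit; the echo times and signal amplitudes below are illustrative, not the study's acquisition values:

```python
import math

def fit_r2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-R2* * TE);
    returns R2* in 1/ms."""
    n = len(echo_times_ms)
    xs, ys = echo_times_ms, [math.log(s) for s in signals]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # decay rate is minus the slope of ln S vs TE

# synthetic multi-echo magnitudes for a cortical-bone-like R2* of 0.25 /ms
tes = [0.07, 1.0, 2.0, 3.0]                       # echo times (ms), illustrative
sig = [100.0 * math.exp(-0.25 * te) for te in tes]
r2star = fit_r2star(tes, sig)                     # recovers 0.25 on clean data
```

In practice the fit is run per voxel on noisy magnitudes, and the water fraction comes from a separate Dixon analysis of the same multi-echo data.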
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing the DNAPL source in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations, selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications and work have been carried out as follows. Random K fields are generated after fitting the measured K data to the variogram model. The locations of potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is estimated, which will later be optimized by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556.
Acknowledgement: Funding supported by National Natural Science Foundation of China (No. 41030746, 40872155) and DuPont Company is appreciated.
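The Kalman-filter plume update in the strategy above reduces, for each new sample, to the standard measurement-update equations. A minimal two-node sketch with illustrative numbers (not site data); `H` picks out the sampled node:

```python
def kalman_update(x, P, z, H, r):
    """One Kalman measurement update: prior mean x, prior covariance P,
    scalar measurement z with measurement row vector H and error variance r."""
    n = len(x)
    y = z - sum(H[i] * x[i] for i in range(n))                 # innovation
    S = sum(H[i] * sum(P[i][j] * H[j] for j in range(n))
            for i in range(n)) + r                             # innovation variance
    K = [sum(P[i][j] * H[j] for j in range(n)) / S for i in range(n)]  # gain
    x_new = [x[i] + K[i] * y for i in range(n)]
    P_new = [[P[i][j] - K[i] * sum(H[k] * P[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]            # (I - K H) P
    return x_new, P_new

# prior concentration estimates at two locations and their covariance
x0, P0 = [5.0, 2.0], [[4.0, 2.0], [2.0, 3.0]]
# a new sample of 8.0 at node 0 updates both estimates and shrinks P
x1, P1 = kalman_update(x0, P0, z=8.0, H=[1.0, 0.0], r=1.0)
```

With these numbers the update gives x1 = [7.4, 3.2]: the unsampled node moves too, through the prior covariance, which is what lets a few well-placed samples sharpen the whole plume estimate.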
Acquiring the optimal time for hyperbaric therapy in the rat model of CFA induced arthritis.
Koo, Sung Tae; Lee, Chang-Hyung; Shin, Yong Il; Ko, Hyun Yoon; Lee, Da Gyo; Jeong, Han-Sol
2014-01-01
We previously published an article about the pressure effect using a rheumatoid animal model. Hyperbaric therapy appears to be beneficial in treating rheumatoid arthritis (RA) by reducing the inflammatory process in an animal model. In this sense, acquiring the optimal pressure-treatment time parameter for RA is important, and no optimal hyperbaric therapy time has been suggested up to now. The purpose of our study was to acquire the optimal time for hyperbaric therapy in the RA rat model. Controlled animal study. Following injection of complete Freund's adjuvant (CFA) into one side of the knee joint, 32 rats were randomly assigned to 3 different time groups (1, 3, or 5 hours a day) in a 1.5 atmospheres absolute (ATA) hyperbaric chamber for 12 days. Pain levels were assessed daily for 2 weeks by the weight bearing force (WBF) of the affected limb. In addition, the levels of gelatinase, MMP-2, and MMP-9 expression in the synovial fluids of the knees were analyzed. The reduction of WBF was greatest at 2 days after injection, and WBF then spontaneously increased up to 14 days in all 3 groups. There were significant differences in WBF between 5 hours and control during days 3 through 12, between 3 hours and control during days 3 through 5 and 10 through 12, and between 3 hours and 5 hours during days 3 through 7 (P < 0.05). The MMP-9/MMP-2 ratio increased at 14 days after the CFA injection in all groups compared to the initial findings; however, the 3-hour group showed a smaller MMP-9/MMP-2 ratio than the control group. Although enough samples were used to support our hypothesis, more samples will be needed to strengthen the validity and reliability. The effect of hyperbaric treatment appears to depend on the therapy time at 1.5 ATA pressure in the short term; however, the long-term effects were similar in all pressure groups.
Further study will be needed to acquire the optimal pressure-treatment parameter relationship in various conditions for clinical application.
Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules
van der Laan, Mark J.; Petersen, Maya L.
2008-01-01
Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et al. (2005), Petersen et al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et al. (2007) by developing locally efficient, double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price: the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV virus. PMID:19122792
Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium
Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter
2015-01-01
Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634
Replica approach to mean-variance portfolio optimization
NASA Astrophysics Data System (ADS)
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
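The widening gap between in-sample and out-of-sample risk as r = N/T grows is easy to see numerically. A minimal sketch (not the replica calculation itself): build the global minimum-variance portfolio from a sample covariance and compare its in-sample variance with its true variance when the true covariance is the identity; all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_variance_weights(C):
    """Global minimum-variance weights for covariance C under sum(w) = 1."""
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / w.sum()

N, T = 50, 100                          # r = N/T = 0.5
X = rng.standard_normal((T, N))         # returns with true covariance = I
C_hat = np.cov(X, rowvar=False)         # sample covariance estimate
w = min_variance_weights(C_hat)
in_sample = float(w @ C_hat @ w)        # optimizer's apparent risk
true_var = float(w @ w)                 # actual risk, since true cov is I
# in_sample underestimates true_var, increasingly so as r -> 1
```

Consistent with the abstract, the in-sample optimal variance is depressed (vanishing as r → 1) while the true variance of the same weights is inflated like 1/(1 − r); pushing T down toward N makes both effects blow up.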
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
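Latin hypercube sampling, used above to generate the virtual patient cohort, cuts each parameter's range into n equal strata and places exactly one sample in every stratum per dimension. A minimal sketch; the two parameter ranges below are hypothetical placeholders, not the paper's values:

```python
import random

def latin_hypercube(n, bounds, seed=1):
    """n samples over len(bounds) dimensions; bounds is a list of (lo, hi).
    Each dimension's range is split into n strata, each hit exactly once."""
    rng = random.Random(seed)
    points = [[0.0] * len(bounds) for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)                 # random stratum order per dimension
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n      # uniform draw inside stratum s
            points[i][d] = lo + u * (hi - lo)
    return points

# e.g. death rate of infected hepatocytes and an infection-rate parameter
# (hypothetical ranges, purely illustrative)
scenarios = latin_hypercube(100, [(0.1, 1.0), (1e-7, 1e-5)])
```

Each of the 100 scenarios can then be simulated under the fixed optimal therapy, and responder status or time-to-undetectable recorded per parameter set.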
Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis
NASA Astrophysics Data System (ADS)
Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert
2009-05-01
We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.
Shaw, P E; Wilson, C W
1988-09-01
The commercially available computer program, Drylab, for optimization of separations by high-performance liquid chromatography (HPLC) using binary solvent mixtures is used to improve an HPLC method for separation of the bitter principle, limonin, in grapefruit and navel orange juices. Best conditions for separation of limonin in a reasonable time are 30 to 32% acetonitrile in water at 0.9 mL/min using a 5-micron C18 column 10 cm long. These conditions are used to analyze grapefruit and navel orange juice samples, and these HPLC results are compared with values determined by enzyme immunoassay or thin-layer chromatography (TLC) on the same samples.
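Drylab-style optimization rests on the linear-solvent-strength model, log k = log kw − S·φ, fitted from two scouting runs and then used to predict retention at any organic fraction φ. A minimal sketch with illustrative retention factors (not the limonin data):

```python
import math

def lss_fit(phi1, k1, phi2, k2):
    """Fit log k = log_kw - S * phi from two scouting runs at organic
    fractions phi1, phi2 with measured retention factors k1, k2."""
    S = (math.log10(k1) - math.log10(k2)) / (phi2 - phi1)
    log_kw = math.log10(k1) + S * phi1
    return log_kw, S

def predict_k(log_kw, S, phi):
    """Predicted retention factor at organic fraction phi."""
    return 10 ** (log_kw - S * phi)

# two scouting runs (illustrative): k = 8.0 at 30% and k = 2.0 at 40% organic
log_kw, S = lss_fit(0.30, 8.0, 0.40, 2.0)
k_pred = predict_k(log_kw, S, 0.32)   # interpolated condition, e.g. 32%
```

Scanning φ over a grid of candidate compositions and picking the one that best balances resolution and run time is, in essence, what the commercial optimizer automates.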
Optimal iodine staining of cardiac tissue for X-ray computed tomography.
Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui
2014-01-01
X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high-density contrast agent is often required to obtain optimal contrast. The contrast agent iodine potassium iodide (I2KI) has been used in several biological studies to augment XCT scanning. Recently, I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which have been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
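The empirical rule derived in the study maps tissue thickness to required staining time via an exponential. The paper's fitted constants are not reproduced here, so the sketch below uses hypothetical values purely to show the form:

```python
import math

# Hypothetical constants, NOT the fitted values from the study:
A_HOURS = 6.0    # nominal staining time as thickness -> 0 (hours)
B_PER_MM = 0.45  # thickness sensitivity (per mm)

def staining_time_hours(thickness_mm, a=A_HOURS, b=B_PER_MM):
    """Empirical exponential rule of the form t(d) = a * exp(b * d)."""
    return a * math.exp(b * thickness_mm)

# thicker samples need exponentially longer staining under this form
t_thin, t_thick = staining_time_hours(2.0), staining_time_hours(5.0)
```

Once a and b are fitted from the interval scans, the same one-liner gives the required staining time for a cardiac sample of arbitrary size.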
Local synchronization of chaotic neural networks with sampled-data and saturating actuators.
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2014-12-01
This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and it makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller accounting for actuator saturation is designed to ensure that the master and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.
Optimizing integrated airport surface and terminal airspace operations under uncertainty
NASA Astrophysics Data System (ADS)
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. 
Additionally, a data-driven analysis is performed for the Los Angeles environment, and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology's performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted, first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and of integrated air and surface operations demonstrate that significant travel time savings, both in total and in individual surface and air times, can be obtained when more direct routes are allowed, even in the presence of uncertainty. The resulting routings, however, induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which leads to runway position shifting when comparing runway sequences computed under deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.
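The sample-average-approximation idea behind the stochastic scheduler can be shown in miniature: choose a decision that minimizes the average cost over a finite sample of random perturbations. The toy below sizes a single schedule buffer against random taxi-time delays; the cost weights and delay distribution are illustrative assumptions, not the dissertation's MILP model:

```python
import random

def saa_best_buffer(buffers, n_scenarios=500, seed=3):
    """Sample average approximation: pick the buffer (minutes) minimizing
    the sample-average cost over random delay scenarios."""
    rng = random.Random(seed)
    # sampled taxi-time delay scenarios (minutes), truncated at zero
    delays = [max(0.0, rng.gauss(5.0, 2.0)) for _ in range(n_scenarios)]

    def avg_cost(b):
        # each buffer minute costs 1; each uncovered delay minute costs 4
        return sum(b + 4.0 * max(0.0, d - b) for d in delays) / n_scenarios

    return min(buffers, key=avg_cost)

best = saa_best_buffer([0, 2, 4, 6, 8, 10])
```

The real problem replaces the scalar buffer with a joint routing-and-scheduling decision and the cost with delay and travel-time terms, but the structure, optimizing against a finite sample of scenarios, is the same.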
The Role of Pubertal Timing in What Adolescent Boys Do Online
ERIC Educational Resources Information Center
Skoog, Therese; Stattin, Hakan; Kerr, Margaret
2009-01-01
The aim of this study was to investigate associations between pubertal timing and boys' Internet use, particularly their viewing of pornography. We used a sample of 97 boys in grade 8 (M age, 14.22 years) from two schools in a medium-sized Swedish town. This age should be optimal for differentiating early, on-time, and later-maturing…
Li, Michelle W; Huynh, Bryan H; Hulvey, Matthew K; Lunte, Susan M; Martin, R Scott
2006-02-15
This work describes the fabrication and evaluation of a poly(dimethylsiloxane) (PDMS)-based device that enables the discrete injection of a sample plug from a continuous-flow stream into a microchannel for subsequent analysis by electrophoresis. Devices were fabricated by aligning valving and flow channel layers, followed by plasma sealing the combined layers onto a glass plate that contained fittings for the introduction of liquid sample and nitrogen gas. The design incorporates a reduced-volume pneumatic valve that actuates (on the order of hundreds of milliseconds) to allow analyte from a continuously flowing sampling channel to be injected into a separation channel for electrophoresis. The injector design was optimized to include a pushback channel to flush away stagnant sample associated with the injector dead volume. The effects of the valve actuation time, the pushback voltage, and the sampling stream flow rate on the performance of the device were characterized. Using the optimized design and an injection frequency of 0.64 Hz showed that the injection process is reproducible (RSD of 1.77%, n = 15). Concentration change experiments using fluorescein as the analyte showed that the device could achieve a lag time as small as 14 s. Finally, to demonstrate the potential uses of this device, the microchip was coupled to a microdialysis probe to monitor a concentration change and sample a fluorescein dye mixture.
Li, Sheng; Yao, Xinhua; Fu, Jianzhong
2014-07-16
Thermoelectric energy harvesting is emerging as a promising alternative energy source for driving wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the low and fluctuating ambient temperature differences in spindle units limit the application of thermoelectric generation to driving a wireless sensor. This study presents a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different spindle speeds. The charging/discharging time of the capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform was set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time increases the stability of the wireless sensor. The experiments also prove that using the optimal charging/discharging time makes the power management system work more effectively than other systems using the same sampling cycle.
Optimizing Clinical Trial Enrollment Methods Through "Goal Programming"
Davis, J.M.; Sandgren, A.J.; Manley, A.R.; Daleo, M.A.; Smith, S.S.
2014-01-01
Introduction Clinical trials often fail to reach desired goals due to poor recruitment outcomes, including low participant turnout, high recruitment cost, or poor representation of minorities. At present, there is limited literature available to guide recruitment methodology. This study, conducted by researchers at the University of Wisconsin Center for Tobacco Research and Intervention (UW-CTRI), provides an example of how iterative analysis of recruitment data may be used to optimize recruitment outcomes during ongoing recruitment. Study methodology UW-CTRI’s research team provided a description of methods used to recruit smokers in two randomized trials (n = 196 and n = 175). The trials targeted low socioeconomic status (SES) smokers and involved time-intensive smoking cessation interventions. Primary recruitment goals were to meet the required sample size and provide representative diversity while working with limited funds and limited time. Recruitment data was analyzed repeatedly throughout each study to optimize recruitment outcomes. Results Estimates of recruitment outcomes based on prior studies on smoking cessation suggested that researchers would be able to recruit 240 low SES smokers within 30 months at a cost of $72,000. With employment of the methods described herein, researchers were able to recruit 374 low SES smokers over 30 months at a cost of $36,260. Discussion Each human subjects study presents unique recruitment challenges, with the time and cost of recruitment dependent on the sample population and study methodology. Nonetheless, researchers may be able to improve recruitment outcomes through iterative analysis of recruitment data and optimization of recruitment methods throughout the recruitment period. PMID:25642125
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-05
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) with UV-Vis spectrophotometric detection was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as the amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The remaining optimization steps were performed using a central composite design (CCD), an artificial neural network (ANN) and a genetic algorithm (GA). At optimal conditions the calibration curve showed linearity over a concentration range of 10(-7)-10(-8) M with a correlation coefficient (R(2)) of 0.9970. The limit of detection (LOD) for FLU was 6.56×10(-9) M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
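The Plackett-Burman screening step mentioned above can be illustrated with a short sketch. This is a generic 12-run PBD construction in Python, not the authors' actual design matrix; the cyclic generator is the standard one for N = 12, and mapping the 11 columns to the study's factors is left as an assumption:

```python
# Sketch of a 12-run Plackett-Burman screening design (N = 12, up to 11 factors).
# Rows 1-11 are cyclic shifts of the standard generator; row 12 is all -1.
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    rows = []
    for shift in range(11):
        rows.append([GENERATOR[(j - shift) % 11] for j in range(11)])
    rows.append([-1] * 11)  # final fold-over row of low levels
    return rows

design = plackett_burman_12()

# Each factor column is balanced: six runs at +1 and six at -1.
for j in range(11):
    column = [row[j] for row in design]
    assert sum(column) == 0
```

The balance of every column is what allows 11 main effects to be screened with only 12 runs before the response-surface (CCD) step.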
Optimization of HPV DNA detection in urine by improving collection, storage, and extraction.
Vorsters, A; Van den Bergh, J; Micalessi, I; Biesmans, S; Bogers, J; Hens, A; De Coster, I; Ieven, M; Van Damme, P
2014-11-01
The benefits of using urine for the detection of human papillomavirus (HPV) DNA have been evaluated in disease surveillance, epidemiological studies, and screening for cervical cancers in specific subgroups. HPV DNA testing in urine is being considered for important purposes, notably the monitoring of HPV vaccination in adolescent girls and young women who do not wish to have a vaginal examination. The need to optimize and standardize sampling, storage, and processing has been reported. In this paper, we examined the impact of a DNA-conservation buffer, the extraction method, and urine sampling on the detection of HPV DNA and human DNA in urine provided by 44 women with a cytologically normal but HPV DNA-positive cervical sample. Ten women provided first-void and midstream urine samples. DNA analysis was performed using real-time PCR to allow quantification of HPV and human DNA. The results showed that an optimized method for HPV DNA detection in urine should (a) prevent DNA degradation during extraction and storage, (b) recover cell-free HPV DNA in addition to cell-associated DNA, (c) process a sufficient volume of urine, and (d) use a first-void sample. In addition, we found that detectable human DNA in urine may not be a good internal control for sample validity. HPV prevalence data that are based on urine samples collected, stored, and/or processed under suboptimal conditions may underestimate infection rates.
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
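As a rough illustration of the discrimination step, the sketch below trains a linear SVM by Pegasos-style subgradient descent on toy two-dimensional feature vectors. It is a minimal stand-in: the paper extracts four fingerprint features and uses polynomial/RBF kernels, neither of which is reproduced here, and the data points are hypothetical.

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM (hinge loss)."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)
            x, y = samples[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            # Shrink the weights; add the hinge-loss subgradient if margin < 1.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy stand-ins for feature vectors: label +1 = "real", -1 = "synthetic".
X = [[2.0, 1.0], [1.5, 2.0], [2.5, 1.5], [-1.0, -2.0], [-2.0, -1.0], [-1.5, -1.5]]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
```

In the paper's terms, the training sample proportion and kernel choice would then be tuned jointly against the accuracy/efficiency performance factor.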
Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf
2011-05-01
A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set for enzyme kinetic data (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result [root mean square error (RMSE)] for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
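The estimation step can be sketched for the low-concentration regime (C(0) << K(m)), where Michaelis-Menten depletion is approximately monoexponential with rate constant k ≈ V(max)/K(m), i.e., the intrinsic clearance per unit volume. The parameter values below are hypothetical, not drawn from the collected data set:

```python
import math

def simulate_depletion(c0, vmax, km, times):
    """Euler integration of Michaelis-Menten depletion dC/dt = -Vmax*C/(Km+C)."""
    profile = []
    c, t, dt = c0, 0.0, 0.001
    for target in times:
        while t < target:
            c -= dt * vmax * c / (km + c)
            t += dt
        profile.append(c)
    return profile

def fit_monoexponential(times, concs):
    """Log-linear least squares: ln C = ln C0 - k*t; returns k."""
    ys = [math.log(c) for c in concs]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope

# Hypothetical compound: Vmax = 2 uM/min, Km = 50 uM, C0 = 1 uM << Km,
# so depletion is ~monoexponential with k ~ Vmax/Km = 0.04 per min.
times = [5, 10, 20, 30, 40]
concs = simulate_depletion(1.0, 2.0, 50.0, times)
k_est = fit_monoexponential(times, concs)
```

Choosing which of the candidate times (and C(0)) minimize the uncertainty of the estimates is the optimal-design problem the study addresses.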
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
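A minimal sketch of the starting point, the recurrence matrix itself, is shown below for a scalar time series; the paper's rewriting grammar and maximum-entropy choice of ball size are not reproduced, and the signal and eps value are illustrative:

```python
def recurrence_matrix(series, eps):
    """R[i][j] = 1 when points i and j fall within distance eps in phase space."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# A signal that dwells in one regime, jumps, then dwells in another:
x = [0.0, 0.1, 0.05, 2.0, 2.1, 2.05]
R = recurrence_matrix(x, eps=0.5)
```

The two 3×3 blocks of ones along the diagonal are the checkerboard texture that the algorithm merges into cells of the phase-space partition.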
Critical evaluation of sample pretreatment techniques.
Hyötyläinen, Tuulia
2009-06-01
Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhagwat, Nikhil V.
2005-01-01
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Besides, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries where the cost associated with creating a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of those can be effectively used as alternate objective functions.
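For illustration, the station count used as the performance index can be computed from any candidate assignment. The sketch below is a simple greedy packing under a fixed cycle time, not the Nested Partitions sampling scheme itself, and the task times are hypothetical:

```python
def assign_tasks(task_times, cycle_time):
    """Greedily pack tasks (taken in precedence order) into stations without
    exceeding the cycle time; returns the list of stations."""
    stations, current, load = [], [], 0.0
    for task, t in task_times:
        if load + t > cycle_time and current:
            stations.append(current)       # close the full station
            current, load = [], 0.0
        current.append(task)
        load += t
    if current:
        stations.append(current)
    return stations

# Hypothetical task times (task, minutes) in precedence order:
tasks = [("A", 4), ("B", 3), ("C", 5), ("D", 2), ("E", 6)]
stations = assign_tasks(tasks, cycle_time=8)
```

A random sample in the Nested Partitions sense would be one such complete assignment, scored by its number of stations (three here) or, as suggested above, by total idle time.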
Foam generation and sample composition optimization for the FOAM-C experiment of the ISS
NASA Astrophysics Data System (ADS)
Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.
2011-12-01
At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory of the ISS. The sample cell supports multiple observation methods such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume (<3 cm3). These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).
Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar
2013-01-01
A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 μg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.
Rodil, Rosario; Schellin, Manuela; Popp, Peter
2007-09-07
Membrane-assisted solvent extraction (MASE) in combination with large volume injection-gas chromatography-mass spectrometry (LVI-GC-MS) was applied for the determination of 16 polycyclic aromatic hydrocarbons (PAHs) in aqueous samples. The MASE conditions were optimized for achieving high enrichment of the analytes from aqueous samples, in terms of extraction conditions (shaking speed, extraction temperature and time), extraction solvent and composition (ionic strength, sample pH and presence of organic solvent). Parameters like linearity and reproducibility of the procedure were determined. The extraction efficiency was above 65% for all the analytes and the relative standard deviation (RSD) for five consecutive extractions ranged from 6 to 18%. At optimized conditions detection limits at the ng/L level were achieved. The effectiveness of the method was tested by analyzing real samples, such as river water, apple juice, red wine and milk.
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; L McClelland, James
2017-12-01
We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
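A minimal sketch of the averaging accumulator, in which the decision variable is the running mean of the evidence samples rather than their sum; the drift, noise, threshold and burn-in values are illustrative assumptions, not fitted parameters from the experiments:

```python
import random

def averaging_diffusion(drift, noise, threshold, max_steps=10000, seed=1):
    """Decision variable is the running MEAN of the evidence samples (not the
    sum). Returns (choice, steps): choice is +1/-1 when the mean first crosses
    +/-threshold, or 0 if no decision is reached within max_steps."""
    rng = random.Random(seed)
    total = 0.0
    for n in range(1, max_steps + 1):
        total += drift + rng.gauss(0.0, noise)  # one noisy evidence sample
        mean = total / n
        if n >= 5 and abs(mean) >= threshold:   # small burn-in before deciding
            return (1 if mean > 0 else -1), n
    return 0, max_steps

choice, steps = averaging_diffusion(drift=0.2, noise=1.0, threshold=0.15)
```

Unlike a summed accumulator, the mean converges toward the drift rate, so the threshold here acts on an estimate of evidence quality; in a bimodal condition the per-sample drift would itself be a weighted combination of the two modalities.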
Skendi, Adriana; Irakli, Maria N; Papageorgiou, Maria D
2016-04-01
A simple, sensitive and accurate analytical method was optimized and developed for the determination of deoxynivalenol and aflatoxins in cereals intended for human consumption using high-performance liquid chromatography with diode array and fluorescence detection and a photochemical reactor for enhanced detection. A response surface methodology, using a fractional central composite design, was carried out for optimization of the water percentage at the beginning of the run (X1, 80-90%), the level of acetonitrile at the end of gradient system (X2, 10-20%) with the water percentage fixed at 60%, and the flow rate (X3, 0.8-1.2 mL/min). The studied responses were the chromatographic peak area, the resolution factor and the time of analysis. Optimal chromatographic conditions were: X1 = 80%, X2 = 10%, and X3 = 1 mL/min. Following a double sample extraction with water and a mixture of methanol/water, mycotoxins were rapidly purified by an optimized solid-phase extraction protocol. The optimized method was further validated with respect to linearity (R(2) >0.9991), sensitivity, precision, and recovery (90-112%). The application to 23 commercial cereal samples from Greece showed contamination levels below the legally set limits, except for one maize sample. The main advantages of the developed method are the simplicity of operation and the low cost. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
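A small worked instance of SAA: replace the expected cost with its sample average, then hand the resulting deterministic surrogate to any optimizer (here a grid search). The newsvendor-style objective and all numbers are illustrative assumptions, not from the report:

```python
import random

def sample_average_cost(q, demands, price=2.0, cost=1.0):
    """Sample average of the newsvendor loss: purchase cost minus sales revenue."""
    return sum(cost * q - price * min(q, d) for d in demands) / len(demands)

def saa_optimize(n_samples, seed=7):
    """Draw the scenarios once, then solve the SAA problem by grid search
    over integer order quantities q."""
    rng = random.Random(seed)
    demands = [rng.uniform(0, 100) for _ in range(n_samples)]
    return min(range(0, 101), key=lambda q: sample_average_cost(q, demands))

q_hat = saa_optimize(n_samples=5000)
# True optimizer is the critical fractile q* = 100*(p-c)/p = 50
# for Uniform(0, 100) demand, so q_hat should land nearby for large n.
```

The budget-allocation question studied here is how to split a fixed computing budget between the number of scenarios n and the effort spent by the optimizer on the surrogate.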
Determination of 3-MCPD by GC-MS/MS with PTV-LV injector used for a survey of Spanish foodstuffs.
León, Nuria; Yusà, Vicent; Pardo, Olga; Pastor, Agustín
2008-05-15
3-Monochloropropane-1,2-diol (3-MCPD) is the most common chemical contaminant of the group of chloropropanols. It can occur in foods and food ingredients at low levels as a result of processing, migration from packaging materials during storage, and domestic cooking. A sensitive method for the determination of 3-MCPD in foodstuffs using programmable temperature vaporization (PTV) with large-volume injection (LVI) gas chromatography (GC) with tandem mass spectrometry detection (MS/MS) has been developed and optimized. The optimization of the injection and detection parameters was carried out using statistical experimental design. A Plackett-Burman design was used to estimate the influence of resonance excitation voltage (REV), isolation time (IT), excitation time (ET), ion source temperature (IST), and electron energy (EE) on the analytical response in the ion trap mass spectrometer (ITMS). Only REV was found to have a statistically significant effect. On the other hand, a central composite design was used to optimize the settings of injection temperature (T(inlet)), vaporization temperature (T(vap)), vaporization time (t(vap)) and flow (Flow). The optimized method has an instrumental limit of detection (signal-to-noise ratio 3:1) of 0.044 ng mL(-1). Ninety-four food samples from supermarkets in Valencia, Spain, were surveyed for 3-MCPD. Using the optimized method, levels higher than the limit established for soy sauce by the European Union were found in some samples. The estimated daily intake of 3-MCPD through the investigated foodstuffs for adults and children was found to be about 0.005 and 0.01%, respectively, of the established provisional tolerable daily intake.
Besmer, Michael D.; Hammes, Frederik; Sigrist, Jürg A.; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. 
This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies. PMID:29213255
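The proposed trigger logic (three targeted samples per precipitation event) can be sketched as below; the 10 mm threshold and the sampling offsets are illustrative assumptions, not the values used by the utility:

```python
def schedule_event_samples(precip_mm, threshold=10.0, offsets_h=(2, 12, 24)):
    """Scan an hourly precipitation trace; when a reading reaches the threshold,
    schedule three targeted samples at fixed offsets after the event, and skip
    re-triggering until the last scheduled sample has passed."""
    samples, blocked_until = [], -1
    for hour, p in enumerate(precip_mm):
        if p >= threshold and hour > blocked_until:
            samples.extend(hour + off for off in offsets_h)
            blocked_until = hour + max(offsets_h)
    return samples

# Hypothetical hourly rainfall trace with one event at hour 3:
rain = [0, 0, 1, 15, 8, 0, 0, 2, 0, 0]
plan = schedule_event_samples(rain)
```

Compared with simply raising the routine sampling frequency, this concentrates the analysis effort on the periods when peak bacterial concentrations are expected.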
Pintado-Herrera, Marina G; González-Mazo, Eduardo; Lara-Martín, Pablo A
2014-12-03
This work presents the development, optimization and validation of a multi-residue method for the simultaneous determination of 102 contaminants, including fragrances, UV filters, repellents, endocrine disruptors, biocides, polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), and several types of pesticides in aqueous matrices. Water samples were processed using stir bar sorptive extraction (SBSE) after the optimization of several parameters: agitation time, ionic strength, presence of organic modifiers, pH, and volume of the derivatizing agent. Target compounds were extracted from the bars by liquid desorption (LD). Separation, identification and quantification of analytes were carried out by gas chromatography (GC) coupled to time-of-flight mass spectrometry (ToF-MS). A new ionization source, atmospheric pressure gas chromatography (APGC), was tested. The optimized protocol showed acceptable recovery percentages (50-100%) and limits of detection below 1 ng L(-1) for most of the compounds. Occurrence of 21 out of 102 analytes was confirmed in several environmental aquatic matrices, including seawater, sewage effluent, river water and groundwater. Non-target compounds such as organophosphorus flame retardants were also identified in real samples by accurate mass measurement of their molecular ions using GC-APGC-ToF-MS. To the best of our knowledge, this is the first time that this technique has been applied for the analysis of contaminants in aquatic systems. By employing lower energy than the more widely used electron impact ionization (EI), APGC provides significant advantages over EI for those substances very susceptible to high fragmentation (e.g., fragrances, pyrethroids). Copyright © 2014 Elsevier B.V. All rights reserved.
Motealleh, Behrooz; Zahedi, Payam; Rezaeian, Iraj; Moghimi, Morvarid; Abdolghaffari, Amir Hossein; Zarandi, Mohammad Amin
2014-07-01
For the first time, optimum conditions were sought for electrospun poly(ε-caprolactone)/polystyrene (PCL/PS) nanofibrous samples as active wound dressings containing chamomile, via a D-optimal design approach. In this work, systematic in vitro and in vivo studies were carried out by drug release rate, antibacterial and antifungal evaluations, cell culture, and a rat wound model along with histology observation. The optimized samples were prepared under the following electrospinning conditions: PCL/PS ratio (65/35), PCL concentration 9% (w/v), PS concentration 14% (w/v), distance between the syringe needle tip and the collector 15.5 cm, applied voltage 18 kV, and solution flow rate 0.46 mL h(-1). The FE-SEM micrographs showed that the electrospun PCL/PS (65/35) nanofibrous sample containing 15% chamomile had a minimum average diameter (∼175 nm) compared to the neat samples (∼268 nm). The optimized PCL/PS nanofibrous sample showed a gradual and high release of chamomile (∼70%) relative to the PCL and PS nanofibers after 48 h. This claim was also confirmed by antibacterial and antifungal evaluations, in which an inhibitory zone with a diameter of about 7.6 mm was formed. The rat wound model results also indicated that the samples loaded with 15% chamomile extract were remarkably capable of healing the wounds, up to 99 ± 0.5% after 14 days of post-treatment. The adhesion of mesenchymal stem cells and their viability on the optimized samples were confirmed by MTT analysis. Also, the electrospun nanofibrous mats based on PCL/PS (65/35) showed high efficiency in wound closure and healing compared to the reference sample, PCL/PS nanofibers without chamomile. Finally, the histology analysis revealed the formation of epithelial tissue, the absence of necrosis, and the accumulation of collagen fibers in the dermis for the optimized samples. © 2013 Wiley Periodicals, Inc.
Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying
2017-04-15
Dual-cloud point extraction (dCPE) was successfully developed for the simultaneous extraction of trace sulfonamides (SAs), including sulfamerazine (SMZ), sulfadoxin (SDX), and sulfathiazole (STZ), in urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, and back-extraction centrifugation rate and time. High-performance liquid chromatography (HPLC) was applied for the SAs analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9 min, and excellent analytical performances were attained. Good linear relationships (R(2) ≥ 0.9990) between peak area and concentration were obtained from 0.02 to 10 μg/mL for SMZ and STZ, and from 0.01 to 10 μg/mL for SDX. Detection limits of 3.0-6.2 ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined with urine, lake and tap water spiked at 0.2, 0.5 and 1 μg/mL, respectively, with relative standard deviations (RSDs, n = 6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative tool to existing methods for analysing trace residues of SAs in urine and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Chen, Bo; Huang, Yuming
2014-06-25
Dispersive liquid-phase microextraction with solidification of a floating organic drop (SFO-DLPME) is one of the most interesting sample preparation techniques developed in recent years. In this paper, a new, rapid, and efficient SFO-DLPME method coupled with high-performance liquid chromatography (HPLC) was established for the extraction and sensitive detection of banned Sudan dyes, namely, Sudan I, Sudan II, Sudan III, and Sudan IV, in foodstuff and water samples. Various factors, such as the type and volume of extractants and dispersants, pH and volume of sample solution, extraction time and temperature, ionic strength, and humic acid concentration, were investigated and optimized to achieve optimal extraction of the Sudan dyes in a single step. After optimization of extraction conditions using 1-dodecanol as an extractant and ethanol as a dispersant, the developed procedure was applied for extraction of the target Sudan dyes from 2 g of food samples and 10 mL of spiked water samples. Under the optimized conditions, all Sudan dyes could be easily extracted by the proposed SFO-DLPME method. The limits of detection obtained for the four Sudan dyes were 0.10-0.20 ng g(-1) and 0.03 μg L(-1) when 2 g of foodstuff samples and 10 mL of water samples were adopted, respectively. The inter- and intraday reproducibilities were below 4.8% for analysis of Sudan dyes in foodstuffs. The method was satisfactorily used for the detection of Sudan dyes, and the recoveries of the target analytes for the spiked foodstuff and water samples ranged from 92.6 to 106.6% and from 91.1 to 108.6%, respectively. These results indicated that the proposed method is simple, rapid, sensitive, and suitable for the pre-concentration and detection of the target dyes in foodstuff samples.
López Monzón, A; Vega Moreno, D; Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J
2007-03-01
Solid-phase microextraction (SPME) coupled with high-performance liquid chromatography (HPLC) with fluorescence detection was optimized for extraction and determination of four benzimidazole fungicides (benomyl, carbendazim, thiabendazole, and fuberidazole) in water. We studied extraction and desorption conditions, for example fiber type, extraction time, ionic strength, extraction temperature, and desorption time, to achieve the maximum extraction efficiency. Results indicate that SPME using a 75 μm Carboxen-polydimethylsiloxane (CAR-PDMS) fiber is suitable for extraction of these types of compound. Final analysis of the benzimidazole fungicides was performed by HPLC with fluorescence detection. Recoveries ranged from 80.6 to 119.6% with RSDs below 9% and limits of detection between 0.03 and 1.30 ng mL-1 for the different analytes. The optimized procedure was applied successfully to the determination of benzimidazole fungicide mixtures in environmental water samples (sea, sewage, and ground water).
Wavefront correction using machine learning methods for single molecule localization microscopy
NASA Astrophysics Data System (ADS)
Tehrani, Kayvan F.; Xu, Jianquan; Kner, Peter
2015-03-01
Optical aberrations are a major challenge in imaging biological samples. In particular, in single molecule localization (SML) microscopy techniques (STORM, PALM, etc.) a high Strehl ratio point spread function (PSF) is necessary to achieve sub-diffraction resolution. Distortions in the PSF shape directly reduce the resolution of SML microscopy. The system aberrations caused by imperfections in the optics and instruments can be compensated using adaptive optics (AO) techniques prior to imaging. However, aberrations caused by the biological sample, both static and dynamic, have to be dealt with in real time. A challenge for wavefront correction in SML microscopy is achieving robust optimization in the presence of noise, because of the naturally high fluctuations in photon emission from single molecules. Here we demonstrate particle swarm optimization for real-time correction of the wavefront using an intensity-independent metric. We show that the particle swarm algorithm converges faster than the genetic algorithm for bright fluorophores.
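The particle swarm approach described above can be sketched in a few lines. The following is a minimal, generic PSO minimizer, not the authors' implementation; the sum-of-squares metric standing in for a wavefront-quality measure is purely illustrative:

```python
import random

def pso(metric, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm minimizer. The metric plays the role of the
    wavefront-quality measure; here it is an illustrative stand-in."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [metric(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = metric(pos[i])
            if v < pbest_val[i]:                  # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# toy "aberration" metric: sum of squared mode coefficients, minimal at zero
best, val = pso(lambda x: sum(c * c for c in x), dim=4)
```

In the microscopy setting, each particle position would encode a vector of deformable-mirror or Zernike-mode coefficients and the metric would be evaluated from acquired images.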
Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula
2015-01-01
Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for the mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567
Sample handling for mass spectrometric proteomic investigations of human urine.
Petri, Anette Lykke; Høgdall, Claus; Christensen, Ib Jarle; Simonsen, Anja Hviid; T'jampens, Davy; Hellmann, Marja-Leena; Kjaer, Susanne Krüger; Fung, Eric T; Høgdall, Estrid
2008-09-01
Because of its non-invasive sample collection method, human urine is an attractive biological material both for discovering biomarkers and for use in future screening trials for different diseases. Before urine can be used for these applications, standardized protocols for sample handling that optimize protein stability are required. In this explorative study, we examine the influence of different urine collection methods, storage temperatures, storage times, and repetitive freeze-thaw procedures on the protein profiles obtained by surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS). Urine samples from 11 women were prospectively collected as either morning or midday specimens. The effects of storage temperature, time to freezing, and freeze-thaw cycles were assessed by calculating the number, intensity, and reproducibility of peaks visualized by SELDI-TOF-MS. On the CM10 array, 122 peaks were detected and 28 peaks were found to be significantly different between urine types, storage temperature and time to freezing. On the IMAC-Cu array, 65 peaks were detected and 1 peak was found to be significantly different according to time to freezing. No significant differences were demonstrated for freeze-thaw cycles. Optimal handling and storage conditions are necessary in clinical urine proteomic investigations. Collection of urine with a single and consistently performed protocol is needed to reduce analytical bias. Collecting only one urine type, stored for a limited period at 4°C until freezing at -80°C prior to analysis, will provide the most stable profiles. Copyright © 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
NASA Astrophysics Data System (ADS)
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2016-08-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parked in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences reported a solution with an asteroid sample mass of 328 tons, which was ranked first in the competition. In this article, we present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.
DoE optimization of a mercury isotope ratio determination method for environmental studies.
Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico
2016-05-15
By using the design of experiments (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with a suitable internal precision. By running 32 experiments, the influence of the mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model identified correlations among the parameters that affect measurement precision, and predicts suitable sample measurement precision for Hg concentrations from 5 ng g(-1) upwards. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) phenomena in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.
Grassi, G; Cappello, N; Gheorghe, M F; Salton, L; Di Bisceglie, C; Manieri, C; Benedetto, C
2010-11-01
The objective of this study is to determine the optimal conditions for incubation of human semen treated with exogenous platelet-activating factor (ePAF) for intra-uterine insemination (IUI). This prospective study was carried out on 32 infertile men; each semen sample was processed with the ISolate Sperm Separation Medium, washed with sperm washing medium (SWM) and resuspended either in SWM alone (control samples) or with ePAF at 0.1, 0.5, or 1.0 μM. Each concentration was subsequently incubated and evaluated at 5, 15, 30, and 60 min. The motility parameters were evaluated by the computer-aided sperm analysis (C.A.S.A.) system. Curvilinear velocity, straight-line velocity, average path velocity, and rapid and progressive motility significantly increased compared to control samples at an ePAF concentration of 0.1 μM (with at least 15 min of incubation). The best results were obtained with ePAF concentrations of 0.1 μM (60 min of incubation) and 0.5 μM (30-60 min of incubation). In conclusion, results are enhanced when ePAF is added to standard semen preparation for IUI. An ePAF concentration of 0.1 μM, with an incubation time of 15 min, can be used for semen samples with normal motility, whilst for semen samples with poor motility the ePAF concentration is best increased to 0.5 μM and/or the incubation time prolonged to 60 min.
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
Sarker, Mohamed Zaidul Islam; Selamat, Jinap; Habib, Abu Sayem Md. Ahsan; Ferdosh, Sahena; Akanda, Mohamed Jahurul Haque; Jaffri, Juliana Mohamed
2012-01-01
Fish oil was extracted from the viscera of African catfish using supercritical carbon dioxide (SC-CO2). A Central Composite Design of Response Surface Methodology (RSM) was employed to optimize the SC-CO2 extraction parameters. The oil yield (Y) as response variable was modelled against the four independent variables, namely pressure, temperature, flow rate and soaking time. The oil yield varied with the linear, quadratic and interaction terms of pressure, temperature, flow rate and soaking time. Optimum points were observed within the variables of temperature from 35 °C to 80 °C, pressure from 10 MPa to 40 MPa, flow rate from 1 mL/min to 3 mL/min and soaking time from 1 h to 4 h. The extraction parameters were found to be optimized at a temperature of 57.5 °C, pressure of 40 MPa, flow rate of 2.0 mL/min and soaking time of 2.5 h. At this optimized condition, the highest oil yield was found to be 67.0% (g oil/100 g sample on dry basis) in the viscera of catfish, which compares reasonably with the yield of 78.0% extracted using the Soxhlet method. PMID:23109854
New clinical insights for transiently evoked otoacoustic emission protocols.
Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw
2009-08-01
The objective of the study was to optimize the area of a time-frequency (TF) analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the optimized TF region were higher than the traditional indices in the selected time window, and the difference was statistically significant. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. Then, the system increases or decreases the reference intensity following the map data for optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, enabled the integration time to be changed without manual calibration of the reference intensity, and prevented image degradation due to over-saturation and insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
The influence of optimism and pessimism on student achievement in mathematics
NASA Astrophysics Data System (ADS)
Yates, Shirley M.
2002-11-01
Students' causal attributions are not only fundamental motivational variables but are also critical motivators of their persistence in learning. Optimism, pessimism, and achievement in mathematics were measured in a sample of primary and lower secondary students on two occasions. Although achievement in mathematics was most strongly related to prior achievement and grade level, optimism and pessimism were significant factors. In particular, students with a more generally pessimistic outlook on life had a lower level of achievement in mathematics over time. Gender was not a significant factor in achievement. The implications of these findings are discussed.
Basheer, Chanbasha
2018-04-01
An efficient on-site extraction technique to determine carcinogenic heterocyclic aromatic amines in seawater is reported. A micro-solid-phase extraction device placed inside a portable battery-operated pump was used for the on-site extraction of seawater samples. Before on-site application, the parameters that influence the extraction efficiency (extraction time, type of sorbent material, desorption solvent, desorption time, and sample volume) were investigated and optimized in the laboratory. The developed method was then used for the on-site determination of heterocyclic aromatic amines in seawater samples collected close to a distillation plant. Once the on-site extraction was completed, the small extraction device with the analytes was brought back to the laboratory for analysis using high-performance liquid chromatography with fluorescence detection. Based on the optimized conditions, the calibration curves were linear over the concentration range of 0.05-20 μg/L with correlation coefficients up to 0.996. The limits of detection were 0.004-0.026 μg/L, and the reproducibility values were between 1.3 and 7.5%. To evaluate the extraction efficiency, a comparison was made with conventional solid-phase extraction, and the method was applied to various fortified real seawater samples. The average relative recoveries obtained from the spiked seawater samples varied in the range 79.9-95.2%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
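Calibration linearity and detection limits like those quoted in several of these abstracts are commonly derived from an ordinary least-squares fit of signal against concentration. A minimal sketch follows; the 3.3·s/slope LOD convention used here is an assumption for illustration, since the abstracts do not state how their LODs were computed:

```python
import math

def calibration_lod(conc, signal):
    """Fit signal = intercept + slope*conc by ordinary least squares and
    estimate a detection limit as 3.3 * (residual std dev) / slope
    (a common convention; hypothetical for these specific methods)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(conc, signal)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std dev
    return slope, intercept, 3.3 * s / slope

# invented, perfectly linear standards: y = 2x + 0.5
slope, intercept, lod = calibration_lod([0, 1, 2, 4, 8],
                                        [0.5, 2.5, 4.5, 8.5, 16.5])
```

With real, noisy standards the residual standard deviation is nonzero and the LOD estimate becomes meaningful.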
Memarian, Elham; Hosseiny Davarani, Saied Saeed; Nojavan, Saeed; Movahed, Siyavash Kazemi
2016-09-07
In this work, a new solid-phase microextraction fiber was prepared based on nitrogen-doped graphene (N-doped G). Moreover, a new strategy was proposed to solve the problems encountered in direct coating of N-doped G. For this purpose, graphene oxide (GO) was first coated on a Pt wire by the electrophoretic deposition method. Then, chemical reduction of the coated GO to N-doped G was accomplished with hydrazine and NH3. The prepared fiber showed good mechanical and thermal stability. The obtained fiber was used in two different modes: conventional headspace solid-phase microextraction and cold-fiber headspace solid-phase microextraction (CF-HS-SPME). Both modes were optimized and applied for the extraction of benzene and xylenes from different aqueous samples. All effective parameters, including extraction time, salt content, stirring rate, and desorption time, were optimized. The optimized CF-HS-SPME combined with GC-FID showed good limits of detection (LODs, 0.3-2.3 μg/L), limits of quantification (LOQs, 1.0-7.0 μg/L) and linear ranges (1.0-5000 μg/L). The developed method was applied for the analysis of benzene and xylenes in rainwater and some wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Li, Wenjing; Lin, Yu; Wang, Yuchun; Hong, Bo
2017-09-11
A method based on a simplified extraction by matrix solid-phase dispersion (MSPD) followed by determination with ultra-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry (UPLC/Q-TOF-MS) is validated for the analysis of two phenolics and three terpenoids in Euphorbia fischeriana. The optimized experimental parameters of MSPD, including dispersing sorbent (silica gel), ratio of sample to dispersing sorbent (1:2), elution solvent (water-ethanol, 30:70) and volume of the elution solvent (10 mL), were examined and established. The highest extraction yields of chromatographic information and of the five compounds were obtained under the optimized conditions. A total of 25 constituents have been identified and five components have been quantified from Euphorbia fischeriana. A linear relationship (r² ≥ 0.9964) between the concentrations and the peak areas of the mixed standard substances was observed. The average recovery was between 92.4% and 103.2% with RSD values less than 3.45% (n = 5). The extraction yields of the two phenolics and three terpenoids obtained by MSPD were higher than those of traditional reflux and sonication extraction, with reduced requirements for sample, solvent and time. In addition, the optimized method can be applied to analyzing terpenoids in other Chinese herbal medicine samples.
Barman, Nirmali; Badwaik, Laxmikant S
2017-01-01
Osmotic dehydration (OD) of carambola slices was carried out using glucose, sucrose, fructose and glycerol as osmotic agents with 70°Bx solute concentration, at 50°C, for 180 min. Glycerol and sucrose were selected on the basis of their higher water loss and weight reduction and lower solid gain. Further optimization of OD of carambola slices (5 mm thick) was carried out under different process conditions of temperature (40-60°C), concentration of sucrose and glycerol (50-70°Bx), time (180 min) and fruit-to-solution ratio (1:10) against various responses, viz. water loss, solid gain, texture, rehydration ratio and sensory score, according to a composite design. The optimized values for temperature and concentrations of sucrose and glycerol were found to be 50°C, 66°Bx and 66°Bx, respectively. Under optimized conditions, the effects of ultrasound for 10, 20 and 30 min and of centrifugal force (2800 rpm) for 15, 30, 45 and 60 min on OD of carambola slices were examined. The control samples showed 68.14% water loss and 13.05% solid gain in carambola slices, while the sample given 30 min of ultrasonic treatment showed 73.76% water loss and 9.79% solid gain, and the sample treated with centrifugal force for 60 min showed 75.65% water loss and 6.76% solid gain. The results showed that with increasing treatment time, water loss and rehydration ratio increased while solid gain and texture decreased. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuomas, V.; Jaakko, L.
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections at target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
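The rejection-sampling idea the abstract describes, flying distances drawn from a bounding majorant cross section and accepting a real collision with probability σ/σ_majorant, can be illustrated with a delta-tracking-style toy model. This is a sketch of the general technique, not Serpent's implementation; the fluctuating cross section below stands in for the thermal-motion dependence:

```python
import math
import random

def sample_free_flight(sigma_of_state, sigma_majorant, rng):
    """Sample a flight distance by rejection: tentative collision sites are
    drawn from the (constant) majorant, and a real collision is accepted with
    probability sigma/sigma_majorant. In TMS, sigma would be evaluated after
    re-sampling the thermal target velocity at each tentative site."""
    distance = 0.0
    while True:
        distance += -math.log(rng.random()) / sigma_majorant
        sigma = sigma_of_state(rng)   # re-sample the state-dependent cross section
        if rng.random() < sigma / sigma_majorant:
            return distance

rng = random.Random(1)
# toy medium: effective cross section fluctuates uniformly in [0.5, 1.0],
# bounded above by the majorant 1.0 (a tighter majorant means fewer rejections)
mean = sum(sample_free_flight(lambda r: 0.5 + 0.5 * r.random(), 1.0, rng)
           for _ in range(20000)) / 20000
```

With a mean cross section of 0.75, the sampled mean free path approaches 1/0.75 ≈ 1.33; loosening the majorant leaves the result unchanged but raises the rejection overhead, which is exactly the conservativity trade-off the abstract tunes.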
Positivity in healthcare: relation of optimism to performance.
Luthans, Kyle W; Lebsack, Sandra A; Lebsack, Richard R
2008-01-01
The purpose of this paper is to explore the linkage between nurses' levels of optimism and performance outcomes. The study sample consisted of 78 nurses in all areas of a large healthcare facility (hospital) in the Midwestern United States. The participants completed surveys to determine their current state of optimism. Supervisory performance appraisal data were gathered in order to measure performance outcomes. Spearman correlations and a one-way ANOVA were used to analyze the data. The results indicated a highly significant positive relationship between the nurses' measured state of optimism and their supervisors' ratings of their commitment to the mission of the hospital, a measure of contribution to increasing customer satisfaction, and an overall measure of work performance. This was an exploratory study. Larger sample sizes and longitudinal data would be beneficial because it is probable that state optimism levels will vary and that it might be more accurate to measure state optimism at several points over time in order to better predict performance outcomes. Finally, the study design does not imply causation. Suggestions for effectively developing and managing nurses' optimism to positively impact their performance are provided. To date, there has been very little empirical evidence assessing the impact that positive psychological capacities such as optimism of key healthcare professionals may have on performance. This paper was designed to help begin to fill this void by examining the relationship between nurses' self-reported optimism and their supervisors' evaluations of their performance.
Ghamari, Farhad; Bahrami, Abdulrahman; Yamini, Yadollah; Shahna, Farshid Ghorbani; Moghimbeigi, Abbas
2016-01-01
For the first time, hollow-fiber liquid-phase microextraction combined with high-performance liquid chromatography with ultraviolet detection was used to extract trans,trans-muconic acid (ttMA) from urine samples of workers who had been exposed to benzene. The parameters affecting the metabolite extraction were optimized as follows: a sample solution volume of 11 mL at pH 2; a liquid membrane containing dihexyl ether as the support and 15% (w/v) trioctylphosphine oxide as the carrier; an extraction time of 120 minutes; and a stirring rate of 500 rpm. Analytes in the organic phase impregnated in the pores of the hollow fiber were extracted into 24 µL of 0.05 mol L−1 Na2CO3 solution located inside the lumen of the fiber. Under optimized conditions, a high enrichment factor of 153-182-fold, relative recoveries of 83%-92%, and a detection limit of 0.001 µg mL−1 were obtained. The method was successfully applied to the analysis of ttMA in real urine samples. PMID: 27660405
Gong, Sheng-Xiang; Wang, Xia; Li, Lei; Wang, Ming-Lin; Zhao, Ru-Song
2015-11-01
In this paper, a novel and simple method for the sensitive determination of the endocrine-disrupting compounds octylphenol (OP) and nonylphenol (NP) in environmental water samples has been developed using solid-phase microextraction (SPME) coupled with gas chromatography-mass spectrometry. Carboxylated carbon nano-spheres (CNSs-COOH) are used as a novel SPME coating via physical adhesion. The CNSs-COOH fiber possessed higher adsorption efficiency than a 100 μm polydimethylsiloxane (PDMS) fiber and was similar to an 85 μm polyacrylate (PA) fiber for the two analytes. Important parameters, such as extraction time, pH, agitation speed, ionic strength, and desorption temperature and time, were investigated and optimized in detail. Under the optimal parameters, the developed method achieved low limits of detection of 0.13-0.14 ng·L(-1) and a wide linear range of 1-1000 ng·L(-1) for OP and NP. The novel method was validated with several real environmental water samples, and satisfactory results were obtained.
info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.
Defrance, Matthieu; van Helden, Jacques
2009-10-15
Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. http://rsat.ulb.ac.be/rsat/info-gibbs
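The information content that info-gibbs optimizes directly is the relative entropy of the motif's position-specific letter frequencies against a background model. A minimal sketch follows, assuming a uniform 0.25 background and simple pseudocounts; both are illustrative assumptions, not the tool's exact defaults:

```python
import math

def information_content(sites, alphabet="ACGT", pseudo=0.5):
    """Information content (relative entropy vs. a uniform background) of an
    aligned set of binding sites: sum over columns j and letters a of
    f(a,j) * log2(f(a,j) / bg), with pseudocount-smoothed frequencies."""
    length = len(sites[0])
    bg = 1.0 / len(alphabet)
    ic = 0.0
    for j in range(length):
        column = [s[j] for s in sites]
        for a in alphabet:
            f = (column.count(a) + pseudo) / (len(sites) + pseudo * len(alphabet))
            ic += f * math.log2(f / bg)
    return ic

# a perfectly conserved 3-column motif scores just under 3 x 2 bits
# (the pseudocounts pull the score below the 2-bits-per-column ceiling)
ic = information_content(["ACG", "ACG", "ACG", "ACG"])
```

A Gibbs sampler in the spirit of the abstract would iteratively re-sample one site assignment at a time, biased toward assignments that raise this score.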
Determination of T-2 and HT-2 toxins from maize by direct analysis in real time mass spectrometry
USDA-ARS?s Scientific Manuscript database
Direct analysis in real time (DART) ionization coupled to mass spectrometry (MS) was used for the rapid quantitative analysis of T-2 toxin, and the related HT-2 toxin, extracted from corn. Sample preparation procedures and instrument parameters were optimized to obtain sensitive and accurate determi...
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Michelson, Diane K.
2017-01-01
During conceptual design, speed and accuracy are often at odds. Specifically, in the realm of launch vehicles, optimizing the ascent trajectory requires a large pool of analytical power and expertise. Experienced analysts working on familiar vehicles can produce optimal trajectories in a short time frame; however, whenever either "experienced" or "familiar" is not applicable, the optimization process can become quite lengthy. In order to construct a vehicle-agnostic method, an established global optimization algorithm is needed. In this work the authors develop an "artificial" error term to map arbitrary control vectors to non-zero error, by which a global method can operate. Two global methods are compared alongside Design of Experiments and random sampling, and are shown to produce comparable results to analysis done by a human expert.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
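The re-solve-the-Riccati-equation-at-each-sample idea can be illustrated with a scalar discrete-time toy problem. This is a sketch of the general SDARE pattern, not the authors' six-degree-of-freedom controller; the system and weights below are invented:

```python
def dlqr_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR gain: iterate the Riccati difference equation
    P <- Q + a^2 P - (a b P)^2 / (R + b^2 P) to its fixed point, then return
    the feedback gain k so that u = -k*x is optimal for that linearization."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# invented plant: discretized single integrator x[k+1] = x[k] + 0.1*u[k].
# As in the SDARE scheme, the gain is re-computed at every sample time;
# here the linearization is constant, so the gain is too, but with
# state-dependent a(x), b(x) it would vary from step to step.
a, b = 1.0, 0.1
x = 1.0
for _ in range(100):
    k = dlqr_gain(a, b, q=1.0, r=0.1)
    x = a * x + b * (-k * x)
```

The per-sample re-solve is what makes the scheme sub-optimal yet real-time implementable: each step is optimal only for the frozen linearization at that instant.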
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models.
The study also illustrates that this relative improvement decreases with increasing numbers of sample points and input parameter dimensions. Since the computational time and effort for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is recommended.
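The two initial-design choices can be reproduced with SciPy's quasi-Monte Carlo module (an assumed stand-in; the OLHS codes used in the study are not shown): `scramble=False` places every sample at the midpoint of its stratum (midpoint LHS), while `scramble=True` draws a random point within each stratum (random LHS). A minimal sketch:

```python
import numpy as np
from scipy.stats import qmc

n, d = 8, 2  # sample points, input dimensions
# Random LHS: one uniformly random point inside each hypercube stratum.
random_lhs = qmc.LatinHypercube(d=d, scramble=True, seed=0).random(n)
# Midpoint LHS: the centre of each stratum is used instead.
midpoint_lhs = qmc.LatinHypercube(d=d, scramble=False, seed=0).random(n)

# Every midpoint-LHS coordinate sits exactly at a stratum centre (k + 0.5)/n.
assert np.allclose((midpoint_lhs * n) % 1.0, 0.5)
```

Either design would then be fed as the starting point of a space-filling optimization (e.g. a maximin-distance criterion), which is the OLHS step whose sensitivity the study examines.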
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required in every sampling period. Therefore, the parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors by optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
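The compile-time assignment of tasks to processors can be illustrated with a simple greedy list scheduler. This is a hedged sketch of static scheduling in general, not OSCAR's actual heuristic, and the task graph below is hypothetical:

```python
def static_schedule(durations, deps, n_proc):
    """Greedy compile-time list scheduling onto n_proc processors:
    each task starts once its predecessors finish, on the processor
    that becomes free earliest (a simplification of static scheduling,
    not the algorithm used by OSCAR)."""
    indeg = {t: len(deps.get(t, [])) for t in durations}
    succs = {t: [] for t in durations}
    for t, preds in deps.items():
        for p in preds:
            succs[p].append(t)
    ready = sorted(t for t, k in indeg.items() if k == 0)
    proc_free = [0.0] * n_proc
    finish = {}
    while ready:
        task = ready.pop(0)
        earliest = max((finish[p] for p in deps.get(task, [])), default=0.0)
        k = min(range(n_proc), key=lambda i: proc_free[i])
        start = max(earliest, proc_free[k])
        finish[task] = start + durations[task]
        proc_free[k] = finish[task]
        for s in succs[task]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
        ready.sort()
    return finish, max(finish.values())

# Hypothetical basic-block task graph: c needs a; d needs a and b.
finish, makespan = static_schedule(
    {'a': 2, 'b': 2, 'c': 2, 'd': 1},
    {'c': ['a'], 'd': ['a', 'b']},
    n_proc=2)
```

Because the whole schedule is fixed before execution, no run-time scheduler is needed, which is the point of the static approach for near-fine-grain tasks.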
Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter
NASA Astrophysics Data System (ADS)
Guo, Xiang-Gui; Yang, Guang-Hong
2012-04-01
This article is concerned with the problem of designing H∞ filters with low sensitivity to sampling time jitter for a class of linear discrete-time systems via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning that results from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the filter design is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar
2017-09-01
A sensitive analytical method is investigated to concentrate and determine trace levels of sildenafil citrate (SLC) present in water and urine samples. The method is based on sample treatment by dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as the sorbent for the target analyte. The efficiency was enhanced by ultrasound assistance (UA) combined with dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time, and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were then optimized by central composite design (CCD) to maximize the extraction of SLC. The optimum conditions for maximum SLC extraction were pH 6.0, 300 μL eluent (acetonitrile) volume, 10 mg of adsorbent, and 6 min sonication time. Under optimized conditions, good linearity was obtained for SLC over the range 30-4000 ng mL⁻¹ with an R² of 0.99. The limit of detection (LOD) was 2.50 ng mL⁻¹, and the recoveries at two spiked levels ranged from 97.37 to 103.21% with relative standard deviations (RSD) less than 4.50% (n = 15). The enhancement factor (EF) was 81.91. The results show that combining UA with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Xiaoguang; Ji, Guangbin; Lv, Hualiang; Wang, Min; Du, Youwei
2014-04-01
Fe/Cu composite samples with Cu particles deposited on carbonyl iron sheets were prepared by chemical plating. The Cu additions were uniformly distributed on the grain boundaries of the flaky carbonyl iron while keeping the internal structure of the iron intact. We found that the chemical plating time plays a key role in both the microwave absorbing properties and the infrared emissivity. As the plating time increases, the reflection loss decreases linearly and the infrared emissivity is reduced with an index-like decay. When the plating time is less than 30 min, the reflection loss of the samples remains above -20 dB; prolonging the plating time beyond 30 min reduces the infrared emissivity of the samples to 0.50 or less. It can be concluded that both the microwave absorbing and infrared properties are excellent at the optimal plating time of 30 min.
Adaptive sampling of AEM transients
NASA Astrophysics Data System (ADS)
Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea
2016-02-01
This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). Log-gating has two main benefits: improving the signal-to-noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring good resolution at early times. However, as a result of its fixed time gates, conventional log-gating does not account for geological variations in the surveyed area, nor for the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
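Conventional fixed log-gating can be sketched in a few lines; the time window and gate count below are hypothetical, and the paper's adaptive slope-based regating is not reproduced here:

```python
import numpy as np

def log_gates(t_min, t_max, n_gates):
    """Fixed log-gating: gate edges grow logarithmically, so early gates
    are narrow (early-time resolution) and late gates are wide (noise
    averaging at late times)."""
    edges = np.logspace(np.log10(t_min), np.log10(t_max), n_gates + 1)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric gate centres
    widths = np.diff(edges)
    return edges, centers, widths

# Hypothetical TDEM recording window: 10 us to 10 ms, 20 gates.
edges, centers, widths = log_gates(1e-5, 1e-2, 20)
assert np.all(np.diff(widths) > 0)  # gate width grows monotonically
```

An adaptive scheme, by contrast, would redistribute the gate edges so that more, narrower gates fall where the measured transient changes slope rapidly.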
Perales-Sánchez, Janitzio X K; Reyes-Moreno, Cuauhtémoc; Gómez-Favela, Mario A; Milán-Carrillo, Jorge; Cuevas-Rodríguez, Edith O; Valdez-Ortiz, Angel; Gutiérrez-Dorado, Roberto
2014-09-01
The aim of this study was to optimize the germination conditions of amaranth seeds so as to maximize the antioxidant activity (AoxA), total phenolic content (TPC), and total flavonoid content (TFC). To optimize the germination bioprocess, response surface methodology was applied over three response variables (AoxA, TPC, TFC). A central composite rotatable experimental design with two factors [germination temperature (GT), 20-45 °C; germination time (Gt), 14-120 h] at five levels was used; 13 treatments were generated. The amaranth seeds were soaked in distilled water (25 °C/6 h) before germination. The sprouts from each treatment were dried (50 °C/8 h), cooled, and ground to obtain germinated amaranth flours (GAF). The best combination of germination bioprocess variables for producing optimized GAF with the highest AoxA [21.56 mmol trolox equivalent (TE)/100 g sample, dw], TPC [247.63 mg gallic acid equivalent (GAE)/100 g sample, dw], and TFC [81.39 mg catechin equivalent (CAE)/100 g sample, dw] was GT = 30 °C/Gt = 78 h. The germination bioprocess increased AoxA, TPC, and TFC by 300-470%, 829%, and 213%, respectively. Germination is an effective strategy to increase the TPC and TFC of amaranth seeds, enhancing functionality with improved antioxidant activity.
Determination of Ignitable Liquids in Fire Debris: Direct Analysis by Electronic Nose
Ferreiro-González, Marta; Barbero, Gerardo F.; Palma, Miguel; Ayuso, Jesús; Álvarez, José A.; Barroso, Carmelo G.
2016-01-01
Arsonists usually use an accelerant in order to start or accelerate a fire. The most widely used analytical method to determine the presence of such accelerants consists of a pre-concentration step for the ignitable liquid residues followed by chromatographic analysis. Here, a rapid analytical method based on headspace-mass spectrometry electronic nose (E-Nose) has been developed for the analysis of ignitable liquid residues (ILRs). The working conditions for the E-Nose analytical procedure were optimized by studying different fire debris samples. The optimized experimental variables were related to headspace generation, specifically incubation temperature and incubation time; the optimal conditions were 115 °C and 10 min for these two parameters. Chemometric tools such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) were applied to the MS data (45–200 m/z) to establish the most suitable spectroscopic signals for the discrimination of several ignitable liquids. The optimized method was applied to a set of fire debris samples. In order to simulate post-burn samples, several ignitable liquids (gasoline, diesel, citronella, kerosene, paraffin) were used to ignite different substrates (wood, cotton, cork, paper and paperboard). Full discrimination was obtained using discriminant analysis. The method reported here can be considered a green technique for fire debris analyses. PMID:27187407
2018-01-01
Starch is increasingly used as a functional ingredient in many industrial applications and foods due to its ability to work as a thickener. The experimental values for extracting starch from yellow-skin potato indicate processing conditions of 3000 rpm and 15 min as optimal for the highest yield of extracted starch. The effect of adding different concentrations of starch extracted under the optimized conditions was studied by determining the acidity, pH, syneresis, microbial counts, and sensory attributes of yogurt manufactured and stored at 5 °C for 15 days. The results showed that adding sufficient concentrations of starch (0.75%, 1%) could provide better results in terms of the minimum change in total acidity, decrease in pH, reduction in syneresis, and preferable results for all sensory parameters. The results revealed that the total bacterial count of all yogurt samples increased throughout the storage time. However, adding different concentrations of the optimized extracted starch had a significant effect, decreasing the microbial content compared with the control sample (YC). In addition, the results indicated that coliform bacteria were not found during the storage time. PMID:29382115
Yari, Abdollah; Rashnoo, Saba
2017-11-01
Here, we report a sensitive, simple and rapid method for the analysis of the anthocyanins cyanidin chloride and pelargonidin chloride in cherry, sour cherry, pomegranate and barberry produced in Iran. The analytes were extracted with an acetonitrile-hydrochloric acid (1% v/v) mixture under optimized pretreatment conditions. Clean-up of the fruit extract was conducted by magnetic solid phase extraction using salicylic acid functionalized silica-coated magnetite nanoparticles (SCMNPs) as the adsorbent. The optimized conditions were found using a central composite design. The optimum conditions were: SCMNPs modified with salicylic acid, a sorbent-sample contact time of 10 min, and a mechanical stirring time of 57.3 min. HPLC with UV detection was used for determination of the analytes. The limits of detection (LOD) obtained for the two anthocyanins were 0.02 and 0.03 μg g⁻¹, respectively. The spiked recoveries ranged over 80.0-97.6 and 72.9-97.2%, with relative standard deviations (RSD) of 2.1 and 2.5%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.
2018-03-01
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this either by parallelization on GPU hardware, or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied on GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments have been performed on 3D lung CT data of 19 patients, comparing NiftyRegSGD (with and without the random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5 s, 4.4 s and 2.8 s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
Sarvin, Boris; Fedorova, Elizaveta; Shpigun, Oleg; Titova, Maria; Nikitin, Mikhail; Kochkin, Dmitry; Rodin, Igor; Stavrianidi, Andrey
2018-03-30
In this paper, an ultrasound-assisted extraction method for the isolation of steroidal glycosides from D. deltoidea plant cell suspension culture, with subsequent HPLC-MS determination, was developed. After the organic solvent was selected via a two-factor experiment, an optimization via a 4 × 4 Latin square experimental design was carried out for the following parameters: extraction time, organic solvent concentration in the extraction solution, and the ratio of solvent to sample. It was also shown that the ultrasound-assisted extraction method is not suitable for the isolation of steroidal glycosides from D. deltoidea plant material. The results were double-checked using the multiple successive extraction method and refluxing extraction. Optimal conditions for the extraction of steroidal glycosides by the ultrasound-assisted extraction method were: extraction time, 60 min; acetonitrile (water) concentration in the extraction solution, 50%; ratio of solvent to sample, 400 mL/g. The developed method was also tested on D. deltoidea cell suspension cultures of different cultivation times and conditions. The completeness of the extraction was confirmed using the multiple successive extraction method. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
Cacho, J I; Nicolás, J; Viñas, P; Campillo, N; Hernández-Córdoba, M
2016-12-02
A solventless analytical method is proposed for analyzing the compounds responsible for cork taint in cork stoppers. Direct sample introduction (DSI) is evaluated as a sample introduction system for the gas chromatography-mass spectrometry (GC-MS) determination of four haloanisoles (HAs) in cork samples. Several parameters affecting the DSI step, including desorption temperature and time, gas flow rate and other focusing parameters, were optimized using univariate and multivariate approaches. The proposed method shows high sensitivity and minimises sample handling, with detection limits of 1.6-2.6 ng g⁻¹, depending on the compound. The suitability of the optimized procedure as a screening method was evaluated by obtaining decision limits (CCα) and detection capabilities (CCβ) for each analyte, which were found to be in the ranges 6.9-11.8 and 8.7-14.8 ng g⁻¹, respectively, depending on the compound. Twenty-four cork samples were analysed, and 2,4,6-trichloroanisole was found in four of them at levels between 12.6 and 53 ng g⁻¹. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah
2017-09-01
The hybrid Box-Jenkins-GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10,200). Each sample is tested using different combinations of the hybrid Box-Jenkins-GARCH model. Our study shows that the optimal sample size for forecasting the gold price using the framework of the hybrid model is the 5-year sample of 1250 observations. Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (the 5-year sample) of the 10,200 observations is sufficient to be employed in the Box-Jenkins-GARCH model, with forecasting performance similar to that obtained using the 41-year data.
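The framework's core loop (refit on a trailing window of each candidate size, then compare 1-step-ahead forecasts out of sample) can be mimicked with a toy AR(1) model fitted by least squares. This is a deliberately simplified stand-in for the full Box-Jenkins-GARCH fit, run on synthetic rather than gold-price data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 2000, 0.6
x = np.zeros(n)  # synthetic AR(1) series standing in for the price data
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def one_step_rmse(series, window):
    """Fit AR(1) by least squares on a trailing window, forecast one
    step ahead, roll forward, and report the out-of-sample RMSE."""
    errs = []
    for t in range(window, len(series)):
        past = series[t - window:t - 1]     # lagged values in the window
        cur = series[t - window + 1:t]      # their successors
        phi_hat = past @ cur / (past @ past)
        errs.append(series[t] - phi_hat * series[t - 1])
    return float(np.sqrt(np.mean(np.square(errs))))

# Compare candidate sample sizes, as the framework does for its 12 sizes.
rmses = {w: one_step_rmse(x, w) for w in (30, 125, 500)}
```

The "optimal" window is then the smallest one whose forecast RMSE is indistinguishable from that of the longest window, which is the spirit of the 5-year finding.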
Microfluidic-Based Robotic Sampling System for Radioactive Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack D. Law; Julia L. Tripp; Tara E. Smith
A novel microfluidic-based robotic sampling system has been developed for sampling and analysis of liquid solutions in nuclear processes. This system couples the use of a microfluidic sample chip with a robotic system designed to allow remote, automated sampling of process solutions in-cell, and facilitates direct coupling of the microfluidic sample chip with analytical instrumentation. This system provides the capability for near-real-time analysis, reduces analytical waste, and minimizes the potential for personnel exposure associated with traditional sampling methods. A prototype sampling system was designed, built and tested. System testing demonstrated operability of the microfluidic-based sample system and identified system modifications to optimize performance.
Raman-spectroscopy-based chemical contaminant detection in milk powder
NASA Astrophysics Data System (ADS)
Dhakal, Sagar; Chao, Kuanglin; Qin, Jianwei; Kim, Moon S.
2015-05-01
Addition of edible and inedible chemical contaminants to food powders for purposes of economic benefit has become a recurring trend. In recent years, severe health issues have been reported due to consumption of food powders contaminated with chemical substances. This study examines the effect of the spatial resolution used during spectral collection in order to select the optimal spatial resolution for detecting melamine in milk powder. A sample depth of 2 mm, laser intensity of 200 mW, and exposure time of 0.1 s were previously determined as optimal experimental parameters for Raman imaging. A spatial resolution of 0.25 mm was determined to be optimal for acquiring the spectral signal of melamine particles from a milk-melamine mixture sample. Using the optimal resolution of 0.25 mm, sample depth of 2 mm and laser intensity of 200 mW obtained from the previous study, spectral signals from five different concentrations of milk-melamine mixture (1%, 0.5%, 0.1%, 0.05%, and 0.025%) were acquired to study the relationship between the number of detected melamine pixels and the corresponding sample concentration. The results show that melamine concentration has a linear relation with the number of detected melamine pixels, with a correlation coefficient of 0.99. It can be concluded that the quantitative analysis of powder mixtures depends on many factors, including the physical characteristics of the mixture, experimental parameters, and sample depth. The results obtained in this study are promising. We plan to apply them to develop a quantitative detection model for rapid screening of melamine in milk powder. This methodology can also be used for detection of other chemical contaminants in milk powders.
Lommen, Jonathan M; Flassbeck, Sebastian; Behl, Nicolas G R; Niesporek, Sebastian; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2018-08-01
To investigate and to reduce influences on the determination of the short and long apparent transverse relaxation times (T2,s*, T2,l*) of ²³Na in vivo with respect to signal sampling. The accuracy of T2* determination was analyzed in simulations for five different sampling schemes. The influence of noise on the parameter fit was investigated for three different models. A dedicated sampling scheme was developed for brain parenchyma by numerically optimizing the parameter estimation. This scheme was compared in vivo to linear sampling at 7 T. For the considered sampling schemes, T2,s*/T2,l* exhibit an average bias of 3%/4% with a variation of 25%/15%, based on simulations with previously published T2* values. The accuracy could be improved with the optimized sampling scheme by strongly averaging the earliest sample. A fitting model with a constant noise floor can increase accuracy, while additionally fitting a noise term is only beneficial when sampling extends to late echo times > 80 ms. T2* values in white matter were determined to be T2,s* = 5.1 ± 0.8 / 4.2 ± 0.4 ms and T2,l* = 35.7 ± 2.4 / 34.4 ± 1.5 ms using linear/optimized sampling. Voxel-wise T2* determination of ²³Na is feasible in vivo. However, sampling and fitting methods have to be chosen carefully to retrieve accurate results. Magn Reson Med 80:571-584, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo
2015-09-01
This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications.
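The LM update itself is independent of how the Jacobian is obtained (FATT, in the paper's case). Below is a minimal sketch of the damped LM step on a toy nonlinear least-squares problem, with hypothetical data standing in for an RNN and its training set:

```python
import numpy as np

def lm_step(residual, jac, x, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) dx = -J^T r,
    where r is the residual vector and J its Jacobian at x."""
    r, J = residual(x), jac(x)
    dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
    return x + dx

# Toy problem: fit y = exp(a*t) + b to data generated with a=0.5, b=2.
t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.5 * t) + 2.0
residual = lambda x: np.exp(x[0] * t) + x[1] - y
jac = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])

x = np.array([0.0, 0.0])
for _ in range(20):
    x = lm_step(residual, jac, x, lam=1e-3)
```

In the paper's setting, J is the Jacobian of the RNN outputs with respect to its weights, accumulated forward through time; production LM codes also adapt lam per step rather than holding it fixed as here.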
Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon
2016-10-01
Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of these the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
High-performance 3D compressive sensing MRI reconstruction.
Kim, Daehyun; Trzasko, Joshua D; Smelyanskiy, Mikhail; Haider, Clifton R; Manduca, Armando; Dubey, Pradeep
2010-01-01
Compressive sensing (CS) is a nascent sampling and reconstruction paradigm that describes how sparse or compressible signals can be accurately approximated using many fewer samples than traditionally believed. In magnetic resonance imaging (MRI), where scan duration is directly proportional to the number of acquired samples, CS has the potential to dramatically decrease scan time. However, the computationally expensive nature of CS reconstructions has so far precluded their use in routine clinical practice - instead, more easily generated but lower-quality images continue to be used. We investigate the development and optimization of a proven inexact quasi-Newton CS reconstruction algorithm on several modern parallel architectures, including CPUs, GPUs, and Intel's Many Integrated Core (MIC) architecture. Our (optimized) baseline implementation on a quad-core Core i7 is able to reconstruct a 256 × 160 × 80 volume of the neurovasculature from an 8-channel, 10× undersampled data set within 56 seconds, which is already a significant improvement over existing implementations. The latest six-core Core i7 reduces the reconstruction time further to 32 seconds. Moreover, we show that the CS algorithm benefits from modern throughput-oriented architectures. Specifically, our CUDA-based implementation on an NVIDIA GTX480 reconstructs the same dataset in 16 seconds, while Intel's Knights Ferry (KNF) of the MIC architecture reduces the time even further to 12 seconds. Such a level of performance allows the neurovascular dataset to be reconstructed within a clinically viable time.
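The flavour of a sparsity-promoting CS reconstruction can be conveyed with plain ISTA on a toy 1-D problem. This is a deliberately simple proximal-gradient method, not the paper's inexact quasi-Newton algorithm, and random Gaussian measurements stand in for undersampled MRI k-space data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                  # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                        # "undersampled" measurements

def soft(v, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# ISTA minimises 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)
```

In the actual MRI setting, A would be an undersampled Fourier/coil-sensitivity operator and the sparsifying transform a wavelet, but the iteration structure is the same.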
Wang, Jiabin; Wu, Fangling; Zhao, Qi
2015-08-01
A C18 monolithic capillary column was utilized as the solid-phase microextraction column to construct an in-tube SPME-HPLC system, which was used to simultaneously extract and detect five phenoxy acid herbicides: 2,4-dichlorophenoxyacetic acid (2,4-D), 2-(2-chloro)-phenoxypropionic acid (2,2-CPPA), 2-(3-chloro)-phenoxypropionic acid (2,3-CPPA), phenoxypropionic acid (PPA) and 2-(2,4-dichlorophenoxy)propionic acid (2,4-DP). The operating parameters of the in-tube SPME-HPLC system, including the length of the monolithic column, the sampling flow rate, the sampling time, the elution flow rate and the elution time, were investigated in detail. The optimized operating parameters were as follows: monolithic column length, 20 cm; sampling flow rate, 0.04 mL/min; sampling time, 13 min; elution flow rate, 0.02 mL/min; elution time, 5 min. Under the optimized conditions, the detection limits of the five phenoxy acid herbicides were: 9 µg/L for PPA, 4 µg/L for 2,2-CPPA, 4 µg/L for 2,3-CPPA, 5 µg/L for 2,4-D, and 5 µg/L for 2,4-DP. Compared with the HPLC method with direct injection, the combined system showed good enrichment factors for the analytes. The recoveries of the five phenoxy acid herbicides were between 79.0% and 98.0% (RSD ≤ 3.9%). This method was successfully used to detect the five phenoxy acid herbicides in water samples with satisfactory results.
Consedine, Nathan S
2012-08-01
Disparities in breast screening are well documented. Less clear are differences within groups of immigrant and non-immigrant minority women or differences in adherence to mammography guidelines over time. A sample of 1,364 immigrant and non-immigrant women (African American, English Caribbean, Haitian, Dominican, Eastern European, and European American) were recruited using a stratified cluster-sampling plan. In addition to measuring established predictors of screening, women reported mammography frequency in the last 10 years and were (per ACS guidelines at the time) categorized as never, sub-optimal (<1 screen/year), or adherent (1+ screens/year) screeners. Multinomial logistic regression showed that while ethnicity infrequently predicted the never versus sub-optimal comparison, English Caribbean, Haitian, and Eastern European women were less likely to screen systematically over time. Demographics did not predict the never versus sub-optimal distinction; only regular physician, annual exam, physician recommendation, and cancer worry showed effects. However, the adherent categorization was predicted by demographics, was less likely among women without insurance, a regular physician, or an annual exam, and more likely among women reporting certain patterns of emotion (low embarrassment and greater worry). Because regular screening is crucial to breast health, there is a clear need to consider patterns of screening among immigrant and non-immigrant women as well as whether the variables predicting the initiation of screening are distinct from those predicting systematic screening over time.
Ultrasound Assisted Extraction of Phenolic Compounds from Peaches and Pumpkins
Altemimi, Ammar; Watson, Dennis G.; Choudhary, Ruplal; Dasari, Mallika R.; Lightfoot, David A.
2016-01-01
The ultrasound-assisted extraction (UAE) method was used to optimize the extraction of phenolic compounds from pumpkins and peaches. Response surface methodology (RSM) was used to study the effects of three independent variables, each with three treatments: extraction temperature (30, 40 and 50 °C), ultrasonic power level (30, 50 and 70%) and extraction time (10, 20 and 30 min). The optimal conditions for extraction of total phenolics from pumpkins were inferred to be a temperature of 41.45 °C, a power of 44.60% and a time of 25.67 min, whereas an extraction temperature of 40.99 °C, power of 56.01% and time of 25.71 min was optimal for recovery of free radical scavenging activity (measured by 1,1-diphenyl-2-picrylhydrazyl (DPPH) reduction). The optimal conditions for peach extracts were an extraction temperature of 41.53 °C, power of 43.99% and time of 27.86 min for total phenolics, whereas an extraction temperature of 41.60 °C, power of 44.88% and time of 27.49 min was optimal for free radical scavenging activity (judged by DPPH reduction). Furthermore, the UAE processes were significantly better than solvent extractions without ultrasound. Electron microscopy showed that ultrasonic processing caused cell damage in all treated samples (pumpkin, peach). However, the FTIR spectra did not show any significant changes in chemical structure caused by either ultrasonic processing or solvent extraction. PMID:26885655
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst-performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
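The ℓ1-minimization idea can be sketched as a linear program. The block below is a simplified, equality-constrained basis-pursuit stand-in for the paper's constrained formulation: it minimizes ‖w‖₁ subject to Σw = μ, the first-order condition of the unconstrained mean-variance problem. `Sigma` and `mu` are made-up estimates; this illustrates the mechanics, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of l1-minimization for a sparse mean-variance portfolio:
# min ||w||_1  subject to  Sigma @ w = mu.
rng = np.random.default_rng(0)
p = 8                                       # toy number of assets
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)             # positive-definite covariance estimate
mu = rng.normal(size=p)                     # mean-return estimate

# Split w = u - v with u, v >= 0 so that ||w||_1 = sum(u + v) is linear.
c = np.ones(2 * p)
A_eq = np.hstack([Sigma, -Sigma])
res = linprog(c, A_eq=A_eq, b_eq=mu, bounds=[(0, None)] * (2 * p))
w = res.x[:p] - res.x[p:]
print(np.round(w, 4))
```

The paper's LPO formulation relaxes the equality to an ∞-norm constraint with a tuning parameter, which is what produces sparsity in high dimensions; the split-variable LP trick above carries over unchanged.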
Van Broeck, Bianca; Timmers, Maarten; Ramael, Steven; Bogert, Jennifer; Shaw, Leslie M; Mercken, Marc; Slemmon, John; Van Nueten, Luc; Engelborghs, Sebastiaan; Streffer, Johannes Rolf
2016-05-19
Cerebrospinal fluid (CSF) amyloid-beta (Aβ) peptides are predictive biomarkers for Alzheimer's disease and are proposed as pharmacodynamic markers for amyloid-lowering therapies. However, frequent sampling results in fluctuating CSF Aβ levels that have a tendency to increase compared with baseline. The impact of sampling frequency, volume, catheterization procedure, and ibuprofen pretreatment on CSF Aβ levels using continuous sampling over 36 h was assessed. In this open-label biomarker study, healthy participants (n = 18; either sex, age 55-85 years) were randomized into one of three cohorts (n = 6/cohort; high-frequency sampling). In all cohorts except cohort 2 (sampling started 6 h post catheterization), sampling through lumbar catheterization started immediately post catheterization. Cohort 3 received ibuprofen (800 mg) before catheterization. Following interim data review, an additional cohort 4 (n = 6) with an optimized sampling scheme (low-frequency and lower volume) was included. CSF Aβ(1-37), Aβ(1-38), Aβ(1-40), and Aβ(1-42) levels were analyzed. Increases and fluctuations in mean CSF Aβ levels occurred in cohorts 1-3 at times of high-frequency sampling. Some outliers were observed (cohorts 2 and 3) in which this effect was extremely pronounced. Cohort 4 demonstrated minimal fluctuation of CSF Aβ both on a group and an individual level. Intersubject variability in CSF Aβ profiles over time was observed in all cohorts. CSF Aβ level fluctuation upon catheterization primarily depends on the sampling frequency and volume, but not on the catheterization procedure or inflammatory reaction. An optimized low-frequency sampling protocol minimizes or eliminates fluctuation of CSF Aβ levels, which will improve the capability of accurately measuring the pharmacodynamic read-out for amyloid-lowering therapies. ClinicalTrials.gov NCT01436188 . Registered 15 September 2011.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy for the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Saussele, Susanne; Hehlmann, Rüdiger; Fabarius, Alice; Jeromin, Sabine; Proetel, Ulrike; Rinaldetti, Sebastien; Kohlbrenner, Katharina; Einsele, Hermann; Falge, Christiane; Kanz, Lothar; Neubauer, Andreas; Kneba, Michael; Stegelmann, Frank; Pfreundschuh, Michael; Waller, Cornelius F; Oppliger Leibundgut, Elisabeth; Heim, Dominik; Krause, Stefan W; Hofmann, Wolf-Karsten; Hasford, Joerg; Pfirrmann, Markus; Müller, Martin C; Hochhaus, Andreas; Lauseker, Michael
2018-05-01
Major molecular remission (MMR) is an important therapy goal in chronic myeloid leukemia (CML). So far, MMR is not a failure criterion according to the ELN management recommendations, leading to uncertainty about when to change therapy in CML patients not reaching MMR after 12 months. At monthly landmarks, hazard ratios (HR) were estimated for different molecular remission statuses for patients registered to CML study IV, who were divided into a learning and a validation sample. The minimum HR for MMR was found at 2.5 years, at 0.28 (compared to patients without remission). In the validation sample, a significant advantage in progression-free survival (PFS) for patients in MMR was detected (p-value 0.007). The optimal time to predict PFS in patients with MMR was validated in an independent sample at 2.5 years. With our model we provide a suggestion for when to define lack of MMR as therapy failure, and thus when a treatment change should be considered. The optimal response time for 1% BCR-ABL at about 12-15 months was confirmed, and for deep molecular remission no specific time point was detected. Nevertheless, it was demonstrated that the earlier MMR is achieved, the higher the chance of attaining a deep molecular response later.
Artificial neural network modelling of uncertainty in gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Dragović, S.; Onjia, A.; Stanković, S.; Aničin, I.; Bačić, G.
2005-03-01
An artificial neural network (ANN) model for the prediction of measuring uncertainties in gamma-ray spectrometry was developed and optimized. A three-layer feed-forward ANN with a back-propagation learning algorithm was used to model uncertainties of measurement of activity levels of eight radionuclides (226Ra, 238U, 235U, 40K, 232Th, 134Cs, 137Cs and 7Be) in soil samples as a function of measurement time. It was shown that the neural network provides useful data even from small experimental databases. The performance of the optimized neural network was found to be very good, with correlation coefficients (R2) between measured and predicted uncertainties ranging from 0.9050 to 0.9915. The correlation coefficients did not significantly deteriorate when the network was tested on samples with greatly different uranium-to-thorium (238U/232Th) ratios. The differences between measured and predicted uncertainties were not influenced by the absolute values of uncertainties of measured radionuclide activities. Once the ANN is trained, it can be employed in analyzing soil samples regardless of the 238U/232Th ratio. It was concluded that a considerable saving in time could be obtained by using the trained neural network model to predict the measurement times needed to attain the desired statistical accuracy.
Chu, Chu; Wei, Mengmeng; Wang, Shan; Zheng, Liqiong; He, Zheng; Cao, Jun; Yan, Jizhong
2017-09-15
A simple and effective method was developed for determining lignans in Schisandrae Chinensis Fructus by using a micro-matrix solid phase dispersion (MSPD) technique coupled with microemulsion electrokinetic chromatography (MEEKC). A molecular sieve, TS-1, was applied as the solid supporting material in micro MSPD extraction for the first time. Parameters that affect extraction efficiency, such as the type of dispersant, the mass ratio of sample to dispersant, grinding time, and elution solvent and volume, were optimized. The optimal extraction conditions involved dispersing 25 mg of powdered Schisandrae samples with 50 mg of TS-1 using a mortar and pestle, with a grinding time of 150 s. The blend was then transferred to a solid-phase extraction cartridge and the target analytes were eluted with 500 μL of methanol. Moreover, several parameters affecting MEEKC separation were studied, including the type of oil, SDS concentration, type and concentration of cosurfactant, and concentration of organic modifier. Satisfactory linearity (R > 0.9998) was obtained, and the calculated limits of quantitation were less than 2.77 μg/mL. Finally, the micro MSPD-MEEKC method was successfully applied to the analysis of lignans in complex Schisandrae fructus samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhou, Manshui; McDonald, John F; Fernández, Facundo M
2010-01-01
Metabolomic fingerprinting of bodily fluids can reveal the underlying causes of metabolic disorders associated with many diseases, and has thus been recognized as a potential tool for disease diagnosis and prognosis following therapy. Here we report a rapid approach in which direct analysis in real time (DART) coupled with time-of-flight (TOF) mass spectrometry (MS) and hybrid quadrupole TOF (Q-TOF) MS is used as a means for metabolomic fingerprinting of human serum. In this approach, serum samples are first treated to precipitate proteins, and the volatility of the remaining metabolites is increased by derivatization, followed by DART MS analysis. Maximum DART MS performance was obtained by optimizing instrumental parameters such as ionizing gas temperature and flow rate for the analysis of identical aliquots of a healthy human serum sample. These variables were observed to have a significant effect on the overall mass range of the metabolites detected as well as the signal-to-noise ratios in DART mass spectra. Each DART run requires only 1.2 min, during which more than 1500 different spectral features are observed in a time-dependent fashion. A repeatability of 4.1% to 4.5% was obtained for the total ion signal using a manual sampling arm. With the appealing features of high throughput, lack of memory effects, and simplicity, DART MS has shown potential to become an invaluable tool for metabolomic fingerprinting. 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
Sedehi, Samira; Tabani, Hadi; Nojavan, Saeed
2018-03-01
In this work, polypropylene hollow fiber was replaced by agarose gel in conventional electro membrane extraction (EME) to develop a novel approach. The proposed EME method was then employed to extract two amino acids (tyrosine and phenylalanine) as model polar analytes, followed by HPLC-UV. The method showed acceptable results under optimized conditions. This green methodology outperformed conventional EME, and required neither organic solvents nor carriers. The effective parameters such as the pH values of the acceptor and the donor solutions, the thickness and pH of the gel, the extraction voltage, the stirring rate, and the extraction time were optimized. Under the optimized conditions (acceptor solution pH: 1.5; donor solution pH: 2.5; agarose gel thickness: 7 mm; agarose gel pH: 1.5; stirring rate of the sample solution: 1000 rpm; extraction potential: 40 V; and extraction time: 15 min), the limits of detection and quantification were 7.5 ng mL−1 and 25 ng mL−1, respectively. The extraction recoveries were between 56.6% and 85.0%, and the calibration curves were linear with correlation coefficients above 0.996 over a concentration range of 25.0-1000.0 ng mL−1 for both amino acids. The intra- and inter-day precisions were in the range of 5.5-12.5%, and relative errors were smaller than 12.0%. Finally, the optimized method was successfully applied to preconcentrate, clean up, and quantify amino acids in watermelon and grapefruit juices as well as a plasma sample, and acceptable relative recoveries in the range of 53.9-84.0% were obtained. Copyright © 2017 Elsevier B.V. All rights reserved.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, in which the optimal observing location corresponds to the maximum value; posterior marginal probability densities of the unknown parameters, in which the thick solid black lines are for the designed location, seven other lines are for randomly chosen locations, and true values are denoted by vertical lines, showing that the unknown parameters are estimated better with the designed location.]
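For a linear-Gaussian toy problem the relative-entropy (expected information gain) criterion has a closed form, which makes the design step easy to sketch. Everything in the block below — the response kernel `sensitivity`, the noise levels, and the source position — is an assumption for illustration, not the paper's transport model.

```python
import numpy as np

# Linear-Gaussian toy of expected-information-gain design: an unknown source
# strength s ~ N(0, sigma_s^2) is observed through y = h(x)*s + noise at a
# candidate location x. In this conjugate case the prior-to-posterior KL
# divergence, averaged over the data, is 0.5*log(1 + h(x)^2 sigma_s^2 / sigma_n^2).
sigma_s, sigma_n = 2.0, 0.5
source_loc = 2.0                                  # hypothetical true source position

def sensitivity(x):
    return np.exp(-0.5 * (x - source_loc) ** 2)   # assumed decay with distance

candidates = np.linspace(-5.0, 5.0, 101)
h = sensitivity(candidates)
eig = 0.5 * np.log1p((h * sigma_s / sigma_n) ** 2)

best = candidates[np.argmax(eig)]                 # maximizes expected information gain
print(best)
```

With a nonlinear transport model there is no closed form, which is why the paper evaluates the relative entropy with Monte Carlo over a sparse-grid surrogate; the selection rule (pick the location with maximum expected gain) is the same.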
Design and implementation of real-time wireless projection system based on ARM embedded system
NASA Astrophysics Data System (ADS)
Long, Zhaohua; Tang, Hao; Huang, Junhua
2018-04-01
Aiming at the shortcomings of existing real-time screen-sharing systems, a real-time wireless projection system is proposed in this paper. Based on the proposed system, a weight-based frame-deletion strategy combining the sampling time period and data variation is proposed. By implementing the system on a hardware platform, the results show that the system achieves good results. The weight-based strategy can improve service quality, reduce delay and optimize the real-time customer service system [1].
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
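The least-squares PCE mechanics can be shown in a few lines. The sketch below assumes a one-dimensional probabilists' Hermite basis and plain Monte Carlo sampling, the simplest of the strategies reviewed; the target function is chosen so the exact coefficients are known.

```python
import numpy as np

# Minimal least-squares PCE: expand f(xi) = xi^2, xi ~ N(0, 1), in the first
# three probabilists' Hermite polynomials He0 = 1, He1 = xi, He2 = xi^2 - 1.
# Since xi^2 = 1*He0 + 0*He1 + 1*He2, the exact coefficients are (1, 0, 1).
rng = np.random.default_rng(1)
xi = rng.standard_normal(400)      # Monte Carlo sample; generous oversampling ratio
y = xi ** 2

Psi = np.column_stack([np.ones_like(xi), xi, xi ** 2 - 1.0])   # measurement matrix
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print(np.round(coef, 6))
```

The sampling strategies compared in the review differ only in how the rows of `Psi` (i.e., the sample points `xi`) are chosen; the least-squares solve is unchanged.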
Properties of lithium aluminate for application as an OSL dosimeter
NASA Astrophysics Data System (ADS)
Twardak, A.; Bilski, P.; Marczewska, B.; Lee, J. I.; Kim, J. L.; Gieszczyk, W.; Mrozik, A.; Sądel, M.; Wróbel, D.
2014-11-01
Several samples of undoped and carbon- or copper-doped lithium aluminate (LiAlO2) were prepared in an attempt to achieve a material applicable in optically stimulated luminescence (OSL) dosimetry. All investigated samples are highly sensitive to ionizing radiation and show good reproducibility. The undoped and copper-doped samples exhibit sensitivity several times higher than that of Al2O3:C, while the sensitivity of the carbon-doped samples is lower. The studied samples exhibit significant fading, but the dynamics of signal loss differs between differently doped samples, which indicates a possibility of improving this characteristic by optimizing the dopant composition.
Optimization of the Bridgman crystal growth process
NASA Astrophysics Data System (ADS)
Margulies, M.; Witomski, P.; Duffar, T.
2004-05-01
A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It permits optimization of the furnace temperature field and the pulling rate versus time in order to decrease the radial thermal gradients in the sample. Some constraints are also included in order to ensure physically realistic results. The model includes the two classical non-linearities associated with crystal growth processes: the radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem are briefly described. Using some examples, it is shown that the method works in a satisfactory way; however, the results depend on the numerical parameters. Improvements of the optimization model, from both the physical and the numerical points of view, are suggested.
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, broad spatial coverage and rich spectral information, and thus has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model inside the water is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Acceptance, a global optimum of the unknown model parameters can be determined to obtain the distribution of chlorophyll, organic solution and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is markedly reduced, creating the opportunity to increase the number of parameters. With a refined definition of the optimization steps and criteria, the whole inversion process becomes more targeted, thus improving the accuracy of the inversion. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the model was continuously improved and enhanced. Finally, a low-cost, effective model for retrieving water quality from hyperspectral remote sensing can be achieved.
Efficiency and optimal allocation in the staggered entry design
Link, W.A.
1993-01-01
The staggered entry design for survival analysis specifies that r left-truncated samples are to be used in estimation of a population survival function. The ith sample is taken at time Bi, from the subpopulation of individuals having survival time exceeding Bi. This paper investigates the performance of the staggered entry design relative to the usual design in which all samples have a common time origin. The staggered entry design is shown to be an attractive alternative, even when not necessitated by logistical constraints. The staggered entry design allows for increased precision in estimation of the right tail of the survival function, especially when some of the data may be censored. A trade-off between the range of values for which the increased precision occurs and the magnitude of the increased precision is demonstrated.
Adaptive Sensing of Time Series with Application to Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David
2013-01-01
We address the problem of adaptive information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility -- all collected datapoints lie in the past, but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan online to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration. Here adaptive sampling can improve coverage of localized anomalies and potentially benefit the mission science yield of long autonomous traverses.
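The core of such a sampling loop can be sketched with a small zero-mean GP: fit to the points collected so far, then sample next where the posterior predictive variance (a simple proxy for information gain) is largest. The RBF kernel, length scale, and noise level below are assumptions, not the models compared in the paper.

```python
import numpy as np

# Variance-driven adaptive sampling with a stationary, zero-mean GP.
def rbf(a, b, ell=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_obs = np.array([0.0, 0.2, 0.4, 0.9])     # times already sampled
y_obs = np.sin(2 * np.pi * x_obs)          # toy signal (a full GP would use it for the mean)
noise = 1e-6

x_cand = np.linspace(0.0, 1.0, 201)        # candidate future sampling times
K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
Ks = rbf(x_cand, x_obs)
# posterior variance: k(x,x) - k(x,X) K^{-1} k(X,x), with k(x,x) = 1
var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)

x_next = x_cand[np.argmax(var)]            # lands in the widest unsampled gap
print(x_next)
```

A nonstationary model, as the paper discusses, would instead concentrate variance (and hence samples) near detected anomalies rather than purely in the largest gaps.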
Jezová, Vera; Skládal, Jan; Eisner, Ales; Bajerová, Petra; Ventura, Karel
2007-12-07
This paper deals with a comparison of the efficiency of extraction techniques (solid-phase extraction, SPE, and solid-phase microextraction, SPME) used for the extraction of nitrate esters (ethyleneglycoldinitrate, EGDN, and nitroglycerin, NG), representing the first step of the method of quantitative determination of trace concentrations of nitrate esters in water samples. EGDN and NG are subsequently determined by means of high-performance liquid chromatography with ultraviolet detection (HPLC-UV). Optimization of the SPE and SPME conditions was carried out using model water samples. Seven SPE cartridges were tested and the conditions were optimized (type of sorbent, type and volume of solvent to be used as eluent). For both nitrate esters, the limit of detection (LOD) and the limit of quantification (LOQ) obtained using SPE/HPLC-UV were 0.23 μg mL−1 and 0.70 μg mL−1, respectively. The SPME conditions optimized were the type of SPME fibre (four fibres were tested), the type and time of sorption/desorption, and the temperature of sorption. A PDMS/DVB (polydimethylsiloxane/divinylbenzene) fibre coating proved to be suitable for extraction of EGDN and NG. For this fibre, the LOD and the LOQ for both nitrate esters were 0.16 μg mL−1 and 0.50 μg mL−1, respectively. The optimized SPE/HPLC-UV and SPME/HPLC-UV methods were then used for quantitative determination of the nitrate ester content in real water samples from the production of EGDN and NG.
Riesgo, Ana; Pérez-Porro, Alicia R; Carmona, Susana; Leys, Sally P; Giribet, Gonzalo
2012-03-01
Transcriptome sequencing with next-generation sequencing technologies has the potential for addressing many long-standing questions about the biology of sponges. Transcriptome sequence quality depends on good cDNA libraries, which requires high-quality mRNA. Standard protocols for preserving and isolating mRNA often require optimization for unusual tissue types. Our aim was assessing the efficiency of two preservation modes, (i) flash freezing with liquid nitrogen (LN₂) and (ii) immersion in RNAlater, for the recovery of high-quality mRNA from sponge tissues. We also tested whether the long-term storage of samples at -80 °C affects the quantity and quality of mRNA. We extracted mRNA from nine sponge species and analysed the quantity and quality (A260/230 and A260/280 ratios) of mRNA according to preservation method, storage time, and taxonomy. The quantity and quality of mRNA depended significantly on the preservation method used (LN₂ outperforming RNAlater), the sponge species, and the interaction between them. When the preservation was analysed in combination with either storage time or species, the quantity and A260/230 ratio were both significantly higher for LN₂-preserved samples. Interestingly, individual comparisons for each preservation method over time indicated that both methods performed equally efficiently during the first month, but RNAlater lost efficiency in storage times longer than 2 months compared with flash-frozen samples. In summary, we find that for long-term preservation of samples, flash freezing is the preferred method. If LN₂ is not available, RNAlater can be used, but mRNA extraction during the first month of storage is advised. © 2011 Blackwell Publishing Ltd.
Sawoszczuk, Tomasz; Syguła-Cholewińska, Justyna; del Hoyo-Meléndez, Julio M
2015-08-28
The main goal of this work was to optimize the SPME sampling method for measuring microbial volatile organic compounds (MVOCs) emitted by active molds that may deteriorate historical objects. A series of artificially aged model materials that resemble those found in historical objects was prepared and evaluated after exposure to four different types of fungi. The investigated pairs consisted of: Alternaria alternata on silk, Aspergillus niger on parchment, Chaetomium globosum on paper and wool, and Cladosporium herbarum on paper. First, a selection of the most efficient SPME fiber was carried out, as there are six different types of fibers commercially available. It was important to find a fiber that absorbs the largest number and the highest amount of MVOCs. The results allowed establishing and selecting the DVB/CAR/PDMS fiber as the most effective SPME fiber for this kind of analysis. Another task was to optimize the time of MVOC extraction on the fiber. It was recognized that a time between 12 and 24 h is adequate for absorbing a sufficient amount of MVOCs. In the last step, the temperature of MVOC desorption in the GC injection port was optimized. It was found that desorption at a temperature of 250°C yielded chromatograms with the highest abundances of compounds. To the best of our knowledge this work constitutes the first attempt at optimizing the SPME method for sampling MVOCs emitted by molds growing on historical objects. Copyright © 2015 Elsevier B.V. All rights reserved.
Ciaccio, Edward J; Micheli-Tzanakou, Evangelia
2007-07-01
Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of the weight w was based upon the gradient term of the steepest descent equation, w(k+1) = w(k) − μ∇(ε²), where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR = 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR = 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
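A single-weight version of the Parallel Comparison update can be sketched as follows: at each sample, the squared error is evaluated at w − Δw and w + Δw in parallel and the weight steps down the finite-difference gradient. The signal, noise level, and step sizes below are assumed values for illustration, not those of the paper.

```python
import numpy as np

# Single-weight finite-difference ANC: primary = signal + 0.8*ref, so the
# optimal subtraction weight is 0.8.
rng = np.random.default_rng(2)
n = 2000
ref = rng.normal(size=n)                   # reference noise channel
signal = np.sin(np.linspace(0, 20 * np.pi, n))
primary = signal + 0.8 * ref               # true optimal weight is 0.8

w, dw, mu = 0.0, 0.05, 0.01
history = []
for k in range(n):
    e_minus = (primary[k] - (w - dw) * ref[k]) ** 2
    e_plus = (primary[k] - (w + dw) * ref[k]) ** 2
    grad = (e_plus - e_minus) / (2 * dw)   # exact for a quadratic error surface
    w -= mu * grad                         # steepest-descent step, no dw in denominator
    history.append(w)

w_avg = float(np.mean(history[-500:]))     # average out residual gradient noise
print(w_avg)
```

Because the error is quadratic in w, the central difference here equals the true gradient; the fixed ±Δw probes avoid the division by a vanishing Δw that the abstract identifies as a source of instability.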
Time and expected value of sample information wait for no patient.
Eckermann, Simon; Willan, Andrew R
2008-01-01
The expected value of sample information (EVSI) from prospective trials has previously been modeled as the product of EVSI per patient and the number of patients across the relevant time horizon, less those "used up" in trials. However, this implicitly assumes that the eligible patient population to which information from a trial can be applied across a time horizon is independent of the time for trial accrual, follow-up and analysis. This article demonstrates that in calculating the EVSI of a trial, the number of patients who benefit from trial information should be reduced by those treated outside as well as within the trial over the time until trial evidence is updated, including time for accrual, follow-up and analysis. Accounting for time is shown to reduce the eligible patient population: 1) independent of the size of the trial, in allowing for time of follow-up and analysis, and 2) dependent on the size of the trial, for time of accrual, where the patient accrual rate is less than incidence. Consequently, the EVSI and expected net gain (ENG) at any given trial size are shown to be lower when accounting for time, with lower ENG reinforced in the case of trials undertaken while delaying decisions by the additional opportunity costs of time. Appropriately accounting for time reduces the EVSI of a trial design and increases the opportunity costs of trials undertaken with delay, leading to a lower likelihood of trialing being optimal and to smaller optimal trial designs.
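The population adjustment can be illustrated with toy arithmetic; all numbers below are assumptions, and the model is a deliberately naive reading of the abstract's argument rather than the authors' full framework.

```python
# Toy arithmetic for the time-adjusted EVSI population.
# Naive model: benefiting population = incidence * horizon - n_trial.
# Time-adjusted: also subtract patients treated outside the trial during
# accrual (incidence minus accrual rate) and during follow-up/analysis.
incidence = 1000          # eligible patients per year (assumed)
horizon = 10              # decision-relevant time horizon, years (assumed)
n_trial = 600             # patients accrued to the trial (assumed)
accrual_rate = 300        # patients entering the trial per year (assumed)
followup = 1.5            # years of follow-up plus analysis (assumed)

t_accrual = n_trial / accrual_rate                                # 2.0 years
naive = incidence * horizon - n_trial                             # 9400 patients
outside_during_accrual = (incidence - accrual_rate) * t_accrual   # 1400 patients
during_followup = incidence * followup                            # 1500 patients
adjusted = naive - outside_during_accrual - during_followup       # 6500 patients
print(naive, adjusted)
```

Note the two effects the abstract names: the follow-up term is independent of trial size, while the accrual term grows with trial size whenever the accrual rate is below incidence, so larger trials pay a double time penalty.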
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Xuanfeng, E-mail: Xuanfeng.ding@beaumont.org; Li, Xiaoqiang; Zhang, J. Michele
Purpose: To present a novel robust and delivery-efficient spot-scanning proton arc (SPArc) therapy technique. Methods and Materials: A SPArc optimization algorithm was developed that integrates control point resampling, energy layer redistribution, energy layer filtration, and energy layer resampling. The feasibility of such a technique was evaluated using sample patients: 1 patient with locally advanced head and neck oropharyngeal cancer with bilateral lymph node coverage, and 1 with a nonmobile lung cancer. Plan quality, robustness, and total estimated delivery time were compared with the robust optimized multifield step-and-shoot arc plan without SPArc optimization (Arc_multi-field) and the standard robust optimized intensity modulated proton therapy (IMPT) plan. Dose-volume histograms of target and organs at risk were analyzed, taking into account the setup and range uncertainties. Total delivery time was calculated on the basis of a 360° gantry room with 1 revolution per minute gantry rotation speed, 2-millisecond spot switching time, 1-nA beam current, 0.01 minimum spot monitor unit, and energy layer switching time of 0.5 to 4 seconds. Results: The SPArc plan showed potential dosimetric advantages for both clinical sample cases. Compared with IMPT, SPArc delivered 8% and 14% less integral dose for the oropharyngeal and lung cancer cases, respectively. Furthermore, evaluating the lung cancer plan compared with IMPT, the maximum skin dose, the mean lung dose, and the maximum dose to ribs were reduced by 60%, 15%, and 35%, respectively, whereas the conformity index was improved from 7.6 (IMPT) to 4.0 (SPArc). The total treatment delivery time for the lung and oropharyngeal cancer patients was reduced by 55% to 60% and 56% to 67%, respectively, when compared with Arc_multi-field plans.
Conclusion: The SPArc plan is the first robust and delivery-efficient proton spot-scanning arc therapy technique, which could potentially be implemented into routine clinical practice.
USDA-ARS's Scientific Manuscript database
A highly sensitive detection test for Rinderpest virus (RPV), based on a real-time reverse transcription-PCR (RT-PCR) system, was developed. Five different RPV genomic targets were examined, and one was selected and optimized to detect viral RNA in infected tissue culture fluid with a level of detec...
Effect of plot and sample size on timing and precision of urban forest assessments
David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn
2008-01-01
Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...
2012-05-29
Hunter College has completed work on baseline measurements of relaxation times for pentacene at various temperatures in order to determine optimal...temperatures for measuring relaxation rate as a function of doping. We have also repeated these measurements on pentacene samples at 2 different...P3HT using a time-lag method.
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products through experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the shape parameter assumed known. Such oversimplified assumptions can facilitate the follow-up analyses but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem, because a good plan not only helps producers save testing time and reduce testing cost but also positively affects the image of the product, attracting more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining optimal life test sampling plans, with the aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. Exponential and Weibull distributions, each with two unknown parameters, are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
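A toy version of such a cost-based plan can illustrate the threshold choice. This sketch assumes exponential lifetimes, a single censoring time, and a simple two-quality (good/bad batch) prior with unit misclassification costs; all of these are illustrative simplifications of the paper's model.

```python
import math

def p_fail(test_time, mean_life):
    """P(a unit fails before the censoring time) for an exponential lifetime."""
    return 1.0 - math.exp(-test_time / mean_life)

def binom_tail(n, p, c):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c, n + 1))

def optimal_threshold(n, test_time, good_life, bad_life, prior_bad,
                      cost_reject_good, cost_accept_bad):
    """Scan every rejection threshold c (reject the batch when >= c of the
    n sampled units fail) and return the c minimizing expected
    misclassification cost, plus that cost."""
    pg = p_fail(test_time, good_life)
    pb = p_fail(test_time, bad_life)
    best_c, best_cost = 0, float("inf")
    for c in range(n + 2):
        cost = ((1 - prior_bad) * cost_reject_good * binom_tail(n, pg, c)
                + prior_bad * cost_accept_bad * (1 - binom_tail(n, pb, c)))
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c, best_cost
```

When the good and bad mean lives are well separated, the optimal c lies strictly between "always accept" and "always reject", and its expected cost is far below either extreme.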
Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario
2013-11-01
Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
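The "robust power across an effect-size range" criterion can be illustrated in its simplest form. This sketch assumes a one-sided two-sample z-test with known variance and a fixed sample size; the paper's designs are adaptive, so this only shows the min-power-over-a-range selection idea, not the adaptive machinery.

```python
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power(n_per_arm, delta, sd=1.0, z_alpha=1.959963984540054):
    """Power of a one-sided two-sample z-test (alpha = 0.025) at true
    effect size delta with n_per_arm subjects per arm."""
    return norm_cdf(delta / (sd * sqrt(2.0 / n_per_arm)) - z_alpha)

def smallest_robust_n(deltas, target=0.8, n_max=5000):
    """Smallest per-arm n whose power reaches `target` across the whole
    effect-size range of interest (the robustness criterion)."""
    for n in range(2, n_max + 1):
        if min(power(n, d) for d in deltas) >= target:
            return n
    return None
```

Because power is increasing in the effect size, the smallest plausible effect drives the sample size: a design robust over δ ∈ {0.4, 0.5, 0.6} must be powered for δ = 0.4.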
Microwave surface resistance of MgB2
NASA Astrophysics Data System (ADS)
Zhukov, A. A.; Purnell, A.; Miyoshi, Y.; Bugoslavsky, Y.; Lockman, Z.; Berenov, A.; Zhai, H. Y.; Christen, H. M.; Paranthaman, M. P.; Lowndes, D. H.; Jo, M. H.; Blamire, M. G.; Hao, Ling; Gallop, J.; MacManus-Driscoll, J. L.; Cohen, L. F.
2002-04-01
The microwave power and frequency dependence of the surface resistance of MgB2 films and powder samples were studied. Sample quality is relatively easy to identify by the breakdown in the ω² law for poor-quality samples at all temperatures. The performance of MgB2 at 10 GHz and 21 K was compared directly with that of high-quality YBCO films. The surface resistance of MgB2 was found to be approximately three times higher at low microwave power and showed an onset of nonlinearity at microwave surface fields ten times lower than the YBCO film. It is clear that MgB2 films are not yet optimized for microwave applications.
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique for each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to that in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. Published by Elsevier B.V.
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, using inverse statistics analysis, the effect of investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis is a general tool, also known as the probability distribution of exit time, that is used for detecting the distribution of the time in which a stochastic process exits from a zone. This analysis was used in Refs. 1 and 2 for studying financial returns time series. The distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to more efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy not only fails to produce more efficient portfolios, but longer investment horizons in fact provide more efficiency.
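The exit-time (inverse statistics) computation itself is simple to sketch on a log-price series. The band threshold rho, the symmetric-band exit rule, and the random-walk test data below are illustrative assumptions; the paper works with signed return levels on real market data.

```python
import numpy as np

def exit_times(log_prices, rho):
    """Inverse statistics: for each starting day t, the first number of
    steps k at which the cumulative log-return leaves the band |r| < rho."""
    n = len(log_prices)
    times = []
    for t in range(n - 1):
        moves = np.abs(log_prices[t + 1:] - log_prices[t])
        hits = np.nonzero(moves >= rho)[0]
        if hits.size:
            times.append(hits[0] + 1)   # +1 because lp[t+1] is one step out
    return np.array(times)

def optimal_horizon(log_prices, rho):
    """Mode of the exit-time distribution: the most likely horizon for
    achieving a return of magnitude rho."""
    t = exit_times(log_prices, rho)
    values, counts = np.unique(t, return_counts=True)
    return int(values[np.argmax(counts)])
```

On a driftless random walk with step size σ, exits through a band of half-width ρ cluster near (ρ/σ)² steps, so the histogram of exit times peaks at a short, well-defined horizon.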
Dausey, David J; Chandra, Anita; Schaefer, Agnes G; Bahney, Ben; Haviland, Amelia; Zakowski, Sarah; Lurie, Nicole
2008-09-01
We tested telephone-based disease surveillance systems in local health departments to identify system characteristics associated with consistent and timely responses to urgent case reports. We identified a stratified random sample of 74 health departments and conducted a series of unannounced tests of their telephone-based surveillance systems. We used regression analyses to identify system characteristics that predicted fast connection with an action officer (an appropriate public health professional). Optimal performance in consistently connecting callers with an action officer in 30 minutes or less was achieved by 31% of participating health departments. Reaching a live person upon dialing, regardless of who that person was, was the strongest predictor of optimal performance both in being connected with an action officer and in consistency of connection times. Health departments can achieve optimal performance in consistently connecting a caller with an action officer in 30 minutes or less and may improve performance by using a telephone-based disease surveillance system in which the phone is answered by a live person at all times.
Real-time PCR detection of Plasmodium directly from whole blood and filter paper samples
2011-01-01
Background Real-time PCR is a sensitive and specific method for the analysis of Plasmodium DNA. However, prior purification of genomic DNA from blood is necessary since PCR inhibitors and quenching of fluorophores from blood prevent efficient amplification and detection of PCR products. Methods Reagents designed to specifically overcome PCR inhibition and quenching of fluorescence were evaluated for real-time PCR amplification of Plasmodium DNA directly from blood. Whole blood from clinical samples and dried blood spots collected in the field in Colombia were tested. Results Amplification and fluorescence detection by real-time PCR were optimal with 40× SYBR® Green dye and 5% blood volume in the PCR reaction. Plasmodium DNA was detected directly from both whole blood and dried blood spots from clinical samples. The sensitivity and specificity ranged from 93-100% compared with PCR performed on purified Plasmodium DNA. Conclusions The methodology described facilitates high-throughput testing of blood samples collected in the field by fluorescence-based real-time PCR. This method can be applied to a broad range of clinical studies with the advantages of immediate sample testing, lower experimental costs and time-savings. PMID:21851640
Sun, Jian-Nan; Chen, Juan; Shi, Yan-Ping
2014-07-01
A new mode of ionic liquid based dispersive liquid-liquid microextraction (IL-DLLME) is developed. In this work, [C6MIm][PF6] was chosen as the extraction solvent, and two kinds of hydrophilic ionic liquids, [EMIm][BF4] and [BSO3HMIm][OTf], functioned as the dispersive solvent. Thus, no organic solvent was used in the whole extraction procedure. With the aid of the SO3H group, the acidic compound was extracted from the sample solution without pH adjustment. Two phenolic compounds, namely 2-naphthol and 4-nitrophenol, were chosen as the target analytes. Important parameters affecting the extraction efficiency, such as the type of hydrophilic ionic liquids, the volume ratio of [EMIm][BF4] to [BSO3HMIm][OTf], type and volume of extraction solvent, pH value of sample solution, sonication time, extraction time and centrifugation time, were investigated and optimized. Under the optimized extraction conditions, the method exhibited good sensitivity, with limits of detection (LODs) of 5.5 μg L⁻¹ and 10.0 μg L⁻¹ for 4-nitrophenol and 2-naphthol, respectively. Good linearity over the concentration ranges of 24-384 μg L⁻¹ for 4-nitrophenol and 28-336 μg L⁻¹ for 2-naphthol was obtained, with correlation coefficients of 0.9998 and 0.9961, respectively. The proposed method can directly extract acidic compounds from environmental samples or even more complex sample matrices without any pH adjustment procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
Krücken, Jürgen; Fraundorfer, Kira; Mugisha, Jean Claude; Ramünke, Sabrina; Sifft, Kevin C; Geus, Dominik; Habarugira, Felix; Ndoli, Jules; Sendegeya, Augustin; Mukampunga, Caritas; Aebischer, Toni; McKay-Demeler, Janina; Gahutu, Jean Bosco; Mockenhaupt, Frank P; von Samson-Himmelstjerna, Georg
2018-05-18
A recent publication by Levecke et al. (Int. J. Parasitol, 2018, 8, 67-69) provides important insights into the kinetics of worm expulsion from humans following treatment with albendazole. This is an important aspect of determining the optimal time-point for post-treatment sampling to examine anthelmintic drug efficacy. The authors conclude that for the determination of drug efficacy against Ascaris, samples should be taken not before day 14, and recommend a period between days 14 and 21. Using this recommendation, they conclude that previous data (Krücken et al., 2017; Int. J. Parasitol, 7, 262-271) showing a reduction of egg shedding by 75.4% in schoolchildren in Rwanda, and our conclusions from these data, should be interpreted with caution. In reply to this, we would like to indicate that the very low efficacy of 0% in one school and 52-56% in three other schools, while the drug was fully efficient in other schools, cannot simply be explained by the time point of sampling. Moreover, there was no correlation between the sampling day and albendazole efficacy. We would also like to indicate that we very carefully interpreted our data and, for example, nowhere claimed that we found anthelmintic resistance. Rather, we stated that our data indicated that benzimidazole resistance may be suspected in the study population. We strongly agree that the data presented by Levecke et al. suggest that recommendations for efficacy testing of anthelmintic drugs should be revised. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to a description of the algorithm, conditions ensuring the convergence of its iterates and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
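The vector/parallel flavor of such a method can be illustrated with a batched finite-difference gradient step, where all perturbed points are evaluated in one vectorized call. This is a generic NumPy sketch of the parallel-evaluation idea, not the homogeneity-based Jacobson-Oksman update itself.

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Central-difference gradient of f at x: the 2n perturbed points are
    assembled into one array and evaluated in a single batched call,
    mimicking vector-streaming/parallel function evaluation."""
    n = len(x)
    E = np.eye(n) * h
    pts = np.vstack([x + E, x - E])   # 2n points, evaluated together
    vals = f(pts)                     # batched (vectorized) evaluation
    return (vals[:n] - vals[n:]) / (2 * h)

def descent(f, x0, lr=0.1, steps=200):
    """Plain gradient descent driven by the batched gradient estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x -= lr * grad_fd(f, x)
    return x
```

On a quadratic test function the central difference is exact up to rounding, so the descent contracts geometrically toward the minimizer.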
Prenatal yoga in late pregnancy and optimism, power, and well-being.
Reis, Pamela J; Alligood, Martha R
2014-01-01
The study reported here explored changes in optimism, power, and well-being over time in women who participated in a six-week prenatal yoga program during their second and third trimesters of pregnancy. The study was conceptualized from the perspective of Rogers' science of unitary human beings. A correlational, one-group, pre-post-assessment survey design with a convenience sample was conducted. Increases in mean scores for optimism, power, and well-being were statistically significant from baseline to completion of the prenatal yoga program. Findings from this study suggested that yoga as a self-care practice that nurses might recommend to promote well-being in pregnant women.
Ma, Jian; Yang, Bo; Byrne, Robert H
2012-06-15
Determination of chromate at low concentration levels in drinking water is an important analytical objective for both human health and environmental science. Here we report the use of solid phase extraction (SPE) in combination with a custom-made portable light-emitting diode (LED) spectrophotometer to achieve detection of chromate in the field at nanomolar levels. The measurement chemistry is based on a highly selective reaction between 1,5-diphenylcarbazide (DPC) and chromate under acidic conditions. The Cr-DPC complex formed in the reaction can be extracted on a commercial C18 SPE cartridge. Concentrated Cr-DPC is subsequently eluted with methanol and detected by spectrophotometry. Optimization of analytical conditions involved investigation of reagent compositions and concentrations, eluent type, flow rate (sample loading), sample volume, and stability of the SPE cartridge. Under optimized conditions, detection limits are on the order of 3 nM. Only 50 mL of sample is required for an analysis, and total analysis time is around 10 min. The targeted analytical range of 0-500 nM can be easily extended by changing the sample volume. Compared to previous SPE-based spectrophotometric methods, this analytical procedure offers the benefits of improved sensitivity, reduced sample consumption, shorter analysis time, greater operational convenience, and lower cost. Copyright © 2012 Elsevier B.V. All rights reserved.
Rozi, Siti Khalijah Mahmad; Nodeh, Hamid Rashidi; Kamboh, Muhammad Afzal; Manan, Ninie Suhana Abdul; Mohamad, Sharifah
2017-07-01
A novel adsorbent, palm fatty acid coated magnetic Fe₃O₄ nanoparticles (MNP-FA), was successfully synthesized by immobilization of the palm fatty acid onto the surface of MNPs. The successful synthesis of MNP-FA was further confirmed by X-ray diffraction (XRD), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR) and energy-dispersive X-ray spectroscopy (EDX) analyses and water contact angle (WCA) measurement. This newly synthesized MNP-FA was applied as a magnetic solid phase extraction (MSPE) adsorbent for the enrichment of polycyclic aromatic hydrocarbons (PAHs), namely fluoranthene (FLT), pyrene (Pyr), chrysene (Cry) and benzo(a)pyrene (BaP), from environmental samples prior to high-performance liquid chromatography with diode array detection (HPLC-DAD) analysis. The MSPE method was optimized with respect to several parameters such as amount of sorbent, desorption solvent, volume of desorption solvent, extraction time, desorption time, pH and sample volume. Under the optimized conditions, the MSPE method provided low detection limits (LODs) for FLT, Pyr, Cry and BaP in the range of 0.01-0.05 ng mL⁻¹. The PAH recoveries of the spiked leachate samples ranged from 98.5% to 113.8% with RSDs (n = 5) ranging from 3.5% to 12.2%, while for the spiked sludge samples, the recoveries ranged from 81.1% to 119.3% with RSDs (n = 5) ranging from 3.1% to 13.6%. The recyclability study revealed that MNP-FA has excellent reusability up to five times. Chromatographic analysis demonstrated the suitability of MNP-FA as an MSPE adsorbent for the efficient extraction of PAHs from environmental samples.
Shin, Saeam; Kim, Juwon; Kim, Yoonjung; Cho, Sun-Mi; Lee, Kyung-A
2017-10-26
EGFR mutation is an emerging biomarker for treatment selection in non-small-cell lung cancer (NSCLC) patients. However, optimal mutation detection is hindered by complications associated with the biopsy procedure, tumor heterogeneity and the limited sensitivity of test methodology. In this study, we evaluated the diagnostic utility of real-time PCR using malignant pleural effusion samples. A total of 77 pleural fluid samples from 77 NSCLC patients were tested using the cobas EGFR mutation test (Roche Molecular Systems). Pleural fluid was centrifuged, and separated cell pellets and supernatants were tested in parallel. Results were compared with Sanger sequencing and/or peptide nucleic acid (PNA)-mediated PCR clamping of matched tumor tissue or pleural fluid samples. All samples showed valid real-time PCR results in one or more DNA samples extracted from cell pellets and supernatants. Compared with other molecular methods, the sensitivity of the real-time PCR method was 100%. The concordance rate of real-time PCR with Sanger sequencing plus PNA-mediated PCR clamping was 98.7%. We have confirmed that real-time PCR using pleural fluid had a high concordance rate compared to conventional methods, with no failed samples. Our data demonstrated that parallel real-time PCR testing using supernatant and cell pellet could offer a reliable and robust surrogate strategy when tissue is not available.
Berglund, E. Carina; Kuklinski, Nicholas J.; Karagündüz, Ekin; Ucar, Kubra; Hanrieder, Jörg; Ewing, Andrew G.
2013-01-01
Micellar electrokinetic capillary chromatography with electrochemical detection has been used to quantify biogenic amines in freeze-dried Drosophila melanogaster brains. Freeze drying samples offers a way to preserve the biological sample while making dissection of these tiny samples easier and faster. Fly samples were extracted in cold acetone and dried in a rotary evaporator. Extraction and drying times were optimized in order to avoid contamination by red pigment from the fly eyes and still have intact brain structures. A single freeze-dried fly-brain sample was found to produce electropherograms representative of those from a single hand-dissected brain sample. Utilizing the faster dissection time that freeze drying affords, the number of brains in a fixed homogenate volume can be increased to concentrate the sample. Thus, concentrated brain samples containing five or fifteen preserved brains were analyzed for their neurotransmitter content, and five analytes (dopamine, N-acetyloctopamine, N-acetylserotonin, N-acetyltyramine and N-acetyldopamine) were found to correspond well with previously reported values. PMID:23387977
Farajmand, Bahman; Esteki, Mahnaz; Koohpour, Elham; Salmani, Vahid
2017-04-01
The reversed-phase mode of single drop microextraction has been used as a preparation method for the extraction of some phenolic antioxidants from edible oil samples. Butylated hydroxyanisole, tert-butylhydroquinone and butylated hydroxytoluene were employed as target compounds for this study. High-performance liquid chromatography followed by fluorescence detection was applied for final determination of the target compounds. The most interesting feature of this study is the application of a disposable insulin syringe, with some modification, for the microextraction procedure, which efficiently improved the volume and stability of the solvent microdrop. Different parameters such as the type and volume of solvent, sample stirring rate, extraction temperature, and time were investigated and optimized. Analytical performance of the method was evaluated under optimized conditions. Under the optimal conditions, relative standard deviations were between 4.4 and 10.2%. Linear dynamic ranges spanned 2-1000 to 20-10 000 μg/g, depending on the analyte. Detection limits were 5-670 ng/g. Finally, the proposed method was successfully used for quantification of the antioxidants in some edible oil samples purchased from the market. Relative recoveries were achieved from 88 to 111%. The proposed method offers simplicity of operation, low cost, and successful application to real samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A predictive control framework for optimal energy extraction of wind farms
NASA Astrophysics Data System (ADS)
Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.
2016-09-01
This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost-effective way, and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two-turbine test case through simulations.
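The receding-horizon loop described above (optimize over the horizon, apply only the first move, re-optimize at the next sample time) can be sketched with a linear-quadratic stand-in for the adjoint-based wind-farm solver; the toy dynamics and Riccati solver below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def first_move(A, B, Q, R, x, N):
    """Solve the N-step LQ problem by backward Riccati recursion and
    return only the first control move of the optimal plan."""
    P, K = Q.copy(), None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x

def run_mpc(A, B, Q, R, x0, N, steps):
    """Receding horizon: apply the first optimal move, advance one
    sample time, then re-optimize from the new state (the feedback)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = first_move(A, B, Q, R, x, N)
        x = A @ x + B @ u            # plant advances; loop closes via x
    return x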
Trace DNA Sampling Success from Evidence Items Commonly Encountered in Forensic Casework.
Dziak, Renata; Peneder, Amy; Buetter, Alicia; Hageman, Cecilia
2018-05-01
Trace DNA analysis is a significant part of a forensic laboratory's workload. Knowing optimal sampling strategies and item success rates for particular item types can assist in evidence selection and examination processes and shorten turnaround times. In this study, forensic short tandem repeat (STR) casework results were reviewed to determine how often STR profiles suitable for comparison were obtained from "handler" and "wearer" areas of 764 items commonly submitted for examination. One hundred and fifty-five (155) items obtained from volunteers were also sampled. Items were analyzed for best sampling location and strategy. For casework items, headwear and gloves provided the highest success rates. Experimentally, eyeglasses and earphones, T-shirts, fabric gloves and watches provided the highest success rates. Eyeglasses and latex gloves provided optimal results if the entire surfaces were swabbed. In general, at least 10%, and up to 88% of all trace DNA analyses resulted in suitable STR profiles for comparison. © 2017 American Academy of Forensic Sciences.
Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S
2016-03-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. © 2016 The Author(s).
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few possibilities. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
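The randomly sampled-data setting can be illustrated with a scalar plant held under zero-order-hold control across i.i.d. random sampling intervals. The plant, gain, and uniform jitter model below are illustrative assumptions, a toy stand-in for the thesis's LQR formulation rather than its suboptimal controller.

```python
import numpy as np

def simulate_random_sampling(a, gain, t_mean, jitter, steps, x0, seed=0):
    """Scalar plant dx/dt = a*x + u with a fixed linear control law held
    constant over i.i.d. random sampling intervals (sampling jitter)."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        dt = rng.uniform(t_mean - jitter, t_mean + jitter)  # random interval
        u = -gain * x                       # linear control, zero-order hold
        e = np.exp(a * dt)
        x = e * x + (e - 1.0) / a * u       # exact ZOH state update
    return x
```

With a = 0.5 (open-loop unstable) and gain 2, the per-interval closed-loop factor stays below 1 for every interval in [0.05, 0.15], so the state decays despite the random sample times, illustrating stability in the mean-square sense.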
Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David
2016-01-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay.
Soejima, Mikiko; Egashira, Kouichi; Kawano, Hiroyuki; Kawaguchi, Atsushi; Sagawa, Kimitaka; Koda, Yoshiro
2011-01-01
Anhaptoglobinemic patients run the risk of severe anaphylactic transfusion reaction because they produce serum haptoglobin antibodies. Being homozygous for the haptoglobin gene deletion allele (HPdel) is the only known cause of congenital anhaptoglobinemia, and detection of HPdel before transfusion is important to prevent anaphylactic shock. In this study, we developed a loop-mediated isothermal amplification (LAMP)-based screening for HPdel. Optimal primer sets and temperature for LAMP were selected for HPdel and the 5′ region of the HP using genomic DNA as a template. Then, the effects of diluent and boiling on LAMP amplification were examined using whole blood as a template. Blood samples diluted 1:100 with 50 mmol/L NaOH without boiling gave optimal results as well as those diluted 1:2 with water followed by boiling. The results from 100 blood samples were fully concordant with those obtained by real-time PCR methods. Detection of the HPdel allele by LAMP using alkaline-denatured blood samples is rapid, simple, accurate, and cost effective, and is readily applicable in various clinical settings because this method requires only basic instruments. In addition, the simple preparation of blood samples using NaOH saves time and effort for various genetic tests. PMID:21497293
NASA Astrophysics Data System (ADS)
Mogolodi Dimpe, K.; Mpupa, Anele; Nomngongo, Philiswa N.
2018-01-01
This work was motivated by the continued consumption of antibiotics, which can harm animals and humans when the compounds reach water systems. In this study, activated carbon (AC) was used as a solid phase material for the removal of sulfamethoxazole (SMX) from wastewater samples. Microwave-assisted solid phase extraction (MASPE) was employed to extract SMX from water samples, and SMX was quantified by UV-Vis spectrophotometry. The MASPE method was optimized using a two-level fractional factorial design, evaluating parameters such as pH, mass of adsorbent (MA), extraction time (ET), eluent ratio (ER) and microwave power (MP). Under optimized conditions, the limit of detection (LOD) and limit of quantification (LOQ) were 0.5 μg L−1 and 1.7 μg L−1, respectively, and intraday and interday precision, expressed as relative standard deviation, were below 6%. The maximum adsorption capacity was 138 mg g−1 for SMX, and the adsorbent could be reused eight times. Lastly, the MASPE method was applied to the removal of SMX from wastewater samples collected from a domestic wastewater treatment plant (WWTP) and from river water.
Tokalıoğlu, Şerife; Yavuz, Emre; Demir, Selçuk; Patat, Şaban
2017-12-15
In this study, a zirconium-based highly porous metal-organic framework, MOF-545, was synthesized and characterized. The surface area of MOF-545 was found to be 2192 m²/g. This material was used for the first time as an adsorbent for the vortex-assisted solid phase extraction of Pb(II) from cereal, beverage and water samples. Lead in solution was determined by FAAS. The optimal experimental conditions were as follows: amount of MOF-545, 10 mg; sample pH, 7; adsorption and elution time, 15 min; and elution solvent, 2 mL of 1 mol L−1 HCl. Under the optimal conditions of the method, the limit of detection, preconcentration factor and precision (RSD%) were found to be 1.78 μg L−1, 125 and 2.6%, respectively. The adsorption capacity of the adsorbent for lead was found to be 73 mg g−1. The method was successfully verified by analyzing two certified reference materials (BCR-482 Lichen and SPS-WW1 Batch 114) and spiked chickpea, bean, wheat, lentil, cherry juice, mineral water, well water and wastewater samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alemany, Kristina
Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.
Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras
2016-04-01
There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection when hundreds of samples should be analyzed in a short period of time to assure their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a <4 h magnetic bead-based process with excellent yield and good repeatability. This article demonstrates the next level of this work by automating all steps of the optimized magnetic bead-based protocol from endoglycosidase digestion, through fluorophore labeling and cleanup with high-throughput sample processing in 96-well plate format, using an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid (<3 min) separation to accommodate the high-throughput processing of the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences. © 2015 Society for Laboratory Automation and Screening.
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which simultaneously perform the encapsulation of DLD-sorted particles and the balancing of output pressures. The optimized pressures to apply to the DLD modules and the T-junctions are determined by a general model that ensures equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A
2007-01-15
Two digestion procedures were tested on nut samples for the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES): wet digestion with HNO3/H2SO4 and with HNO3/H2SO4/H2O2. The latter is recommended for better analyte recoveries (relative error < 11%). Two calibration procedures (aqueous standard and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO3, H2SO4 and H2O2 volumes, digestion time, pre-digestion time, hot-plate temperature and sample weight) were used to optimize the sample digestion procedure. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO3 and H2O2 volumes and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions), considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, including limits of detection (LOD < 0.74 μg g−1), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed procedures. The good agreement between measured and certified values for all analytes (relative error < 11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
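An eight-run Plackett-Burman screen for seven factors, as used in the abstract, can be sketched via the Sylvester-Hadamard construction (for power-of-two run counts the Plackett-Burman design coincides with a saturated two-level fractional factorial). The factor labels echo the abstract; the coding and the `main_effect` helper are illustrative.

```python
import numpy as np

# Seven digestion factors screened in eight runs (coded -1/+1 levels).
factors = ["HNO3 vol", "H2SO4 vol", "H2O2 vol", "digestion time",
           "pre-digestion time", "hot-plate temp", "sample weight"]

# Sylvester construction of an 8x8 Hadamard matrix; dropping the
# all-ones column leaves a saturated 8-run, 7-factor screening design.
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)
design = H8[:, 1:]                      # 8 runs x 7 factors

# Orthogonality: every pair of factor columns is uncorrelated, so
# all seven main effects can be estimated independently.
assert np.array_equal(design.T @ design, 8 * np.eye(7, dtype=int))

def main_effect(y, j):
    """Main effect of factor j from the 8 responses y: contrast / 4."""
    return design[:, j] @ y / 4.0
```

With eight runs and seven factors there are no degrees of freedom left for interactions, which is exactly why this design suits a screening step that only ranks factor importance.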
Escudero, Luis A; Cerutti, S; Olsina, R A; Salonia, J A; Gasquez, J A
2010-11-15
An on-line preconcentration procedure using solid phase extraction (SPE) for the determination of copper in different water samples by inductively coupled plasma optical emission spectrometry (ICP-OES) is proposed. Copper was retained on a minicolumn filled with ethyl vinyl acetate (EVA) at pH 8.0 without using any complexing reagent. The experimental optimization step was performed using a two-level full factorial design. The results showed that pH, sample loading flow rate, and their interaction (at the tested levels) were statistically significant. To determine the best conditions for preconcentration and determination of copper, a final optimization of the significant factors was carried out using a central composite design (CCD). The calibration graph was linear, with a regression coefficient of 0.995, from levels near the detection limit up to at least 300 μg L−1. An enrichment factor (EF) of 54 with a preconcentration time of 187.5 s was obtained. The limit of detection (3σ) was 0.26 μg L−1. The sampling frequency of the developed methodology was about 15 samples/h. The relative standard deviation (RSD) for six replicates containing 50 μg L−1 of copper was 3.76%. The methodology was successfully applied to the determination of Cu in tap, mineral, and river water samples, and in a certified VKI standard reference material. Copyright © 2010 Elsevier B.V. All rights reserved.
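The reported figures of merit combine into the concentration efficiency often quoted for on-line preconcentration systems (enrichment factor delivered per minute of loading time); a back-of-the-envelope sketch using the abstract's numbers:

```python
# Concentration efficiency (CE) as commonly defined for on-line
# preconcentration: enrichment factor per minute of loading time.
# EF, loading time and throughput below are taken from the abstract.
EF = 54
t_load_s = 187.5               # preconcentration (loading) time, s
CE = EF / (t_load_s / 60)      # EF delivered per minute of loading

throughput = 15                # samples per hour (reported)
t_cycle_s = 3600 / throughput  # full cycle incl. elution and wash

print(f"CE = {CE:.2f} min^-1, full cycle = {t_cycle_s:.0f} s")
```

The full 240 s cycle exceeds the 187.5 s loading step, consistent with extra time for elution and washing between samples.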
Preparation of alpha-emitting nuclides by electrodeposition
NASA Astrophysics Data System (ADS)
Lee, M. H.; Lee, C. W.
2000-06-01
A method is described for electrodepositing alpha-emitting nuclides. To determine the optimum conditions for plating plutonium, the effects of electrolyte concentration, chelating reagent, current, electrolyte pH and plating time on the electrodeposition were investigated using an ammonium oxalate-ammonium sulfate electrolyte containing diethylenetriaminepentaacetic acid (DTPA). The optimized electrodeposition procedure for the determination of plutonium was validated by application to environmental samples; its chemical yield for the electrodeposition step was slightly higher than that of Talvitie's method. The developed procedure was also applied to other radionuclides, such as thorium, uranium and americium, for which the electrodeposition yields were likewise slightly higher than those of the conventional method.
Ramírez-Godínez, Juan; Jaimez-Ordaz, Judith; Castañeda-Ovando, Araceli; Añorve-Morga, Javier; Salazar-Pereda, Verónica; González-Olivares, Luis Guillermo; Contreras-López, Elizabeth
2017-03-01
Since ancient times, ginger (Zingiber officinale) has been widely used for culinary and medicinal purposes. This rhizome possesses several chemical constituents; most of them present antioxidant capacity due mainly to the presence of phenolic compounds. Thus, the physical conditions for the optimal extraction of antioxidant components of ginger were investigated by applying a Box-Behnken experimental design. Extracts of ginger were prepared using water as solvent in a conventional solid-liquid extraction. The analyzed variables were time (5, 15 and 25 min), temperature (20, 55 and 90 °C) and sample concentration (2, 6 and 10%). The antioxidant activity was measured using the 2,2-diphenyl-1-picrylhydrazyl method and a modified ferric reducing antioxidant power assay, while total phenolics were measured by the Folin-Ciocalteu method. The suggested experimental design allowed the acquisition of aqueous extracts of ginger with diverse antioxidant activity (100-555 mg Trolox/100 g, 147-1237 mg Fe²⁺/100 g and 50-332 mg gallic acid/100 g). Temperature was the determining factor in the extraction of components with antioxidant activity, regardless of time and sample quantity. The optimal physical conditions that allowed the highest antioxidant activity were 90 °C, 15 min and 2% sample concentration. The correlation between the antioxidant activity by the ferric reducing antioxidant power assay and the content of total phenolics was R² = 0.83. The experimental design applied allowed the determination of the physical conditions under which ginger aqueous extracts liberate compounds with antioxidant activity. Most of them are of the phenolic type, as demonstrated through the correlation established between the different methods used to measure antioxidant capacity.
You, David J; Geshell, Kenneth J; Yoon, Jeong-Yeol
2011-10-15
Direct and sensitive detection of foodborne pathogens from fresh produce samples was accomplished using a handheld lab-on-a-chip device, requiring little to no sample processing and enrichment steps for a near-real-time detection and truly field-deployable device. The detection of Escherichia coli K12 and O157:H7 in iceberg lettuce was achieved utilizing optimized Mie light scatter parameters with a latex particle immunoagglutination assay. The system exhibited good sensitivity, with a limit of detection of 10 CFU mL(-1) and an assay time of <6 min. Minimal pretreatment with no detrimental effects on assay sensitivity and reproducibility was accomplished with a simple and cost-effective KimWipes filter and disposable syringe. Mie simulations were used to determine the optimal parameters (particle size d, wavelength λ, and scatter angle θ) for the assay that maximize light scatter intensity of agglutinated latex microparticles and minimize light scatter intensity of the tissue fragments of iceberg lettuce, which were experimentally validated. This introduces a powerful method for detecting foodborne pathogens in fresh produce and other potential sample matrices. The integration of a multi-channel microfluidic chip allowed for differential detection of the agglutinated particles in the presence of the antigen, revealing a true field-deployable detection system with decreased assay time and improved robustness over comparable benchtop systems. Additionally, two sample preparation methods were evaluated through simulated field studies based on overall sensitivity, protocol complexity, and assay time. Preparation of the plant tissue sample by grinding resulted in a two-fold improvement in scatter intensity over washing, accompanied with a significant increase in assay time: ∼5 min (grinding) versus ∼1 min (washing). Specificity studies demonstrated binding of E. coli O157:H7 EDL933 to only O157:H7 antibody conjugated particles, with no cross-reactivity to K12. 
This suggests the adaptability of the system for use with a wide variety of pathogens, and the potential to detect in a variety of biological matrices with little to no sample pretreatment. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, X; Li, X; Zhang, J
Purpose: To develop a delivery-efficient proton spot-scanning arc therapy technique with robust plan quality. Methods: We developed a Scanning Proton Arc (SPArc) optimization algorithm integrating (1) control point re-sampling, splitting each control point into adjacent sub-control points; (2) energy layer re-distribution, assigning the original energy layers to the new sub-control points; (3) energy layer filtration, deleting low-MU-weighted energy layers; and (4) energy layer re-sampling, adding layers to ensure an optimal solution. A bilateral head-and-neck oropharynx case and a non-mobile lung target case were tested. Plan quality and total estimated delivery time were compared to the original robust-optimized multi-field step-and-shoot arc plan without SPArc optimization (Arc-multi-field) and to standard robust-optimized intensity-modulated proton therapy (IMPT) plans. Dose-volume histograms (DVH) of the target and organs-at-risk (OARs) were analyzed along with all worst-case scenarios. Total delivery time was calculated assuming a 360-degree gantry room with 1 RPM rotation speed, 2 ms spot switching time, 1 nA beam current, 0.01 MU minimum spot weighting, and energy-layer switching time (ELST) from 0.5 to 4 s. Results: Compared to IMPT, SPArc delivered less integral dose (−14% lung, −8% oropharynx). For the lung case, SPArc reduced the maximum skin dose by 60%, the maximum rib dose by 35% and the mean lung dose by 15%. The conformity index improved from 7.6 (IMPT) to 4.0 (SPArc). Compared to Arc-multi-field, SPArc reduced the number of energy layers by 61% (276 layers, lung) and 80% (1008 layers, oropharynx) while maintaining the same robust plan quality. With ELST from 0.5 s to 4 s, SPArc reduced delivery time by 55-60% for the lung case and 56-67% for the oropharynx case relative to Arc-multi-field. Conclusion: SPArc is the first robust and delivery-efficient proton spot-scanning arc therapy technique that could be implemented in routine clinical practice. For modern proton machines with ELST close to 0.5 s, SPArc could become an attractive treatment option for both single- and multi-room centers.
Şakıyan, Özge
2015-05-01
The aim of the present work was to optimize the formulation of a functional soy cake to be baked in an infrared-microwave combination oven, using response surface methodology, and to optimize the processing conditions of combination baking. The independent variables were baking time (8, 9, 10 min), soy flour concentration (30, 40, 50%) and DATEM (diacetyltartaric acid esters of monoglycerides) concentration (0.4, 0.6 and 0.8%). The quality parameters examined were specific volume, weight loss, total color change and firmness of the cake samples. The results were analyzed by multiple regression, and the significant linear, quadratic, and interaction terms were used in a second-order mathematical model. The optimum baking time, soy flour concentration and DATEM concentration were found to be 9.5 min, 30% and 0.72%, respectively. The responses at the optimum point were comparable with those of conventionally baked soy cakes, indicating that high-quality soy cakes can be produced in a very short time in an infrared-microwave combination oven.
Manenti, Diego R; Módenes, Aparecido N; Soares, Petrick A; Boaventura, Rui A R; Palácio, Soraya M; Borba, Fernando H; Espinoza-Quiñones, Fernando R; Bergamasco, Rosângela; Vilar, Vítor J P
2015-01-01
In this work, the application of an iron-electrode-based electrocoagulation (EC) process to the treatment of a real textile wastewater (RTW) was investigated. In order to efficiently integrate the EC process with a biological oxidation step, enhanced biodegradability and low toxicity of the final compounds were sought. Optimal values of the EC reactor operation parameters (pH, current density and electrolysis time) were obtained by applying a full factorial 3³ experimental design. Biodegradability and toxicity assays were performed on treated RTW samples obtained at the optimal pH (7.0) and current density (142.9 A m−2) and at different electrolysis times. As response variables for the biodegradability and toxicity assessment, the Zahn-Wellens test (Dt), the ratio of dissolved organic carbon (DOC) to low-molecular-weight carboxylate anions (LMCA) and the lethal concentration 50 (LC50) were used. According to the Dt, the DOC/LMCA ratio and the LC50, an electrolysis time of 15 min, together with the optimal pH and current density, is suggested as suitable for a subsequent treatment stage based on biological oxidation.
Genova, Alessandro; Pavanello, Michele
2015-12-16
In order to approximately satisfy the Bloch theorem, simulations of complex materials involving periodic systems are made n(k) times more complex by the need to sample the first Brillouin zone at n(k) points. By combining ideas from Kohn-Sham density-functional theory (DFT) and orbital-free DFT, for which no sampling is needed due to the absence of waves, subsystem DFT offers an interesting middle ground capable of sizable theoretical speedups against Kohn-Sham DFT. By splitting the supersystem into interacting subsystems, and mapping their quantum problem onto separate auxiliary Kohn-Sham systems, subsystem DFT allows an optimal topical sampling of the Brillouin zone. We elucidate this concept with two proof-of-principle simulations: a water bilayer on Pt[1 1 1], and a complex system relevant to catalysis, a thiophene molecule physisorbed on a molybdenum sulfide monolayer deposited on top of an α-alumina support. For the latter system, a speedup of 300% is achieved against the subsystem DFT reference by using an optimized Brillouin zone sampling (600% against KS-DFT).
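The n(k)-point Brillouin-zone sampling in question is typically generated as a Monkhorst-Pack grid; a minimal generator is sketched below as an illustration, with no connection to the authors' actual code (the 4x4x1 grid is an arbitrary slab-style example).

```python
import itertools
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional k-points of a q1 x q2 x q3 Monkhorst-Pack grid:
    u_r = (2r - q - 1) / (2q),  r = 1..q,  along each axis."""
    axes = [np.array([(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)])
            for q in (q1, q2, q3)]
    return np.array(list(itertools.product(*axes)))

# Example: a 4x4x1 grid, as might suit a slab with one short direction.
kpts = monkhorst_pack(4, 4, 1)
# The grid is symmetric about the zone centre: for every k-point its
# negative is also in the grid, so k and -k solutions come in pairs.
```

The cost argument in the abstract follows directly: each additional k-point multiplies the number of Kohn-Sham problems to solve, which is what subsystem-specific sampling avoids.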
Xia, Zhining; Gan, Tingting; Chen, Hua; Lv, Rui; Wei, Weili; Yang, Fengqing
2010-10-01
A sample pre-concentration method based on in-line coupling of in-tube solid-phase microextraction and electrophoretic sweeping was developed for the analysis of hydrophobic compounds. The sample pre-concentration and electrophoretic separation were carried out simply and sequentially with a (35%-phenyl)-methylpolysiloxane-coated capillary. The developed method was validated and applied to enrich and separate several pharmaceuticals, including loratadine, indomethacin, ibuprofen and doxazosin. Several microextraction parameters, such as temperature, pH and eluent, were investigated, and the microemulsion concentration, which influences both separation and microextraction efficiency, was also studied. A central composite design was applied to optimize the sampling flow rate and sampling time, which interact with each other in a complex way. The precision, sensitivity and recovery of the method were investigated. Under the optimal conditions, the maximum enrichment factors for loratadine, indomethacin, ibuprofen and doxazosin in aqueous solutions were 1355, 571, 523 and 318, respectively. In addition, the developed method was applied to determine loratadine in a rabbit blood sample.
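A central composite design for two interacting factors, as used here for sampling flow rate and sampling time, can be written down directly in coded units; the sketch below uses the rotatable axial distance α = √2 and replicated centre points (the run counts are conventional choices, not the paper's).

```python
import numpy as np

# Two-factor rotatable CCD in coded units for the two interacting
# variables (sampling flow rate, sampling time). Axial distance
# alpha = sqrt(2) makes the design rotatable.
alpha = np.sqrt(2.0)
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 5                 # replicated centre points
ccd = np.array(factorial + axial + center)   # 13 runs x 2 factors

# A full quadratic model
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# is estimable from these runs; the fitted surface's stationary point
# gives the predicted optimum of the two settings.
X = np.column_stack([np.ones(len(ccd)), ccd[:, 0], ccd[:, 1],
                     ccd[:, 0] * ccd[:, 1], ccd[:, 0]**2, ccd[:, 1]**2])
assert np.linalg.matrix_rank(X) == 6   # all six coefficients estimable
```

Unlike a two-level factorial, the axial and centre points let the model detect curvature, which is what makes the design suitable for locating an interior optimum.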
Advanced Intelligent System Application to Load Forecasting and Control for Hybrid Electric Bus
NASA Technical Reports Server (NTRS)
Momoh, James; Chattopadhyay, Deb; Elfayoumy, Mahmoud
1996-01-01
The primary motivation for this research emanates from providing a decision support system to the electric bus operators in the municipal and urban localities which will guide the operators to maintain an optimal compromise among the noise level, pollution level, fuel usage etc. This study is backed up by our previous studies on study of battery characteristics, permanent magnet DC motor studies and electric traction motor size studies completed in the first year. The operator of the Hybrid Electric Car must determine optimal power management schedule to meet a given load demand for different weather and road conditions. The decision support system for the bus operator comprises three sub-tasks viz. forecast of the electrical load for the route to be traversed divided into specified time periods (few minutes); deriving an optimal 'plan' or 'preschedule' based on the load forecast for the entire time-horizon (i.e., for all time periods) ahead of time; and finally employing corrective control action to monitor and modify the optimal plan in real-time. A fully connected artificial neural network (ANN) model is developed for forecasting the kW requirement for hybrid electric bus based on inputs like climatic conditions, passenger load, road inclination, etc. The ANN model is trained using back-propagation algorithm employing improved optimization techniques like projected Lagrangian technique. The pre-scheduler is based on a Goal-Programming (GP) optimization model with noise, pollution and fuel usage as the three objectives. GP has the capability of analyzing the trade-off among the conflicting objectives and arriving at the optimal activity levels, e.g., throttle settings. The corrective control action or the third sub-task is formulated as an optimal control model with inputs from the real-time data base as well as the GP model to minimize the error (or deviation) from the optimal plan. 
These three activities are linked, with the ANN forecaster providing its output to the GP model, which in turn produces the pre-schedule for the optimal control model. Some preliminary results based on a hypothetical test case will be presented for the load forecasting module. The computer codes for the three modules will be made available for adoption by bus operating agencies. Sample results will be provided using these models. The software will be a useful tool for supporting the control systems for the Electric Bus project of NASA.
Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization
NASA Astrophysics Data System (ADS)
Dean, Garrett James
Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers, simple empirical models of gamma spectral features, and simple detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time and original composition. Sequential, time correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse irradiated nuclear material has a characteristic feature that irradiation time and flux cannot be independently estimated. 
The algorithm has been tested against pulse-irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. It is capable of determining the cooling time and the fluence to which the sample was exposed to within 10%, as well as roughly estimating the relative concentrations of nuclides present in the original composition.
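The hybrid global/local search strategy described above can be sketched in miniature. Everything below is hypothetical: the parameter bounds, the stand-in misfit function and the step schedule are illustrative only, since the real objective couples an irradiation/decay/detector forward model to measured photopeak areas.

```python
import numpy as np

# Toy hybrid global + local search in the spirit of the dissertation's
# approach. The "forward model" here is a stand-in (a synthetic misfit
# against a known parameter vector); the real model would simulate
# irradiation, decay, and detector response.

rng = np.random.default_rng(0)
true_params = np.array([3.0e14, 120.0])  # hypothetical fluence (n/cm^2), cooling time (s)

def misfit(p):
    # Stand-in objective: squared log-residual against "measured" values.
    return float(np.sum((np.log(p) - np.log(true_params)) ** 2))

bounds = np.array([[1e13, 1e16], [10.0, 1e4]])

# Stage 1: global exploration with log-uniform random sampling.
cands = np.exp(rng.uniform(np.log(bounds[:, 0]), np.log(bounds[:, 1]), size=(200, 2)))
best = min(cands, key=misfit)

# Stage 2: local refinement with a shrinking-step pattern search in log space.
step = 0.5
for _ in range(60):
    trials = best * np.exp(step * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]))
    cand = min(trials, key=misfit)
    if misfit(cand) < misfit(best):
        best = cand          # accept the improving move
    else:
        step *= 0.5          # no improvement: tighten the search
```

The global stage keeps the search from stalling in a local basin; the local stage delivers the fast refinement that makes minutes-scale in situ analysis plausible.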
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research on lentic fish sampling methodology has targeted recreationally important species, and little information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48-1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated little additional benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear type and season combinations (N = 19) from the six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative for the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
Ng, Nyuk Ting; Sanagi, Mohd Marsin; Wan Ibrahim, Wan Nazihah; Wan Ibrahim, Wan Aini
2017-05-01
Agarose-chitosan-immobilized octadecylsilyl-silica (C18) film micro-solid-phase extraction (μSPE) was developed and applied for the determination of phenanthrene (PHE) and pyrene (PYR) in chrysanthemum tea samples using high-performance liquid chromatography with ultraviolet detection (HPLC-UV). The film of blended agarose and chitosan allows good dispersion of C18, prevents leaching of C18 during application and enhances the film's mechanical stability. Important μSPE parameters were optimized, including amount of sorbent loading, extraction time, desorption solvent and desorption time. The matrix-matched calibration curves showed good linearity (r ≥ 0.994) over a concentration range of 1-500 ppb. Under the optimized conditions, the proposed method showed good limits of detection (0.549-0.673 ppb), good analyte recoveries (100.8-105.99%) and good reproducibility (RSDs ≤ 13.53%, n = 3) with preconcentration factors of 4 and 72 for PHE and PYR, respectively.
[Application of an artificial neural network in the design of sustained-release dosage forms].
Wei, X H; Wu, J J; Liang, W Q
2001-09-01
To use an artificial neural network (ANN), built with the Matlab 5.1 toolboxes, to predict the formulations of sustained-release tablets. The solubilities of nine drugs and various HPMC:dextrin ratios for 63 tablet formulations were used as the ANN model input, and the in vitro cumulative release at six sampling times was used as output. The ANN model was constructed by selecting the optimal number of iterations (25) and a model structure with one hidden layer containing five nodes. The optimized ANN model was then used to predict formulations from desired target in vitro dissolution-time profiles. Profiles predicted by the ANN from the ANN-predicted formulations closely matched the target profiles. The ANN can thus be used both for predicting the dissolution profiles of sustained-release dosage forms and for the design of optimal formulations.
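The network topology the abstract describes (two inputs, one hidden layer of five nodes, six release-time outputs) is small enough to sketch directly. The data below are synthetic stand-ins; the paper trained on 63 real formulations, and the release function used here is invented purely so the sketch runs.

```python
import numpy as np

# Hedged sketch of the abstract's ANN: inputs = [drug solubility,
# HPMC:dextrin ratio] (scaled), outputs = cumulative release at six
# sampling times. One hidden tanh layer with five nodes, trained by
# plain full-batch gradient descent on synthetic data.

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(63, 2))            # 63 synthetic formulations
t = np.linspace(1, 6, 6)                       # six sampling times (h)
# Synthetic "release" target: slower release for higher HPMC fraction.
Y = 1 - np.exp(-np.outer(0.8 + X[:, 0] - 0.5 * X[:, 1], t) / 4)

W1 = rng.normal(0, 0.5, (2, 5)); b1 = np.zeros(5)   # 2 inputs -> 5 hidden
W2 = rng.normal(0, 0.5, (5, 6)); b2 = np.zeros(6)   # 5 hidden -> 6 outputs

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    E = P - Y                                  # output error
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)             # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

Inverting the trained model (searching the input space for a formulation whose predicted profile matches a target) is what the paper then does for formulation design.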
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.
Optimizing Urine Processing Protocols for Protein and Metabolite Detection.
Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Will, Thompson J; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K
In urine, factors such as the timing of voids and duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid with detection, but can add complexity to sample collection or analysis. We aimed to identify the optimal urine processing protocol for clinically obtained urine samples that allows the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day. Women collected their first morning (1st AM) void and another "random void". Random voids were aliquoted with: 1) no additive; 2) boric acid (BA); 3) protease inhibitor (PI); or 4) both BA + PI. Of these aliquots, some were immediately stored at 4°C, and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35 and 65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM voids than in random "spot" voids. The addition of BA did not significantly change protein concentrations, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions we tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed in the same day, BA does not appear to be necessary, while the addition of PI enhances protein yields, regardless of 4°C or RT storage temperature.
Hong, Bo; Wang, Zhe; Xu, Tianjiao; Li, Chengchong; Li, Wenjing
2015-03-25
A simple and low-cost method based on matrix solid-phase dispersion (MSPD) extraction, HPLC separation with diode array detection, and UPLC-Q-TOF-MS has been developed for the determination of hydroxysafflor yellow A (HSYA), kaempferol and other main compounds in Carthamus tinctorius. The experimental parameters that may affect the MSPD method, including the dispersing sorbent, the ratio of dispersing sorbent to sample, the elution solvent, and the volume of the elution solvent, were examined and optimized. Under the optimized conditions, silica gel was used as the dispersing sorbent, the silica gel-to-sample mass ratio was 3:1, and 10 mL of methanol:water (1:3, v:v) was used as the elution solvent. The highest extraction yields of the two compounds were obtained under these conditions. The method showed good linearity (r² ≥ 0.9992) and precision (RSD ≤ 3.4%) for HSYA and kaempferol, with limits of detection of 35.2 and 14.5 ng mL(-1), respectively. Recoveries were in the range of 92.62-101.7%, with RSD values ranging from 1.5 to 3.5%. In addition, 21 compounds in the MSPD extract were identified by TOF-MS to improve quality control for safflower. Compared with ultrasonic and Soxhlet methods, the proposed MSPD procedure was more convenient and less time-consuming, with reduced sample and solvent requirements. The proposed procedure was applied to analyze four real samples collected from different localities.
Fernández, Elena; Vidal, Lorena; Canals, Antonio
2016-08-05
This study reports a new composite based on ZSM-5 zeolite decorated with iron oxide magnetic nanoparticles as a valuable sorbent for magnetic solid-phase extraction (MSPE). A proposal is made to determine benzene, toluene, ethylbenzene and xylenes (BTEX) as model analytes in water samples using gas chromatography-mass spectrometry. A two-step multivariate optimization strategy, using Plackett-Burman and circumscribed central composite designs, was employed to optimize the experimental parameters affecting MSPE. The method was evaluated under optimized extraction conditions (i.e., amount of sorbent, 138 mg; extraction time, 11 min; sample pH, the pH of the water (i.e., 5.5-6.5); eluent solvent volume, 0.5 mL; and elution time, 5 min), obtaining a linear response from 1 to 100 μg L(-1) for benzene; from 10 to 100 μg L(-1) for toluene, ethylbenzene and o-xylene; and from 10 to 75 μg L(-1) for m,p-xylene. The repeatability of the proposed method was evaluated at a 40 μg L(-1) spiking level, and coefficients of variation ranged between 8 and 11% (n = 5). Limits of detection were found to be 0.3 μg L(-1) for benzene and 3 μg L(-1) for the other analytes. These values satisfy the current Environmental Protection Agency and European Union regulations for BTEX content in water for human consumption. Finally, drinking water, wastewater and river water were selected as real water samples to assess the applicability of the method. Relative recoveries varied between 85% and 114%, showing negligible matrix effects.
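The screening stage of a multivariate optimization like the one above rests on two-level designs with orthogonal factor columns. As a hedged sketch (not the specific design the authors ran), for run counts that are powers of two such designs can be built from a Sylvester Hadamard matrix; 8 runs screen up to 7 factors:

```python
import numpy as np

# Two-level screening design via the Sylvester Hadamard construction:
# H(2n) = [[H, H], [H, -H]]. Dropping the constant first column leaves
# an 8-run design for up to 7 factors at +1/-1 levels, with mutually
# orthogonal columns (the property Plackett-Burman-type designs exploit).

def hadamard(n):
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
design = H[:, 1:]   # 8 runs x 7 factor columns, entries +1 / -1
```

Each row is one experiment; mapping ±1 to low/high factor settings (sorbent mass, extraction time, pH, ...) lets main effects be estimated independently before the central composite stage refines the significant factors.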
NASA Astrophysics Data System (ADS)
Enzenhöfer, R.; Geiges, A.; Nowak, W.
2011-12-01
Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative, safety margins.
In order to allow an optimal choice of sampling strategies, we compare the trade-off between monitoring costs and delineation costs by accounting for ill-delineated fractions of protection zones. Within an illustrative, simplified 2D synthetic test case, we demonstrate our concept, involving synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduction in delineated area at a user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be achieved at low to moderate additional cost compared to conventional delineation strategies.
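The temporal-moment idea above (working with moments instead of full breakthrough curves) has a simple core: the mean solute arrival time is the ratio of the first to the zeroth temporal moment. A minimal sketch on a synthetic curve, with made-up numbers:

```python
import numpy as np

# Zeroth and first temporal moments of a breakthrough curve c(t),
#   m0 = integral c dt,  m1 = integral t*c dt,  mean arrival = m1/m0.
# The Gaussian-like curve below is synthetic; the abstract's framework
# obtains moments directly from the reverse moment transport equation.

t = np.linspace(0.0, 200.0, 2001)                 # days
c = np.exp(-0.5 * ((t - 80.0) / 12.0) ** 2)       # synthetic breakthrough curve

dt = t[1] - t[0]
m0 = np.sum(c) * dt                                # zeroth temporal moment
m1 = np.sum(t * c) * dt                            # first temporal moment
mean_arrival = m1 / m0
```

Storing only a few moments per realization is what keeps high-resolution Monte-Carlo tractable; the full curve is reconstructed from moments only in post-processing.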
NASA Astrophysics Data System (ADS)
Ghosh, Pratik
1992-01-01
The investigations focused on in vivo NMR imaging studies of magnetic particles with and within neural cells. NMR imaging methods, both Fourier transform and projection reconstruction, were implemented, and new protocols were developed to perform "Neuronal Tracing with Magnetic Labels" on small animal brains. Having performed the preliminary experiments with neuronal tracing, new optimized coils and experimental set-ups were devised. A novel gradient coil technology along with new rf-coils was implemented and optimized for future use with small animals. A new magnetic labelling procedure was developed that allowed labelling of billions of cells with ultra-small magnetite particles in a short time. The relationships among the viability of such cells, the amount of label and the contrast in the images were studied as quantitatively as possible. Intracerebral grafting of magnetite-labelled fetal rat brain cells made it possible for the first time to attempt monitoring in vivo the survival, differentiation, and possible migration of both host and grafted cells in the host rat brain. This constituted the early steps toward future experiments that may lead to the monitoring of human brain grafts of fetal brain cells. Preliminary experiments with direct injection of horseradish peroxidase-conjugated magnetite particles into neurons, followed by NMR imaging, revealed a possible non-invasive alternative, allowing serial study of the dynamic transport pattern of tracers in single living animals. New gradient coils were built using parallel solid-conductor ribbon cables that could be wrapped easily and quickly. The rapid rise times provided by these coils allowed implementation of fast imaging methods. Optimized rf-coil circuit development made it possible to better understand the sample-coil properties and the associated trade-offs in the case of small but conducting samples.
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
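The sequential-hypothesis-testing core of ASAP can be sketched in a few lines: accumulate a log Bayes factor between two candidate generative models as observations arrive, and stop once the evidence crosses a threshold. The models and data below are synthetic stand-ins (two Gaussian response models), not the realistic neurocognitive models the paper has in mind:

```python
import numpy as np

# Online Bayesian model comparison with a sequential stopping rule.
# Model A and model B predict different mean responses; each new
# observation updates the log Bayes factor, and acquisition stops
# when one model is 20 times more likely than the other.

rng = np.random.default_rng(2)

def loglik(y, mu, sigma=1.0):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - mu) / sigma) ** 2

mu_A, mu_B = 0.0, 0.8            # the two competing response models
log_bf, n = 0.0, 0               # log Bayes factor of B over A
while abs(log_bf) < np.log(20) and n < 200:   # stop at 20:1 evidence
    y = rng.normal(mu_B, 1.0)                 # "true" process follows model B
    log_bf += loglik(y, mu_B) - loglik(y, mu_A)
    n += 1
```

The step ASAP adds on top of this is design optimization: before each observation, the stimulus is chosen to maximize the expected discrimination between the remaining candidate models.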
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when it is subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) neural network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.
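The genetic-algorithm half of such a neuro-genetic system is easy to sketch. Here the trained MLP is replaced by a synthetic "phase sensitivity" surface with a known peak so the sketch is self-contained and testable; the parameter names and bounds are hypothetical:

```python
import numpy as np

# A simple real-coded genetic algorithm searching a 2-D conditioning-
# parameter space (e.g. scaled [DC level, excitation frequency]) for the
# point of maximum sensitivity, as predicted by a surrogate model.

rng = np.random.default_rng(3)
lo, hi = np.zeros(2), np.ones(2)

def sensitivity(p):                 # stand-in for the MLP prediction
    return -np.sum((p - np.array([0.3, 0.7])) ** 2, axis=-1)

pop = rng.uniform(lo, hi, (40, 2))
for _ in range(60):
    fit = sensitivity(pop)
    parents = pop[np.argsort(fit)[-20:]]                   # truncation selection
    mates = parents[rng.integers(0, 20, (40, 2))]          # random parent pairs
    alpha = rng.uniform(size=(40, 1))
    pop = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # blend crossover
    pop += rng.normal(0, 0.02, pop.shape)                  # Gaussian mutation
    pop = np.clip(pop, lo, hi)

best = pop[np.argmax(sensitivity(pop))]                    # near the surrogate peak
```

Because each fitness evaluation is just a surrogate-model call rather than a bench measurement, the GA can afford thousands of evaluations, which is exactly the point of pairing it with the MLP.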
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
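The kind of computation the autopilot automates can be illustrated on a deliberately simplified plant: placing the closed-loop poles of a PI-controlled first-order loop to hit a target bandwidth. The plant gain and time constant below are made up, and the real algorithm works on the full FM-AFM transfer function rather than this toy model:

```python
import numpy as np

# PI gain selection by pole placement for a first-order plant
#   G(s) = K / (tau*s + 1),  C(s) = Kp + Ki/s.
# The closed-loop characteristic equation is
#   s^2 + ((1 + K*Kp)/tau) s + K*Ki/tau = s^2 + 2*zeta*wn*s + wn^2,
# so matching coefficients fixes Kp and Ki.

K, tau = 2.0, 1e-3                       # hypothetical plant parameters
wn, zeta = 2 * np.pi * 500, 1.0          # target: ~500 Hz, critically damped

Kp = (2 * zeta * wn * tau - 1) / K
Ki = wn**2 * tau / K

poles = np.roots([1, (1 + K * Kp) / tau, K * Ki / tau])  # both at -wn
```

The paper's contribution is doing the analogous pole/zero analysis on the measured closed-loop transfer function, with the unknown tip-sample coupling extracted rather than assumed.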
Torres Padrón, M E; Sosa Ferrera, Z; Santana Rodríguez, J J
2006-09-01
A solid-phase microextraction (SPME) procedure using two commercial fibers coupled with high-performance liquid chromatography (HPLC) is presented for the extraction and determination of organochlorine pesticides in water samples. We have evaluated the extraction efficiency of this kind of compound using two different fibers: 60-μm polydimethylsiloxane-divinylbenzene (PDMS-DVB) and Carbowax/TPR-100 (CW/TPR). Parameters involved in the extraction and desorption procedures (e.g. extraction time, ionic strength, extraction temperature, desorption and soaking time) were studied and optimized to achieve maximum efficiency. Results indicate that both the PDMS-DVB and CW/TPR fibers are suitable for the extraction of this type of compound, and a simple calibration curve method based on aqueous standards can be used. All the correlation coefficients were better than 0.9950, and the RSDs ranged from 7% to 13% for the 60-μm PDMS-DVB fiber and from 3% to 10% for the CW/TPR fiber. The optimized procedures were applied to the determination of a mixture of six organochlorine pesticides in environmental liquid samples (sea, sewage and ground waters), employing HPLC with a UV diode-array detector.
Wu, Xiaoling; Yang, Miyi; Zeng, Haozhe; Xi, Xuefei; Zhang, Sanbing; Lu, Runhua; Gao, Haixiang; Zhou, Wenfeng
2016-11-01
In this study, a simple effervescence-assisted dispersive solid-phase extraction method was developed to detect fungicides in honey and juice. Most significantly, an innovative ionic-liquid-modified magnetic β-cyclodextrin/attapulgite sorbent was used because its large specific surface area enhanced the extraction capacity and also led to facile separation. A one-factor-at-a-time approach and orthogonal design were employed to optimize the experimental parameters. Under the optimized conditions, the entire extraction procedure was completed within 3 min. In addition, the calibration curves exhibited good linearity, and high enrichment factors were achieved for pure water and honey samples. For the honey samples, the extraction efficiencies for the target fungicides ranged from 77.0 to 94.3% with relative standard deviations of 2.3-5.44%. The detection and quantitation limits were in the ranges of 0.07-0.38 and 0.23-1.27 μg/L, respectively. Finally, the developed technique was successfully applied to real samples, and satisfactory results were achieved. This analytical technique is cost-effective, environmentally friendly, and time-saving.
Fast, Safe, Propellant-Efficient Spacecraft Motion Planning Under Clohessy-Wiltshire-Hill Dynamics
NASA Technical Reports Server (NTRS)
Starek, Joseph A.; Schmerling, Edward; Maher, Gabriel D.; Barbee, Brent W.; Pavone, Marco
2016-01-01
This paper presents a sampling-based motion planning algorithm for real-time, propellant-optimized autonomous spacecraft trajectory generation in near-circular orbits. Specifically, this paper applies recent algorithmic advances in the field of robot motion planning to the problem of impulsively actuated, propellant-optimized rendezvous and proximity operations under the Clohessy-Wiltshire-Hill dynamics model. The approach calls upon a modified version of the FMT* algorithm to grow a set of feasible trajectories over a deterministic, low-dispersion set of sample points covering the free state space. To enforce safety, the tree is grown only over the subset of actively safe samples, from which there exists a feasible one-burn collision-avoidance maneuver that can safely circularize the spacecraft orbit along its coasting arc under a given set of potential thruster failures. Key features of the proposed algorithm include 1) theoretical guarantees in terms of trajectory safety and performance, 2) amenability to real-time implementation, and 3) generality, in the sense that a large class of constraints can be handled directly. As a result, the proposed algorithm offers the potential for widespread application, ranging from on-orbit satellite servicing to orbital debris removal and autonomous inspection missions.
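One concrete ingredient of such planners is the deterministic, low-dispersion sample set over which the tree is grown. As a hedged sketch (a common choice, not necessarily the exact sequence the paper uses), a Halton sequence generates such points; the tree growth, cost evaluation, and active-safety checks under CWH dynamics are beyond this sketch:

```python
import numpy as np

# Halton low-dispersion points: the radical inverse of the sample index
# in a distinct prime base per dimension fills [0, 1)^d far more evenly
# than i.i.d. random samples, which is what gives FMT*-style planners
# their deterministic coverage guarantees.

def halton(index, base):
    """Radical inverse of `index` in `base` -> value in [0, 1)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# 2-D points using bases 2 and 3; extend with 5, 7, 11, ... for the
# higher-dimensional position/velocity state space of a spacecraft.
points = np.array([[halton(i, 2), halton(i, 3)] for i in range(1, 101)])
```

Scaling these unit-cube points into the planner's state bounds yields the sample set; neighbor queries and feasibility checks then operate on that fixed point cloud.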
Evaluating the efficiency of environmental monitoring programs
Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina
2014-01-01
Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
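The bootstrapping analysis described for the forest inventory has a compact core: resample plots with replacement at a given sampling intensity and ask how wide the resulting confidence interval is. The plot data below are synthetic stand-ins for the Hubbard Brook inventory:

```python
import numpy as np

# Bootstrap confidence-interval width as a function of sampling intensity.
# A reduced inventory of n_plots is drawn from the full plot set; the
# 95% CI on the mean is estimated by resampling it with replacement.

rng = np.random.default_rng(4)
plots = rng.lognormal(mean=5.0, sigma=0.4, size=400)   # e.g. biomass per plot

def boot_ci_width(n_plots, n_boot=2000):
    sample = rng.choice(plots, n_plots, replace=False)  # a reduced inventory
    means = [rng.choice(sample, n_plots, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    return hi - lo

# CI width shrinks roughly like 1/sqrt(n): more plots, tighter estimate.
w50, w200 = boot_ci_width(50), boot_ci_width(200)
```

Sweeping `n_plots` and plotting the CI width against it is how one reads off the smallest sampling regime that still achieves a desired confidence interval.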
Fei Cheng; Lin Hou; Keith Woeste; Zhengchun Shang; Xiaobang Peng; Peng Zhao; Shuoxin Zhang
2016-01-01
Humic substances in soil DNA samples can influence the assessment of microbial diversity and community composition. Using multiple steps during or after cell lysis adds expense, is time-consuming, and causes DNA loss. A pretreatment of soil samples and a single-step DNA extraction may improve experimental results. In order to optimize a protocol for obtaining high...
Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R
2008-08-01
The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters of both the MAES and MAE processes. MAES, which had not previously been applied to this group of contaminants in aquaculture samples, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g(-1) (except for BB-15, which was 1.43 ng g(-1)). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.
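The desirability functions used here turn several competing responses into a single optimization target. A minimal Derringer-Suich-style sketch in Python follows; the response values, bounds and weights are hypothetical illustrations, not the paper's actual MAES data.

```python
import math

def desirability_max(y, low, target, weight=1.0):
    """Larger-is-better Derringer-Suich desirability, mapped into [0, 1]."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(ds):
    """Overall desirability D = geometric mean of the individual d_i."""
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical responses for one setting of the operational parameters:
d_recovery = desirability_max(92.0, low=60.0, target=100.0)  # extraction recovery (%)
d_lipids = desirability_max(0.85, low=0.0, target=1.0)       # 1 = minimal lipid co-extraction
D = overall_desirability([d_recovery, d_lipids])
print(round(D, 3))
```

Because D is a geometric mean, any single unacceptable response (d_i = 0) zeroes the whole score, which is what makes desirability functions useful for simultaneous optimization.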
Yuan, Wenjia; Shen, Weidong; Zhang, Yueguang; Liu, Xu
2014-05-05
A dielectric multilayer beam splitter with differential phase shift on transmission and reflection for a division-of-amplitude photopolarimeter (DOAP) is presented, to our knowledge for the first time. The optimal parameters for the beam splitter are Tp = 78.9%, Ts = 21.1% and Δr - Δt = π/2 at 532 nm at an angle of incidence of 45°. A multilayer anti-reflection coating with low phase shift was applied to reduce the backside reflection. Different design strategies that can achieve all optimal targets at this wavelength were tested, and two design methods were presented to optimize the differential phase shift. The samples were prepared by ion beam sputtering (IBS). The experimental results show good agreement with the design. The ellipsometric parameters of the samples were measured in reflection as (ψr, Δr) = (26.5°, 135.1°) and (28.2°, 133.5°), and in transmission as (ψt, Δt) = (62.5°, 46.1°) and (63.5°, 46°) at 532.6 nm. The normalized determinant of the instrument matrix, used to evaluate the performance of the samples, is 0.998 and 0.991, respectively, at 532.6 nm.
Towards an optimal flow: Density-of-states-informed replica-exchange simulations
Vogel, Thomas; Perez, Danny
2015-11-05
Replica exchange (RE) is one of the most popular enhanced-sampling simulation techniques in use today. Despite widespread successes, RE simulations can sometimes fail to converge in practical amounts of time, e.g., when sampling around phase transitions, or when a few hard-to-find configurations dominate the statistical averages. We introduce a generalized RE scheme, density-of-states-informed RE, that addresses some of these challenges. The key feature of our approach is to inform the simulation with readily available, but commonly unused, information on the density of states of the system as the RE simulation proceeds. This enables two improvements, namely, the introduction of resampling moves that actively move the system towards equilibrium and the continual adaptation of the optimal temperature set. As a consequence of these two innovations, we show that the configuration flow in temperature space is optimized and that the overall convergence of RE simulations can be dramatically accelerated.
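The temperature-swap move underlying any RE scheme, including this density-of-states-informed variant, accepts an exchange of configurations with the standard Metropolis probability min(1, exp[(β_i - β_j)(E_i - E_j)]). A minimal sketch (k_B = 1; the energies and temperatures are illustrative, not from the paper):

```python
import math

def swap_accept(e_i, e_j, t_i, t_j):
    """Metropolis acceptance probability for swapping configurations
    between replicas at temperatures t_i and t_j (k_B = 1)."""
    delta = (1.0 / t_i - 1.0 / t_j) * (e_i - e_j)
    return min(1.0, math.exp(delta))

# Pulling a high-energy configuration down to the cold replica is unlikely...
p_unfavorable = swap_accept(e_i=-100.0, e_j=-90.0, t_i=1.0, t_j=2.0)
# ...while the energetically favorable direction is always accepted.
p_favorable = swap_accept(e_i=-90.0, e_j=-100.0, t_i=1.0, t_j=2.0)
print(p_unfavorable, p_favorable)
```

Convergence of the configuration flow through temperature space depends on how often such swaps succeed, which is why the paper's continual adaptation of the temperature set matters.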
[Extraction and purification technologies of total flavonoids from Aconitum tanguticum].
Li, Yan-Rong; Yan, Li-Xin; Feng, Wei-Hong; Li, Chun; Wang, Zhi-Min
2014-04-01
To optimize the extraction and purification technologies of total flavonoids from the whole plant of Aconitum tanguticum. With total flavonoid content as the index, the optimal extraction conditions (alcohol concentration, solvent volume, extraction time and number of extractions) were selected by an orthogonal design. By comparing adsorption capacity (mg/g) and resolution ratio (%), four macroporous adsorption resins (D101, AB-8, X-5 and XAD-16) were evaluated for their ability to enrich total flavonoids from A. tanguticum, and the sample concentration and pH, loading amount, elution solvent, and loading and elution velocities were determined for the best resin. The content of total flavonoids in A. tanguticum was about 4.39%. The optimum extraction technique was reflux extraction with 70% alcohol three times, one hour each time, at a material-to-liquid ratio of 1:10 (w/v). The optimum purification technology was: XAD-16 macroporous resin, an initial total flavonoid concentration of 8 mg/mL, a loading amount of 112 mg/g dry resin, pH 5, a loading velocity of 3 mL/min, 70% ethanol as elution solvent and an elution velocity of 5 mL/min. Under these conditions, the average content of total flavonoids was raised from 4.39% to 46.19%. The optimized extraction and purification technologies are simple and reliable, and are suitable for industrial production of total flavonoids from A. tanguticum.
Limited sampling strategies to predict the area under the concentration-time curve for rifampicin.
Medellín-Garibay, Susanna E; Correa-López, Tania; Romero-Méndez, Carmen; Milán-Segovia, Rosa C; Romano-Moreno, Silvia
2014-12-01
Rifampicin (RMP) is the most effective first-line antituberculosis drug. One of the most critical aspects of using it in fixed-drug combination formulations is to ensure it reaches therapeutic levels in blood. The determination of the area under the concentration-time curve (AUC) and appropriate dose adjustment of this drug may contribute to optimization of therapy. Although the maximal concentration (Cmax) of RMP also predicts its sterilizing effect, the time to reach it (Tmax) ranges from 40 minutes to 6 hours. The aim of this study was to develop a limited sampling strategy (LSS) to assist therapeutic drug monitoring of RMP. Full concentration-time curves were obtained from 58 patients with tuberculosis (TB) after the oral administration of RMP in a fixed-drug combination formulation. A validated high-performance liquid chromatographic method was used. Pharmacokinetic parameters were estimated with a noncompartmental model. Generalized linear models were obtained by forward steps, and bootstrapping was performed to develop an LSS predicting the AUC from time 0 to the last measurement at 24 hours postdose (AUC0-24). The predictive performance of the proposed models was assessed using RMP profiles from 25 other TB patients by comparing predicted and observed AUC0-24. The mean AUC0-24 in the current study was 91.46 ± 36.7 mg·h/L, and the most convenient sampling time points to predict it were 2, 4 and 12 hours postdose (slope [m] = 0.955 ± 0.06; r = 0.92). The mean prediction error was -0.355%, and the root mean square error was 5.6% in the validation group. Alternate LSSs are proposed with 2 of these sampling time points, which also provide good predictions when the 3 most convenient are not feasible. The AUC0-24 for RMP in TB patients can be predicted with acceptable precision through a 2- or 3-point sampling strategy, despite wide interindividual variability.
These LSSs could be applied in clinical practice to optimize anti-TB therapy based on therapeutic drug monitoring.
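A limited sampling strategy of this kind is, at bottom, a regression of the full AUC0-24 on a few measured concentrations. The sketch below fits a one-point (2 h) LSS to hypothetical one-compartment profiles; neither the profiles nor the fitted coefficients are the study's data, they only illustrate the idea.

```python
import math
import random
import statistics

def trapezoid_auc(times, conc):
    """AUC by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical one-compartment profiles (not the study's patients)
rng = random.Random(7)
times = [0, 1, 2, 4, 6, 8, 12, 24]
aucs, c2s = [], []
for _ in range(40):
    ka = rng.uniform(0.8, 1.6)      # absorption rate constant (1/h)
    ke = rng.uniform(0.10, 0.25)    # elimination rate constant (1/h)
    scale = rng.uniform(8, 14)      # dose/volume scaling (mg/L)
    conc = [scale * (math.exp(-ke * t) - math.exp(-ka * t)) for t in times]
    aucs.append(trapezoid_auc(times, conc))
    c2s.append(conc[2])             # the single 2 h sample

slope, intercept = fit_line(c2s, aucs)
preds = [intercept + slope * c for c in c2s]
rel_rmse = math.sqrt(statistics.fmean(((p - a) / a) ** 2
                                      for p, a in zip(preds, aucs)))
print(f"1-point LSS: slope={slope:.2f}, relative RMSE={rel_rmse:.1%}")
```

Adding the 4 h and 12 h samples, as the paper does, turns this into a multiple regression and is what pushes the prediction error down to the few-percent range reported.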
Murphy, Helen R; Lee, Seulgi; da Silva, Alexandre J
2017-07-01
Cyclospora cayetanensis is a protozoan parasite that causes human diarrheal disease associated with the consumption of fresh produce or water contaminated with C. cayetanensis oocysts. In the United States, foodborne outbreaks of cyclosporiasis have been linked to various types of imported fresh produce, including cilantro and raspberries. An improved method was developed for identification of C. cayetanensis in produce at the U.S. Food and Drug Administration. The method relies on a 0.1% Alconox produce wash solution for efficient recovery of oocysts, a commercial kit for DNA template preparation, and an optimized TaqMan real-time PCR assay with an internal amplification control for molecular detection of the parasite. A single laboratory validation study was performed to assess the method's performance and compare the optimized TaqMan real-time PCR assay and a reference nested PCR assay by examining 128 samples. The samples consisted of 25 g of cilantro or 50 g of raspberries seeded with 0, 5, 10, or 200 C. cayetanensis oocysts. Detection rates for cilantro seeded with 5 and 10 oocysts were 50.0 and 87.5%, respectively, with the real-time PCR assay and 43.7 and 94.8%, respectively, with the nested PCR assay. Detection rates for raspberries seeded with 5 and 10 oocysts were 25.0 and 75.0%, respectively, with the real-time PCR assay and 18.8 and 68.8%, respectively, with the nested PCR assay. All unseeded samples were negative, and all samples seeded with 200 oocysts were positive. Detection rates using the two PCR methods were statistically similar, but the real-time PCR assay is less laborious and less prone to amplicon contamination and allows monitoring of amplification and analysis of results, making it more attractive to diagnostic testing laboratories. 
The improved sample preparation steps and the TaqMan real-time PCR assay provide a robust, streamlined, and rapid analytical procedure for surveillance, outbreak response, and regulatory testing of foods for detection of C. cayetanensis.
A novel heterogeneous training sample selection method on space-time adaptive processing
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The ground-target detection performance of space-time adaptive processing (STAP) degrades when training samples contaminated by target-like signals make the clutter power non-homogeneous. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean Hausdorff distance, so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the larger values are preferentially selected, realizing dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
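The similarity measure at the heart of this selection step can be illustrated with a symmetrized mean Hausdorff distance on toy point sets. Real STAP training samples are complex-valued space-time snapshots; the 2-D points below are simplified stand-ins used only to show the rejection logic.

```python
import math

def directed_mean_dist(A, B):
    """Mean, over points in A, of the distance to the nearest point in B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def mean_hausdorff(A, B):
    """Symmetrized mean Hausdorff distance between point sets A and B."""
    return 0.5 * (directed_mean_dist(A, B) + directed_mean_dist(B, A))

# Toy stand-ins for training-sample feature vectors
clean_1 = [(0.0, 0.1), (0.1, 0.0), (0.0, -0.1)]
clean_2 = [(0.1, 0.1), (0.0, 0.0), (-0.1, 0.0)]
contaminated = [(3.0, 3.1), (2.9, 3.0), (3.1, 2.9)]  # target-like outlier

d_clean = mean_hausdorff(clean_1, clean_2)
d_bad = mean_hausdorff(clean_1, contaminated)
print(d_clean < d_bad)  # the dissimilar (contaminated) sample gets rejected
```

Ranking candidate samples by such distances, as the method does after projecting out the target subspace, lets the most mutually similar samples be kept for clutter covariance estimation.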
Malik, Hitendra K., E-mail: hkmalik@physics.iitd.ac.in; Singh, Omveer; Dahiya, Raj P.
We have established a hot cathode arc discharge plasma system in which different stainless steel samples can be treated while monitoring the plasma parameters and nitriding parameters independently. In the present work, a mixture of 70% N2 and 30% H2 gases was fed into the plasma chamber, and the treatment time and substrate temperature were optimized for treating 304L stainless steel samples. X-ray diffraction, energy dispersive x-ray spectroscopy and micro-Vickers hardness testing were employed to determine the structure, surface composition and surface hardness of the treated samples.
Study of probe-sample distance for biomedical spectra measurement.
Wang, Bowen; Fan, Shuzhen; Li, Lei; Wang, Cong
2011-11-02
Fiber-based optical spectroscopy has been widely used for biomedical applications. However, the effect of probe-sample distance on the collection efficiency has not been well investigated. In this paper, we presented a theoretical model to maximize the illumination and collection efficiency in designing fiber optic probes for biomedical spectra measurement. This model was in general applicable to probes with single or multiple fibers at an arbitrary incident angle. In order to demonstrate the theory, a fluorescence spectrometer was used to measure the fluorescence of human finger skin at various probe-sample distances. The fluorescence spectrum and the total fluorescence intensity were recorded. The theoretical results show that for single fiber probes, contact measurement always provides the best results. For multi-fiber probes, however, there is an optimal probe distance. When a 400-μm excitation fiber is used to deliver the light to the skin and another six 400-μm fibers surrounding the excitation fiber are used to collect the fluorescence signal, the experimental results show that human finger skin has very strong fluorescence between 475 nm and 700 nm under 450 nm excitation. The fluorescence intensity is heavily dependent on the probe-sample distance and there is an optimal probe distance. We investigated a number of probe-sample configurations and found that contact measurement could be the primary choice for single-fiber probes, but was very inefficient for multi-fiber probes. There was an optimal probe-sample distance for multi-fiber probes. By carefully choosing the probe-sample distance, the collection efficiency could be enhanced by 5-10 times. Our experiments demonstrated that the experimental results of the probe-sample distance dependence of collection efficiency in multi-fiber probes were in general agreement with our theory.
Wei, Na
2015-01-01
Lightweight aggregate (LWA) production with sewage sludge and municipal solid waste incineration (MSWI) fly ash is an effective approach for waste disposal. This study investigated the stability of heavy metals in LWA made from sewage sludge and MSWI fly ash. Leaching tests were conducted to find out the effects of MSWI fly ash/sewage sludge (MSWI FA/SS) ratio, sintering temperature and sintering time. It was found that with the increase of MSWI FA/SS ratio, leaching rates of all heavy metals firstly decreased and then increased, indicating the optimal ratio of MSWI fly ash/sewage sludge was 2:8. With the increase of sintering temperature and sintering time, the heavy metal solidifying efficiencies were strongly enhanced by crystallization and chemical incorporations within the aluminosilicate or silicate frameworks during the sintering process. However, taking cost-savings and lower energy consumption into account, 1100 °C and 8 min were selected as the optimal parameters for sludge-containing LWA production. Furthermore, heavy metal leaching concentrations under these optimal LWA production parameters were found to be in the range of China’s regulatory requirements. It is concluded that heavy metals can be properly stabilized in LWA samples containing sludge and cannot be easily released into the environment again to cause secondary pollution. PMID:25961800
Mentana, Annalisa; Palermo, Carmen; Nardiello, Donatella; Quinto, Maurizio; Centonze, Diego
2013-01-09
In this work the optimization and application of a dual-amperometric biosensor for simultaneous monitoring of glucose and ethanol content, as quality markers in drinks and alcoholic fermentation media, are described. The biosensor is based on glucose oxidase (GOD) and alcohol oxidase (AOD) immobilized by co-cross-linking with bovine serum albumin (BSA) and glutaraldehyde (GLU) both onto a dual gold electrode, modified with a permselective overoxidized polypyrrole film (PPYox). Response, rejection of interferents, and stability of the dual biosensor were optimized in terms of PPYox thickness, BSA, and enzyme loading. The biosensor was integrated in a flow injection system coupled with an at-line microdialysis fiber as a sampling tool. Flow rates inside and outside the fiber were optimized in terms of linear responses (0.01-1 and 0.01-1.5 M) and sensitivities (27.6 ± 0.4 and 31.0 ± 0.6 μA·M(-1)·cm(-2)) for glucose and ethanol. Excellent anti-interference characteristics, the total absence of "cross-talk", and good response stability under operational conditions allowed application of the dual biosensor in accurate real-time monitoring (at least 15 samples/h) of alcoholic drinks, white grape must, and woody biomass.
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea, aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time, allowed for the analysis of a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.
Use of an algal hydrolysate to improve enzymatic hydrolysis of anaerobically digested fiber
This study investigated the use of acid hydrolyzed algae to enhance the enzymatic hydrolysis of cellulosic biomass. We first characterized wastewater-grown algal samples and determined the optimal conditions (acid concentration, reaction temperature, and reaction time) for algal hydrolysis using di...
Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li
2014-01-01
Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields for its high sensitivity, wide measurement range and fast analysis. Optimization of the method is difficult, however, because many parameters affect sensitivity and peak broadening, and optimal conditions have generally been sought through trial experiments. The present paper proposes a mathematical model relating the operating parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass and from the characteristics of the hydride chemical reaction and the composition of the system. The model proved accurate when theoretical simulations were compared with experimental results for an arsanilic acid standard solution. Finally, the paper presents a relation map between the parameters and the sensitivity/broadening coefficients, concluding that gas-liquid separator (GLS) volume, carrier solution flow rate and sample loop volume are the dominant factors. Optimizing these three factors with the relation map improved the relative sensitivity 2.9-fold and reduced the relative broadening to 0.76 of its original value. This model can provide theoretical guidance for the optimization of experimental conditions.
Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan
2014-10-01
Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with supervised linear discriminant analysis (LDA) and, separately, unsupervised hierarchical clustering analysis (HCA) was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining 300 samples (100 negative and 200 positive) were used as the modeling set, which was then subdivided into calibration (50 negative and 100 positive, 150 samples) and prediction sets (50 negative and 100 positive, 150 samples) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, leaving a total of 264 smoothing modes for screening. Pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of the calibration and prediction sets and all SG smoothing modes, SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved.
For the optimal SG-PCA-LDA model, the recognition rates for positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
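The smoothing-then-classification pipeline can be sketched with the standard 5-point quadratic Savitzky-Golay coefficients and a nearest-centroid classifier standing in for the LDA/HCA step. The six-point "spectra" below are hypothetical toy data, not Vis-NIR measurements.

```python
import math

# Standard 5-point quadratic Savitzky-Golay smoothing coefficients
SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def sg_smooth(spectrum):
    """Apply 5-point SG smoothing; the two points at each end are
    left unsmoothed for simplicity."""
    out = list(spectrum)
    for i in range(2, len(spectrum) - 2):
        out[i] = sum(c * spectrum[i + k - 2] for k, c in enumerate(SG5))
    return out

def nearest_centroid(train, labels, query):
    """Toy stand-in for the LDA/HCA step: classify by nearest class mean."""
    classes = sorted(set(labels))
    centroids = {}
    for cls in classes:
        members = [x for x, l in zip(train, labels) if l == cls]
        centroids[cls] = [sum(col) / len(col) for col in zip(*members)]
    return min(classes, key=lambda cls: math.dist(query, centroids[cls]))

# Hypothetical 6-point "spectra" for the two classes
pos = [[1.0, 2.1, 3.0, 3.1, 2.0, 1.1], [1.1, 2.0, 3.1, 3.0, 2.1, 1.0]]
neg = [[2.0, 1.0, 0.9, 1.1, 1.0, 2.1], [2.1, 1.1, 1.0, 0.9, 1.1, 2.0]]
train = [sg_smooth(s) for s in pos + neg]
labels = ["transgenic"] * 2 + ["non-transgenic"] * 2
query = sg_smooth([1.0, 2.0, 3.0, 3.0, 2.0, 1.0])
print(nearest_centroid(train, labels, query))
```

The paper's screening over 264 smoothing modes amounts to repeating this fit with different window lengths, polynomial orders and derivatives, and keeping the mode with the best average prediction effect.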
Support, shape and number of replicate samples for tree foliage analysis.
Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu
2003-06-01
Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the type I error rate (alpha), the type II error rate (beta, with power = 1 - beta), the effect size (ES), the number of replicate samples, and sample support, a feature that is often overlooked. The number of replicates, alpha, beta, ES, and sample support are interconnected. The effect of the sample support and its shape on the required number of replicate samples was investigated by means of a resampling method. The method was applied to a simulated distribution of Cd in the crown of a Salix fragilis L. tree. Increasing the dimensions of the sample support results in a decrease in the variance of the element concentration under study. Analysis of the variance is often the foundation of statistical tests; therefore, valid statistical testing requires the use of a fixed sample support during the experiment. This requirement might be difficult to meet in time-series analyses and long-term monitoring programs. Sample supports that have their largest dimension in the direction with the largest heterogeneity, i.e. the direction representing the crown height, will give more accurate results than supports with other shapes. Taking the relationships between the sample support and the variance of the element concentrations in tree crowns into account provides guidelines for sampling efficiency in terms of precision and costs. In terms of time, the optimal support to test whether the average Cd concentration of the crown exceeds a threshold value is 0.405 m3 (alpha = 0.05, beta = 0.20, ES = 1.0 mg kg(-1) dry mass). The average weight of this support is 23 g dry mass, and 11 replicate samples need to be taken. It should be noted that in this case the optimal support applies to Cd under conditions similar to those of the simulation, but not necessarily to all examinations for this tree species, element, and hypothesis test.
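The interplay of alpha, beta, ES and replicate number follows the usual z-test sample-size formula n = ((z_{1-α/2} + z_{1-β})·σ/ES)². A sketch with a hypothetical standard deviation follows; the paper's figure of 11 replicates comes from the variance of its simulated 0.405 m³ support, which is not reproduced here.

```python
from math import ceil
from statistics import NormalDist

def replicates_needed(sigma, effect, alpha=0.05, beta=0.20):
    """Replicates for a two-sided one-sample z-test to detect `effect`
    with power 1 - beta, given between-sample standard deviation `sigma`."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    return ceil(((z_a + z_b) * sigma / effect) ** 2)

# Hypothetical sigma of 1.0 mg/kg at the chosen support
print(replicates_needed(sigma=1.0, effect=1.0))
```

Because n scales with σ², halving the variance by enlarging the sample support (as the paper observes) roughly halves the required number of replicates, which is the cost trade-off the analysis quantifies.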
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius.
Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
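Planners in the PRM*/FMT* family connect each sample to neighbors within a radius that shrinks as r_n ∝ (log n / n)^(1/d), which is where the dependence on sample count n and dimension d in the convergence-rate bound comes from. A sketch (the constant γ below is purely illustrative, not the paper's tuned value):

```python
import math

def connection_radius(n, d, gamma=1.1):
    """Connection radius r_n = gamma * (log n / n)^(1/d), of the form used
    by sampling-based planners such as PRM* and FMT*. The constant gamma
    depends on the free-space volume; 1.1 is an arbitrary illustration."""
    return gamma * (math.log(n) / n) ** (1.0 / d)

# The radius shrinks as more samples are drawn, and shrinks far more
# slowly in higher-dimensional configuration spaces:
r_2d = [connection_radius(n, d=2) for n in (100, 1000, 10000)]
r_6d = [connection_radius(n, d=6) for n in (100, 1000, 10000)]
print([round(r, 3) for r in r_2d], [round(r, 3) for r in r_6d])
```

The slow shrinkage in high dimensions is one reason the paper's experiments emphasize high-dimensional spaces: many more samples are needed before neighborhoods become local.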
Ji, Julie L; Holmes, Emily A; Blackwell, Simon E
2017-01-01
Optimism is associated with positive outcomes across many health domains, from cardiovascular disease to depression. However, we know little about cognitive processes underlying optimism in psychopathology. The present study tested whether the ability to vividly imagine positive events in one's future was associated with dispositional optimism in a sample of depressed adults. Cross-sectional and longitudinal analyses were conducted, using baseline (all participants, N=150) and follow-up data (participants in the control condition only, N=63) from a clinical trial (Blackwell et al., 2015). Vividness of positive prospective imagery, assessed on a laboratory-administered task at baseline, was significantly associated with both current optimism levels at baseline and future (seven months later) optimism levels, including when controlling for potential confounds. Even when depressed, those individuals able to envision a brighter future were more optimistic, and regained optimism more quickly over time, than those less able to do so at baseline. Strategies to increase the vividness of positive prospective imagery may aid development of mental health interventions to boost optimism. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Zhu, Fengxiang; Pan, Zaifa; Hong, Chunlai; Wang, Weiping; Chen, Xiaoyang; Xue, Zhiyong; Yao, Yanlai
2016-12-01
Changes in volatile organic compound contents in compost samples during pig manure composting were studied using a headspace, solid-phase micro-extraction method (HS-SPME) followed by gas chromatography with mass spectrometric detection (GC/MS). Parameters affecting the SPME procedure were optimized as follows: the coating was carbon molecular sieve/polydimethylsiloxane (CAR/PDMS) fiber, the temperature was 60°C and the time was 30 min. Under these conditions, 87 compounds were identified from 17 composting samples. Most of the volatile components could only be detected before day 22. However, benzenes, alkanes and alkenes increased and eventually stabilized after day 22. Phenol and acid substances, which are important factors for compost quality, were almost undetectable on day 39 in natural compost (NC) samples and on day 13 in maggot-treated compost (MC) samples. Our results indicate that the approach can be effectively used to determine the composting times by analysis of volatile substances in compost samples. An appropriate composting time not only ensures the quality of compost and reduces the loss of composting material but also reduces the generation of hazardous substances. The appropriate composting times for MC and NC were approximately 22 days and 40 days, respectively, during the summer in Zhejiang. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wan Ibrahim, Wan Nazihah; Sanagi, Mohd Marsin; Mohamad Hanapi, Nor Suhaila; Kamaruzaman, Sazlinda; Yahaya, Noorfatimah; Wan Ibrahim, Wan Aini
2018-06-07
We describe the preparation, characterization and application of a composite film adsorbent based on blended agarose-chitosan-multi-walled carbon nanotubes for the preconcentration of selected non-steroidal anti-inflammatory drugs in aqueous samples before determination by high-performance liquid chromatography with UV detection. The composite film showed a high surface area (4.0258 m²/g) and strong hydrogen bonding between the multi-walled carbon nanotubes and the agarose/chitosan matrix, which prevent adsorbent deactivation and ensure long-term stability. Several parameters, namely, sample pH, addition of salt, extraction time, desorption solvent and concentration of multi-walled carbon nanotubes in the composite film, were optimized using a one-factor-at-a-time approach. The optimum extraction conditions were as follows: isopropanol as conditioning solvent, 10 mL of sample solution at pH 2, extraction time of 30 min, stirring speed of 600 rpm, 100 μL of isopropanol as desorption solvent, desorption time of 5 min under ultrasonication, and 0.4% w/v of composite film. Under optimized conditions, the calibration curve showed good linearity in the range of 1-500 ng/mL (r² = 0.997-0.999), limits of detection of 0.89-8.05 ng/mL were obtained, and relative standard deviations were < 4.59% (n = 3) for the determination of naproxen, diclofenac sodium salt and mefenamic acid. This article is protected by copyright. All rights reserved.
Du, Li-Jing; Huang, Jian-Ping; Wang, Bin; Wang, Chen-Hui; Wang, Qiu-Yan; Hu, Yu-Han; Yi, Ling; Cao, Jun; Peng, Li-Qing; Chen, Yu-Bo; Zhang, Qi-Dong
2018-06-04
A rapid, simple and efficient sample extraction method based on micro-matrix-solid-phase dispersion (micro-MSPD) was applied to the extraction of polyphenols from pomegranate peel. Five target analytes were determined by ultra-high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry. Carbon molecular sieve (CMS) was used as a dispersant for the first time to improve extraction efficiency in micro-MSPD. The major micro-MSPD parameters, such as the type and amount of dispersant, grinding time, and the type and volume of elution solvent, were studied and optimized. Under optimized conditions, 26 mg of pomegranate peel was dispersed with 32.5 mg of CMS, the grinding time was 90 s, and the dispersed sample was eluted with 100 μL of methanol. The proposed method showed good linearity between analyte concentrations and peak areas (coefficient of determination r² > 0.990), limits of detection as low as 3.2 ng/mL, and spiking recoveries between 88.1% and 106%. Satisfactory results were obtained for the extraction of gallic acid, punicalagin A, punicalagin B, catechin and ellagic acid from pomegranate peel, demonstrating the reliability and high sensitivity of this approach. This article is protected by copyright. All rights reserved.
A novel continuous toxicity test system using a luminously modified freshwater bacterium.
Cho, Jang-Cheon; Park, Kyung-Je; Ihm, Hyuk-Soon; Park, Ji-Eun; Kim, Se-Young; Kang, Ilnam; Lee, Kyu-Ho; Jahng, Deokjin; Lee, Dong-Hun; Kim, Sang-Jong
2004-09-15
An automated continuous toxicity test system was developed using a recombinant bioluminescent freshwater bacterium. The groundwater-borne bacterium, Janthinobacterium lividum YH9-RC, was modified with luxAB and optimized for toxicity tests using different kinds of organic carbon compounds and heavy metals. luxAB-marked YH9-RC cells were much more sensitive (on average 7.3-8.6 times) to the chemicals used for toxicity detection than the marine Vibrio fischeri cells used in the Microtox assay. Toxicity tests for wastewater samples using the YH9-RC-based assay showed that EC50-5 min values were lowest in an untreated raw wastewater sample (23.9 +/- 12.8%) and highest in an effluent sample (76.7 +/- 14.9%). Lyophilization conditions were optimized in 384-multiwell plates containing bioluminescent bacteria that were pre-incubated for 15 min in 0.16 M trehalose prior to freeze-drying, increasing the recovery of bioluminescence and viability by 50%. Luminously modified cells exposed to a continuous phenol or wastewater stream showed a rapid decrease in bioluminescence, which fell below the detectable range within 1 min. An advanced toxicity test system, featuring automated real-time toxicity monitoring and alerting functions, was designed and finely tuned. This novel continuous toxicity test system can be used for real-time biomonitoring of water toxicity, and can potentially serve as a biological early warning system.
Pizarro, C; Pérez-del-Notario, N; González-Sáiz, J M
2010-09-24
A simple, accurate and sensitive method based on headspace solid-phase microextraction (HS-SPME) coupled to gas chromatography-tandem mass spectrometry (GC-MS/MS) was developed for the analysis of 4-ethylguaiacol, 4-ethylphenol, 4-vinylguaiacol and 4-vinylphenol in beer. The effect of the presence of CO2 in the sample on the extraction of analytes was examined. The influence of different fibre coatings, salt addition and stirring on extraction efficiency was also evaluated. Divinylbenzene/carboxen/polydimethylsiloxane was selected as the extraction fibre and was used to evaluate the influence of exposure time, extraction temperature and sample volume/total volume ratio (Vs/Vt) by means of a central composite design (CCD). The optimal conditions identified were 80 °C for extraction temperature, 55 min for extraction time and 6 mL of beer (Vs/Vt 0.30). Under optimal conditions, the proposed method showed satisfactory linearity (correlation coefficients between 0.993 and 0.999), precision (between 6.3% and 9.7%) and detection limits (lower than those previously reported for volatile phenols in beers). The method was applied successfully to the analysis of beer samples. To our knowledge, this is the first time that a HS-SPME based method has been developed to determine these four volatile phenols simultaneously in beers. Copyright 2010 Elsevier B.V. All rights reserved.
2017-01-01
The present study was conducted to optimize power ultrasound processing for maximizing diastase activity and minimizing hydroxymethylfurfural (HMF) content in honey using response surface methodology. An experimental design with treatment time (1-15 min), amplitude (20-100%) and volume (40-80 mL) as independent variables under controlled temperature conditions was studied, and it was concluded that a treatment time of 8 min, an amplitude of 60% and a volume of 60 mL gave optimal diastase activity and HMF content, i.e. 32.07 Schade units and 30.14 mg/kg, respectively. Further thermal profile analyses were performed with initial heating temperatures of 65, 75, 85 and 95 °C until the honey temperature reached 65 °C, followed by a holding time of 25 min at 65 °C, and the results were compared with the thermal profile of honey treated with optimized power ultrasound. Quality characteristics such as moisture, pH, diastase activity, HMF content, colour parameters and total colour difference were least affected by the optimized power ultrasound treatment. Microbiological analysis also showed lower counts of aerobic mesophilic bacteria in ultrasonically treated honey than in thermally processed honey samples, and complete destruction of coliforms, yeasts and moulds. Thus, it was concluded that power ultrasound under the suggested operating conditions is an alternative nonthermal processing technique for honey. PMID:29540991
Optimization of Milling Parameters Employing Desirability Functions
NASA Astrophysics Data System (ADS)
Ribeiro, J. L. S.; Rubio, J. C. Campos; Abrão, A. M.
2011-01-01
The principal aim of this paper is to investigate the influence of tool material (one cermet and two coated carbide grades), cutting speed and feed rate on the machinability of hardened AISI H13 hot work steel, in order to identify the cutting conditions which lead to optimal performance. A multiple response optimization procedure based on tool life, surface roughness, milling forces and the machining time (required to produce a sample cavity) was employed. The results indicated that the TiCN-TiN coated carbide and cermet presented similar results concerning the global optimum values for cutting speed and feed rate per tooth, outperforming the TiN-TiCN-Al2O3 coated carbide tool.
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m.
We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip-harvest data for two clip-harvest areas of different sizes (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e. estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
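The sample-size rule described above (estimating the mean to within ±10% with 95% confidence) can be sketched as follows; this is a minimal sketch using the standard normal approximation, and the biomass mean and standard deviation are hypothetical, not NEON values.

```python
import math

def required_sample_size(mean, sd, rel_margin=0.10, z=1.96):
    """Smallest n such that the normal-approximation confidence interval
    half-width z * sd / sqrt(n) is within rel_margin of the mean."""
    if mean <= 0 or sd < 0:
        raise ValueError("mean must be positive and sd non-negative")
    return math.ceil((z * sd / (rel_margin * mean)) ** 2)

# Hypothetical clip-harvest biomass: mean 120 g/m^2, sd 48 g/m^2 (CV = 0.4)
n = required_sample_size(120.0, 48.0)
# n = ceil((1.96 * 0.4 / 0.10)^2) = 62
```

Note that the required n depends only on the coefficient of variation (sd/mean), which is why pilot-phase characterization data are enough to set operational sampling effort.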
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
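A minimal, brute-force sketch of the budget-constrained portfolio selection idea (a 0/1 integer program maximizing expected NPV under a cost budget); the trial names, costs and values are invented for illustration, and the paper's sample-size-dependent values, schedules and Bayesian inputs are omitted.

```python
from itertools import combinations

def optimal_portfolio(trials, budget):
    """Pick the subset of phase 3 trials maximizing total expected NPV
    under a total-cost budget (brute-force 0/1 integer program).

    trials: list of (name, cost, expected_npv) tuples."""
    best_value, best_set = 0.0, ()
    for r in range(len(trials) + 1):
        for subset in combinations(trials, r):
            cost = sum(t[1] for t in subset)
            value = sum(t[2] for t in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset
    return best_value, [t[0] for t in best_set]

# Illustrative numbers (not from the paper): three candidate trials,
# budget of 150 cost units.
trials = [("A", 100, 300.0), ("B", 60, 200.0), ("C", 80, 220.0)]
best_value, chosen = optimal_portfolio(trials, 150)
# best under budget 150: B + C (cost 140, value 420)
```

Real portfolios with many trials and sample-size choices need an IP solver rather than enumeration, which is what the paper's formulation provides.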
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct's aerodynamic performance and, by reducing design time and computational cost, can also serve as a useful tool for engineering designers in wider fields of mechanical design.
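The surrogate-based workflow (sample the design space, fit a response surface, optimize the surrogate) can be sketched in a one-variable toy version; the objective function below is a stand-in for the expensive CFD evaluation, and a grid search replaces the genetic algorithm for brevity.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal equations."""
    # Build the 3x3 system A c = b, with A[j][k] = sum x^(j+k), b[j] = sum y*x^j.
    s = [sum(x ** p for x in xs) for p in range(5)]
    A = [[s[j + k] for k in range(3)] for j in range(3)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(3)]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arv - f * aiv for arv, aiv in zip(A[r], A[i])]
            b[r] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, 3))) / A[i][i]
    return c  # [c0, c1, c2]

def minimize_surrogate(c, lo, hi, n=4001):
    """Grid search over the cheap surrogate (a GA would handle harder surfaces)."""
    best_x, best_f = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        f = c[0] + c[1] * x + c[2] * x * x
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy objective standing in for the CFD solver: f(x) = (x - 2)^2 + 1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # uniform design points
ys = [(x - 2.0) ** 2 + 1.0 for x in xs]
coeffs = fit_quadratic(xs, ys)
x_opt, f_opt = minimize_surrogate(coeffs, 0.0, 4.0)
# x_opt ≈ 2.0, f_opt ≈ 1.0
```

The point of the surrogate is that, once fitted, it can be queried thousands of times by the optimizer at negligible cost compared to rerunning the solver.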
NASA Astrophysics Data System (ADS)
Marusak, Piotr M.; Kuntanapreeda, Suwat
2018-01-01
The paper considers the application of a neural-network-based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such plants imply that a relatively short sampling time should be used. In such a case, however, finding the control value numerically may be too time-consuming. Therefore, the paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution coincides with that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In this approach, constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than with a standard numerical quadratic programming routine, although very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
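A minimal sketch of the core idea, solving the MPC quadratic program by integrating a differential equation to its equilibrium; the cost matrix, bounds and step size below are placeholders, with a simple box constraint handled by projection at each Euler step.

```python
def solve_qp_gradient_flow(Q, c, lo, hi, dt=0.01, steps=5000):
    """Solve min 0.5*u'Qu + c'u subject to lo <= u_i <= hi by integrating
    the projected gradient flow du/dt = -(Qu + c), a continuous-time
    'dynamic neural network' whose equilibrium is the QP solution.
    Pure-Python sketch for small problems (Q given as a list of rows)."""
    n = len(c)
    u = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * u[j] for j in range(n)) + c[i] for i in range(n)]
        # Euler step followed by projection onto the box constraints.
        u = [min(hi, max(lo, u[i] - dt * grad[i])) for i in range(n)]
    return u

# 2-variable example: min 0.5*(u1^2 + u2^2) - u1 - 3*u2 with |u_i| <= 2.
# The unconstrained optimum is (1, 3); the box clips the second coordinate.
u = solve_qp_gradient_flow([[1.0, 0.0], [0.0, 1.0]], [-1.0, -3.0], -2.0, 2.0)
# u ≈ [1.0, 2.0]
```

In hardware the same dynamics can run as an analog or parallel digital circuit, which is why this formulation suits the short sampling times of electromechanical plants.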
Biswas, Rajib; Teller, Philip J; Ahring, Birgitte K
2015-09-01
The logging and lumbering industry in the Pacific Northwest generates huge amounts of forest residues, offering an inexpensive raw material for biorefineries. Wet explosion (WEx) pretreatment was applied to this recalcitrant biomass, and process conditions including temperature (170-190 °C), time (10-30 min) and oxygen loading (0.5-7.5% of DM) were optimized through an experimental design. The optimal pH for enzymatic hydrolysis of the pretreated samples was determined and a complete mass balance was evaluated. Results indicated that cellulose digestibility improved under all conditions tested, with maximum digestibility achieved at 190 °C, 30 min and an oxygen loading of 7.5%. The glucose yield at the optimal pH of 5.5 was 63.3%, with excellent recoveries of cellulose and lignin of 99.9% and 96.3%, respectively. Hemicellulose sugar recoveries for xylose and mannose were 69.2% and 76.0%, respectively, indicating that WEx is capable of producing relatively high sugar yields even from recalcitrant forest residues. Copyright © 2015 Elsevier Ltd. All rights reserved.
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
Optimized, Budget-constrained Monitoring Well Placement Using DREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonkofski, Catherine M. R.; Davidson, Casie L.; Rodriguez, Luke R.
Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with their own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation.
Applying this method, DREAM identified the most cost-effective ensemble with 14 monitoring locations. While this preliminary study applied relatively simplistic cost and technology assumptions, it provides an exciting proof-of-concept for the application of DREAM to questions of cost-optimized MVA system design that are informed not only by site-specific costs and technology options, but also by reservoir simulation results developed during site characterization and operation.
Fast and Accurate Support Vector Machines on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Narasimhan, Jayenthi; Holder, Larry
Support Vector Machines (SVM) is a supervised Machine Learning and Data Mining (MLDM) algorithm, which has become ubiquitous largely due to its high accuracy and obliviousness to dimensionality. The objective of SVM is to find an optimal boundary, also known as a hyperplane, which separates the samples (examples in a dataset) of different classes by a maximum margin. Usually, very few samples contribute to the definition of the boundary. However, existing parallel algorithms use the entire dataset for finding the boundary, which is sub-optimal for performance reasons. In this paper, we propose a novel distributed-memory algorithm to eliminate the samples which do not contribute to the boundary definition in SVM. We propose several heuristics, ranging from early (aggressive) to late (conservative) elimination of samples, such that the overall time for generating the boundary is reduced considerably. In a few cases, a sample may be eliminated (shrunk) pre-emptively, potentially resulting in an incorrect boundary. We propose a scalable approach to synchronize the necessary data structures such that the proposed algorithm maintains its accuracy. We consider the necessary trade-offs of single/multiple synchronization using in-depth time-space complexity analysis. We implement the proposed algorithm using MPI and compare it with libsvm, the de facto sequential SVM software, which we enhance with OpenMP for multi-core/many-core parallelism. Our proposed approach shows excellent efficiency using up to 4096 processes on several large datasets such as the UCI HIGGS Boson dataset and the Offending URL dataset.
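The shrinking idea, dropping samples whose margin is already comfortably large so later iterations work on a smaller set, can be sketched as follows; the threshold, hyperplane and data are illustrative assumptions, not the paper's actual heuristics.

```python
def shrink_samples(X, y, w, b, margin_threshold=1.5):
    """Conservative shrinking heuristic: drop samples whose functional
    margin y_i * (w . x_i + b) already exceeds a threshold, since they
    are unlikely to become support vectors (which sit at margin <= 1).
    Returns the indices kept for the next (cheaper) training pass."""
    kept = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin <= margin_threshold:
            kept.append(i)
    return kept

# Hypothetical 1-D data with current hyperplane w = [1], b = 0:
X = [[-3.0], [-0.5], [0.8], [4.0]]
y = [-1, -1, 1, 1]
kept = shrink_samples(X, y, [1.0], 0.0)
# margins: 3.0, 0.5, 0.8, 4.0 -> the two far points are shrunk away
```

Aggressive variants would use a lower threshold (or an earlier, less accurate hyperplane), which is faster but risks the pre-emptive eliminations the abstract mentions.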
Roostaie, Ali; Allahnoori, Farzad; Ehteshami, Shokooh
2017-09-01
In this work, novel composite magnetic nanoparticles (CuFe2O4) were synthesized based on sol-gel combustion in the laboratory. Next, a simple production method was optimized for the preparation of the copper nanoferrites (CuFe2O4), which are stable in water, magnetically active, and have a high specific area used as sorbent material for organic dye extraction in water solution. CuFe2O4 nanopowders were characterized by field-emission scanning electron microscopy (SEM), FTIR spectroscopy, and energy dispersive X-ray spectroscopy. The size range of the nanoparticles obtained in such conditions was estimated by SEM images to be 35-45 nm. The parameters influencing the extraction of CuFe2O4 nanoparticles, such as desorption solvent, amount of sorbent, desorption time, sample pH, ionic strength, and extraction time, were investigated and optimized. Under the optimum conditions, a linear calibration curve in the range of 0.75-5.00 μg/L with R2 = 0.9996 was obtained. The LOQ (10Sb) and LOD (3Sb) of the method were 0.75 and 0.25 μg/L (n = 3), respectively. The RSD for a water sample spiked with 1 μg/L rhodamine B was 3% (n = 5). The method was applied for the determination of rhodamine B in tap water, dishwashing foam, dishwashing liquid, and shampoo samples. The relative recovery percentages for these samples were in the range of 95-99%.
Huang, Yan-Feng; Liu, Qiao-Huan; Li, Kang; Li, Ying; Chang, Na
2018-03-01
We adopted a facile hydrofluoric acid-free hydro-/solvothermal method for the preparation of four magnetic iron(III)-based framework composites (MIL-101@Fe3O4-COOH, MIL-101-NH2@Fe3O4-COOH, MIL-53@Fe3O4-COOH, and MIL-53-NH2@Fe3O4-COOH). The four composites were applied to the magnetic separation and enrichment of fungicides (prochloraz, myclobutanil, tebuconazole, and iprodione) from environmental samples before high-performance liquid chromatographic analysis. MIL-101-NH2@Fe3O4-COOH showed more remarkable pre-concentration ability for the fungicides than the other three composites. The extraction parameters affecting enrichment efficiency, including extraction time, sample pH, elution time, and desorption solvent, were investigated and optimized. Under the optimized conditions, the correlation coefficients of the standard curves were all above 0.991, the limits of detection were 0.04-0.4 μg/L, and the relative standard deviations were below 10.2%. The recoveries from two real water samples ranged from 71.1% to 99.1% at a low spiking level (30 μg/L). Therefore, the MIL-101-NH2@Fe3O4-COOH composites are attractive for the rapid and efficient extraction of fungicides from environmental water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fast Filtration of Bacterial or Mammalian Suspension Cell Cultures for Optimal Metabolomics Results
Bordag, Natalie; Janakiraman, Vijay; Nachtigall, Jonny; González Maldonado, Sandra; Bethan, Bianca; Laine, Jean-Philippe; Fux, Elie
2016-01-01
The metabolome offers real-time detection of the adaptive, multi-parametric response of organisms to environmental changes, pathophysiological stimuli or genetic modifications, and thus rationalizes the optimization of cell cultures in bioprocessing. In bioprocessing, the measurement of physiological intracellular metabolite levels is imperative for successful applications. However, a sampling method applicable to all cell types with little to no validation effort which simultaneously offers high recovery rates, high metabolite coverage and sufficient removal of extracellular contaminations is still missing. Here, quenching, centrifugation and fast filtration were compared, and fast filtration in combination with a stabilizing washing solution was identified as the most promising sampling method. Different influencing factors such as filter type, vacuum pressure and washing solutions were comprehensively tested. The improved fast filtration method (MxP® FastQuench) followed by routine lipid/polar extraction delivers broad metabolite coverage and recovery reflecting physiological intracellular metabolite levels for different cell types, such as bacteria (Escherichia coli) as well as mammalian Chinese hamster ovary (CHO) and mouse myeloma (NS0) cells. The proposed MxP® FastQuench allows sampling, i.e. separation of cells from medium with washing and quenching, in less than 30 seconds and is robustly designed to be applicable to all cell types. The washing solution contains the carbon source (or the 13C-labeled carbon source) to avoid nutritional stress during sampling. The method is also compatible with automation, which would further reduce sampling times and the variability of metabolite profiling data. PMID:27438065
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
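A minimal time-domain least-mean-square adaptive filter illustrates the LMS principle underlying such filters (the abstract's spatial filter for IR clutter suppression is a 2-D application of the same criterion); the toy signal, tap count and step size here are assumptions for illustration.

```python
def lms_filter(x, d, n_taps=4, mu=0.05):
    """Least-mean-square adaptive FIR filter (sketch).

    x: input samples; d: desired samples.  At each step the taps move
    along the instantaneous gradient: w += mu * e * x_window.
    Returns the final tap weights and the error signal."""
    w = [0.0] * n_taps
    errors = []
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]         # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[n] - y                                   # a priori error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        errors.append(e)
    return w, errors

# Identify a known 2-tap system h = [0.5, -0.25] from a deterministic
# pseudo-random input:
x = [float((i * 37) % 11 - 5) for i in range(400)]
d = [0.0] + [0.5 * x[n] - 0.25 * x[n - 1] for n in range(1, 400)]
w, errors = lms_filter(x, d, n_taps=2, mu=0.01)
# w converges toward [0.5, -0.25] and the late errors approach zero
```

The appeal for poorly characterized clutter is exactly this: the filter needs no prior clutter model, only a step size small enough for stable adaptation.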
Recent advances in targeted RNA-Seq technology allow researchers to efficiently and cost-effectively obtain whole transcriptome profiles using picograms of mRNA from human cell lysates. Low mRNA input requirements and sample multiplexing capabilities has made time- and concentrat...
Design of access-tube TDR sensor for soil water content: Theory
USDA-ARS?s Scientific Manuscript database
The design of a cylindrical access-tube mounted waveguide was developed for in-situ soil water content sensing using time-domain reflectometry (TDR). To optimize the design with respect to sampling volume and losses, we derived the electromagnetic fields produced by a TDR sensor with cylindrical geo...
Yang, Qi; Franco, Christopher M M; Zhang, Wei
2015-10-01
Experiments were designed to validate the two common DNA extraction protocols (CTAB-based method and DNeasy Blood & Tissue Kit) used to effectively recover actinobacterial DNA from sponge samples in order to study the sponge-associated actinobacterial diversity. This was done by artificially spiking sponge samples with actinobacteria (spores, mycelia and a combination of the two). Our results demonstrated that both DNA extraction methods were effective in obtaining DNA from the sponge samples as well as the sponge samples spiked with different amounts of actinobacteria. However, it was noted that in the presence of the sponge, the bacterial 16S rRNA gene could not be amplified unless the combined DNA template was diluted. To test the hypothesis that the extracted sponge DNA contained inhibitors, dilutions of the DNA extracts were tested for six sponge species representing five orders. The results suggested that the inhibitors were co-extracted with the sponge DNA, and a high dilution of this DNA was required for the successful PCR amplification for most of the samples. The optimized PCR conditions, including primer selection, PCR reaction system and program optimization, further improved the PCR performance. However, no single PCR condition was found to be suitable for the diverse sponge samples using various primer sets. These results highlight for the first time that the DNA extraction methods used are effective in obtaining actinobacterial DNA and that the presence of inhibitors in the sponge DNA requires high dilution coupled with fine tuning of the PCR conditions to achieve success in the study of sponge-associated actinobacterial diversity.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
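As a sketch of the algebraic family of iterative reconstruction methods, the Kaczmarz iteration below solves a tiny consistent "ray-sum" system; the paper does not specify its exact algorithm, so this is only an illustration of why iterative schemes tolerate fewer projections than filtered back-projection.

```python
def kaczmarz(A, b, sweeps=200):
    """Kaczmarz iteration, a building block of algebraic/iterative CT
    reconstruction: sequentially project the estimate onto the hyperplane
    of each ray-sum equation a_i . x = b_i.  For consistent systems it
    converges even when each view contributes few equations."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            scale = (b_i - dot) / norm2
            x = [xi + scale * a for xi, a in zip(x, a_i)]
    return x

# Tiny two-pixel "phantom" with true values x = [3, 4], written as two
# ray sums: x0 + x1 = 7 and x0 - x1 = -1.
A = [[1.0, 1.0], [1.0, -1.0]]
b = [7.0, -1.0]
x = kaczmarz(A, b)
# x ≈ [3.0, 4.0]
```

Production reconstructions add regularization and stopping rules to handle noisy, under-determined data, which is where the acquisition-time savings reported above come from.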
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the differences in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction through over-sampling of subjects with discordant results and under-sampling of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of the individual sensitivities and specificities, or of the prevalence of true positive findings in the study population. The optimal sampling design is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
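The paper derives its own optimal design from analytic variance formulas; as a simpler, closely related illustration, the sketch below applies textbook Neyman allocation to a phase-II gold-standard sample across the four strata defined by cross-classifying the two binary classifiers. All stratum sizes and standard deviations are invented example numbers, chosen so that the discordant cells (which carry most information about the accuracy difference) get larger SDs.

```python
# Neyman-type allocation of a phase-II gold-standard sample across the four
# strata from cross-classifying two binary classifiers.
# NOTE: this is not the paper's derived design, just the related textbook
# allocation n_h proportional to N_h * S_h, with hypothetical numbers.

def neyman_allocation(N, S, n):
    """N: stratum sizes, S: per-stratum SDs of the quantity of interest,
    n: total phase-II sample size. Returns per-stratum sample sizes."""
    weights = [Nh * Sh for Nh, Sh in zip(N, S)]
    total = sum(weights)
    return [n * w / total for w in weights]

# Strata: (+,+) concordant, (+,-) discordant, (-,+) discordant, (-,-) concordant.
N = [500, 100, 100, 300]        # hypothetical stratum sizes
S = [0.1, 0.5, 0.5, 0.1]        # hypothetical per-stratum SDs
alloc = neyman_allocation(N, S, n=90)
```

Note that the discordant strata end up over-sampled relative to proportional allocation, mirroring the abstract's finding.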
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel
2017-11-01
Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and with the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR < 10^-4 M⊙/yr should host no Type II supernova events. In addition, a specific list of initial stellar masses can be useful in numerical simulations of stellar systems. For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided, allowing the calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
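As a heavily simplified sketch of the optimal-sampling idea, the code below discretizes a single power-law (Salpeter-like) IMF deterministically: the mass range is divided into contiguous intervals that each integrate to exactly one star, and each star is assigned the mean mass of its interval. The real GalIMF uses a multi-part Kroupa IMF and ties the most massive star to the cluster mass; the slope, mass limits and star count here are example values only.

```python
import math

# Deterministic ("optimal") sampling of a single power-law IMF xi(m) = k m^-alpha:
# split [m_min, m_max] into contiguous intervals each containing exactly one
# star, then assign each star the mean mass of its interval.
# NOTE: simplified sketch -- GalIMF uses a multi-part IMF and the
# M_ecl--m_max relation; alpha, the limits and n_stars are example values.

def optimal_sample(n_stars, m_min=0.5, m_max=10.0, alpha=2.35):
    p = 1.0 - alpha          # exponent for the number integral
    q = 2.0 - alpha          # exponent for the mass integral
    # normalize k so the number integral over [m_min, m_max] equals n_stars
    k = n_stars * p / (m_max**p - m_min**p)
    masses, top = [], m_max
    for _ in range(n_stars):
        # lower boundary b of the interval holding exactly one star:
        # integral_b^top k m^-alpha dm = 1
        b = (top**p - p / k) ** (1.0 / p)
        # mean stellar mass of the interval = (mass integral) / (one star)
        masses.append(k * (top**q - b**q) / q)
        top = b
    return masses

stars = optimal_sample(20)
```

Unlike random sampling, rerunning this produces the identical mass list, which is the "smaller scatter" property the abstract appeals to.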
Lakade, Sameer S; Borrull, Francesc; Furton, Kenneth G; Kabir, Abuzar; Marcé, Rosa Maria; Fontanals, Núria
2018-05-01
A novel sample preparation technique named capsule phase microextraction (CPME) is presented here. The technique utilizes a miniaturized microextraction capsule (MEC) as the extraction medium. The MEC consists of two conjoined porous tubular polypropylene membranes, one of which encapsulates the sorbent through sol-gel technology, while the other encapsulates a magnetic metal rod. As such, the MEC integrates both the extraction and stirring mechanisms into a single device. The aim of this article is to demonstrate the application potential of CPME as a sample preparation technique for the extraction of a group of personal care products (PCPs) from water matrices. Among the different sol-gel sorbent materials evaluated (UCON®, poly(caprolactone-dimethylsiloxane-caprolactone) (PCAP-DMS-CAP) and Carbowax 20M (CW-20M)), the CW-20M MEC demonstrated the best extraction performance for the selected PCPs. The extraction conditions for the sol-gel CW-20M MEC were optimized, including sample pH, stirring speed, addition of salt, extraction time, sample volume, liquid desorption solvent and desorption time. Under the optimal conditions, the sol-gel CW-20M MEC provided recoveries ranging between 47 and 90% for all analytes, except for ethylparaben, which showed a recovery of 26%. A method based on CPME with sol-gel CW-20M followed by liquid chromatography-tandem mass spectrometry was developed and validated for the extraction of PCPs from river water and effluent wastewater samples. When analyzing different environmental samples, some analytes, such as 2,4-dihydroxybenzophenone, 2,2-dihydroxy-4-4 methoxybenzophenone and 3-benzophenone, were found at low ng L-1 levels.
Nojavan, Saeed; Bidarmanesh, Tina; Mohammadi, Ali; Yaripour, Saeid
2016-03-01
In the present study, for the first time, electromembrane extraction followed by high performance liquid chromatography with ultraviolet detection was optimized and validated for the quantification of four gonadotropin-releasing hormone agonist anticancer peptides (alarelin, leuprolide, buserelin and triptorelin) in biological and aqueous samples. The parameters influencing electromigration were investigated and optimized. The supported liquid membrane consisted of 95% 1-octanol and 5% di-(2-ethylhexyl) phosphate immobilized in the pores of a hollow fiber. A 20 V electrical field was applied to make the analytes migrate from the sample solution at pH 7.0, through the supported liquid membrane, into an acidic acceptor solution at pH 1.0 located inside the lumen of the hollow fiber. Extraction recoveries in the range of 49-71% within a 15 min extraction time were obtained in different biological matrices, resulting in preconcentration factors in the range of 82-118 and satisfactory repeatability (7.1% < RSD < 19.8%). The method offers good linearity (2.0-1000 ng/mL) with regression coefficients higher than 0.998, and very low detection and quantitation limits of 0.2 and 0.6 ng/mL, respectively. Finally, it was applied to the determination and quantification of the peptides in human plasma and wastewater samples, with satisfactory results. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mogolodi Dimpe, K; Mpupa, Anele; Nomngongo, Philiswa N
2018-01-05
This work was chiefly motivated by the continuing consumption of antibiotics, which pose harmful effects on animals and human beings when present in water systems. In this study, activated carbon (AC) was used as a solid phase material for the removal of sulfamethoxazole (SMX) in wastewater samples. Microwave-assisted solid phase extraction (MASPE) was employed as the sample extraction method to better extract SMX from water samples, and the analysis of SMX was performed by UV-Vis spectrophotometry. The MASPE method was optimized using a two-level fractional factorial design by evaluating parameters such as pH, mass of adsorbent (MA), extraction time (ET), eluent ratio (ER) and microwave power (MP). Under optimized conditions, the limit of detection (LOD) and limit of quantification (LOQ) were 0.5 μg L-1 and 1.7 μg L-1, respectively, and intraday and interday precision, expressed in terms of relative standard deviation, were below 6%. The maximum adsorption capacity was 138 mg g-1 for SMX and the adsorbent could be reused eight times. Lastly, the MASPE method was applied for the removal of SMX in wastewater samples collected from a domestic wastewater treatment plant (WWTP) and river water. Copyright © 2017 Elsevier B.V. All rights reserved.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate-model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control for developing low-computational-cost transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while its computational workload is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization offers clear advantages over direct multi-objective optimization methods.
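The core idea (a cheap surrogate refined by a mixed exploration/exploitation sampling strategy) can be shown in a deliberately tiny form. The sketch below is a 1-D, single-objective toy, nothing like the paper's multi-objective low-thrust trajectory model: the objective function, sample counts and the alternation rule are all invented for illustration.

```python
# Toy 1-D illustration of surrogate-assisted optimization with a *mixed*
# sampling strategy: alternate "explore" samples (bisect the widest gap
# between evaluated points) with "exploit" samples (refine around the
# incumbent best point).
# NOTE: illustrative only -- the paper treats a multi-objective problem with
# a proper surrogate model; f and all settings below are invented.

def adaptive_minimize(f, lo, hi, n_init=5, n_iter=10):
    xs = sorted(lo + i * (hi - lo) / (n_init - 1) for i in range(n_init))
    ys = {x: f(x) for x in xs}                 # cache of "expensive" evaluations
    for t in range(n_iter):
        xs = sorted(ys)
        if t % 2 == 0:   # explore: bisect the widest gap between samples
            i = max(range(len(xs) - 1), key=lambda i: xs[i + 1] - xs[i])
            new = [(xs[i] + xs[i + 1]) / 2]
        else:            # exploit: bisect the gaps adjacent to the current best
            b = min(range(len(xs)), key=lambda i: ys[xs[i]])
            new = [(xs[j] + xs[b]) / 2
                   for j in (b - 1, b + 1) if 0 <= j < len(xs)]
        for x in new:
            ys[x] = f(x)
    best = min(ys, key=ys.get)
    return best, ys[best]

x_best, f_best = adaptive_minimize(lambda x: (x - 0.37) ** 2, 0.0, 1.0)
```

The mixed strategy is what keeps the evaluation budget small: pure exploitation can get trapped near an early incumbent, while pure exploration wastes the expensive evaluations that the surrogate is meant to save.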
NASA Astrophysics Data System (ADS)
Kocken, I.; Ziegler, M.
2017-12-01
Clumped isotope measurements on carbonates are a quickly developing and promising palaeothermometry proxy [1-3]. Developments in the field have brought down the necessary sample amount and improved the precision and accuracy of the measurements. These developments have included inter-laboratory comparison and the introduction of an absolute reference frame [4], determination of acid fractionation effects [5], correction for the pressure baseline [6], improved temperature calibrations [2], and, most recently, new approaches to improve efficiency in terms of sample gas usage [7]. However, large-scale application of clumped isotope thermometry is still hampered by the large sample amounts required and by the time-consuming analysis; in general, much of the measurement time goes into standards. Here we present a study on the optimal ratio between standard and sample measurements using the Kiel Carbonate Device method. We also consider the optimal initial signal intensity. We analyse ETH-standard measurements from several months to determine the measurement regime with the highest precision and optimised measurement time management. References: 1. Eiler, J. M., Earth Planet. Sci. Lett. 262, 309-327 (2007). 2. Kelson, J. R., et al., Geochim. Cosmochim. Acta 197, 104-131 (2017). 3. Kele, S., et al., Geochim. Cosmochim. Acta 168, 172-192 (2015). 4. Dennis, K. J., et al., Geochim. Cosmochim. Acta 75, 7117-7131 (2011). 5. Müller, I. A., et al., Chem. Geol. 449, 1-14 (2017). 6. Meckler, A. N., et al., Rapid Commun. Mass Spectrom. 28, 1705-1715 (2014). 7. Hu, B., et al., Rapid Commun. Mass Spectrom. 28, 1413-1425 (2014).
2014-08-01
The capabilities of two separation techniques, capillary electrophoresis (CE) and high performance liquid chromatography (HPLC), were compared for the analysis of four tetracyclines (tetracycline, chlorotetracycline, oxytetracycline and doxycycline). The pH and the concentration of the background electrolyte (BGE) were optimized for the analysis of the standard mixture sample, and the effects of separation voltage and water matrix (pH value and hardness) were investigated. In hydrodynamic injection (HDI) mode, good quantitative linearity and baseline separation within 9.0 min were obtained for the four tetracyclines under the optimal conditions; the analysis time was about half that of HPLC. The limits of detection (LODs) were in the range of 0.28-0.62 mg/L, and the relative standard deviations (RSDs) (n=6) of migration time and peak area were 0.42%-0.56% and 2.24%-2.95%, respectively. The recoveries for samples spiked into tap water and fishpond water were in the ranges of 96.3%-107.2% and 87.1%-105.2%, respectively. In addition, a stacking method, field-amplified sample injection (FASI), was employed to improve the sensitivity, and the LOD was brought down to the range of 17.8-35.5 μg/L. With FASI stacking, the RSDs (n=6) of migration time and peak area were 0.85%-0.95% and 1.69%-3.43%, respectively. Owing to the advantages of simple sample pretreatment and fast analysis, CE is promising for the analysis of these antibiotics in environmental water.
Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang
2014-04-01
The number of microbial community samples is increasing exponentially. Data mining among microbial community samples could facilitate the discovery of valuable biological information that is still hidden in the massive data. However, current methods for comparison among microbial communities are limited in their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software package, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive numbers of microbial community samples. Our results show that GPU-Meta-Storms is able to compute the pair-wise similarity scores for 10 240 samples within 20 min, a speed-up of >17 000 times compared with a single-core CPU and >2600 times compared with a 16-core CPU. Therefore, the high performance of GPU-Meta-Storms could facilitate in-depth data mining among massive microbial community samples, and make real-time analysis and monitoring of temporal or conditional changes in microbial communities possible. GPU-Meta-Storms is implemented in CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.
Optimizing Fungal DNA Extraction Methods from Aerosol Filters
NASA Astrophysics Data System (ADS)
Jimenez, G.; Mescioglu, E.; Paytan, A.
2016-12-01
Fungi and fungal spores can be picked up from terrestrial ecosystems, transported long distances, and deposited into marine ecosystems. It is important to study dust-borne fungal communities because they can stay viable and affect the ambient microbial populations, which are key players in biogeochemical cycles. One of the challenges of studying dust-borne fungal populations is that aerosol samples contain low biomass, making it very difficult to extract good quality DNA. The aim of this project was to increase DNA yield by optimizing DNA extraction methods. We tested aerosol samples collected from Haifa, Israel (polycarbonate filter), Monterey Bay, CA (quartz filter) and Bermuda (quartz filter). Using the Qiagen DNeasy Plant Kit, we tested the effect of altering bead beating times and incubation times, adding three freeze-thaw steps, initially washing the filters with buffers for various lengths of time before using the kit, and adding a step with 30 minutes of sonication in 65 °C water. Adding three freeze-thaw steps, adding a sonication step, washing with phosphate buffered saline overnight, and increasing the incubation time to two hours, in that order, resulted in the highest increase in DNA for the samples from Israel (polycarbonate filter). The DNA yield of the samples from Monterey Bay (quartz filter) increased about 5-fold when washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer), adding a sonication step, and adding three freeze-thaw steps. The samples collected in Bermuda (quartz filter) had the highest increase in DNA yield from increasing incubation to 2 hours, increasing bead beating time to 6 minutes, and washing with buffers overnight (phosphate buffered saline and potassium phosphate buffer). Our results show that DNA yield can be increased by altering various steps of the Qiagen DNeasy Plant Kit protocol, but different types of filters collected at different sites respond differently to the alterations.
These results can serve as preliminary findings for the continued development of fungal DNA extraction methods. Developing these methods will be important, as dust storms are predicted to increase due to increasing droughts and anthropogenic activity, and the fungal communities of these dust storms are currently relatively understudied.
Souza Silva, Érica A; Saboia, Giovanni; Jorge, Nina C; Hoffmann, Camila; Dos Santos Isaias, Rosy Mary; Soares, Geraldo L G; Zini, Claudia A
2017-12-01
A headspace solid phase microextraction (HS-SPME) method combined with gas chromatography-mass spectrometry (GC/MS) was developed and optimized for the extraction and analysis of volatile organic compounds (VOC) of leaves and galls of Myrcia splendens. Through optimization of the main factors affecting HS-SPME efficiency, the divinylbenzene-carboxen-polydimethylsiloxane (DVB/Car/PDMS) coating was chosen as the optimum extraction phase, not only in terms of extraction efficiency but also for its broader analyte coverage. The optimum extraction temperature was 30 °C, while an extraction time of 15 min provided the best compromise between the extraction efficiencies of lower and higher molecular weight compounds. The optimized protocol was demonstrated to be capable of sampling plant material with high reproducibility, considering that most classes of analytes met the 20% RSD FDA criterion. The optimized method was employed for the analysis of three classes of M. splendens samples, generating a final list of 65 tentatively identified VOC, including alcohols, aldehydes, esters, ketones and phenol derivatives, as well as mono- and sesquiterpenes. Significant differences were evident amongst the volatile profiles obtained from non-galled leaves (NGL) and leaf-folding galls (LFG) of M. splendens. Several differences in the amounts of alcohols and aldehydes were detected between samples, particularly regarding quantities of green leaf volatiles (GLV). Alcohols represented about 14% of the compounds detected in gall samples, whereas in non-galled samples the alcohol content was below 5%. Phenol-derived compounds were virtually absent in reference samples, while in non-galled leaves and galls their content was around 0.2% and 0.4%, respectively. Likewise, methyl salicylate, a well-known signal of plant distress, accounted for 1.2% of the sample content of galls, whereas it was only present at trace levels in reference samples.
Chemometric analysis based on heatmaps associated with hierarchical cluster analysis (HCA) and principal component analysis (PCA) provided a suitable tool to differentiate the VOC profiles of plant material, and could open new perspectives and opportunities in agricultural and ecological studies for the detection and identification of herbivore-induced plant VOC emissions. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Das, Chandan; Das, Arijit; Kumar Golder, Animes
2016-10-01
The present work illustrates the microwave-assisted drying (MWAD) characteristics of aloe vera gel, combined with process optimization and artificial neural network modeling. The influence of microwave power (160-480 W), gel quantity (4-8 g) and drying time (1-9 min) on the moisture ratio was investigated. The drying of aloe gel exhibited typical diffusion-controlled characteristics with a predominant interaction between input power and drying time. A falling rate period was observed for the entire MWAD of aloe gel. A face-centered central composite design (FCCD) was used to develop a regression model to evaluate the effects of the process variables on the moisture ratio. The optimal MWAD conditions were established as a microwave power of 227.9 W, a sample amount of 4.47 g and a drying time of 5.78 min, corresponding to a moisture ratio of 0.15. A computer-simulated artificial neural network (ANN) model was generated for mapping between the process variables and the desired response. The Levenberg-Marquardt back-propagation algorithm with a 3-5-1 architecture gave the best prediction, and it showed a clear superiority over the FCCD model.
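The design class named here has a fixed structure that is easy to generate. Below is a sketch of a face-centred central composite design for k = 3 factors in coded units, mapped onto the factor ranges quoted in the abstract; the coding and the single centre point are illustrative choices, not the study's exact run table.

```python
from itertools import product

# Generate a face-centred central composite design (FCCD) in coded units for
# k = 3 factors (e.g. microwave power, gel quantity, drying time).
# An FCCD = 2^k factorial corners + 2k axial points on the cube faces
# (alpha = 1) + centre point(s). Coded -1/0/+1 map linearly onto each
# factor's real range. Number of centre points here is an arbitrary choice.

def fccd(k=3, n_center=1):
    corners = [list(p) for p in product((-1, 1), repeat=k)]
    axial = []
    for j in range(k):
        for a in (-1, 1):            # alpha = 1 puts axial points on the faces
            pt = [0] * k
            pt[j] = a
            axial.append(pt)
    centers = [[0] * k for _ in range(n_center)]
    return corners + axial + centers

def decode(pt, ranges):
    """Map coded levels to real factor values, e.g. power over 160-480 W."""
    return [lo + (c + 1) * (hi - lo) / 2 for c, (lo, hi) in zip(pt, ranges)]

design = fccd()
real = [decode(p, [(160, 480), (4, 8), (1, 9)]) for p in design]
```

For k = 3 with one centre point this gives 8 + 6 + 1 = 15 runs, enough to fit the full quadratic response surface used for optimization.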
Sereshti, Hassan; Heravi, Yeganeh Entezari; Samadi, Soheila
2012-08-15
Ultrasonic-assisted emulsification microextraction (USAEME) combined with inductively coupled plasma-optical emission spectrometry (ICP-OES) was used for preconcentration and determination of aluminum, bismuth, cadmium, cobalt, copper, iron, gallium, indium, nickel, lead, thallium and zinc in real water samples. Ammonium pyrrolidine dithiocarbamate (APDC) and carbon tetrachloride (CCl(4)) were used as the chelating agent and extraction solvent, respectively. The effective parameters (factors) of the extraction process, such as volume of extraction solvent, pH, sonication time, and concentration of chelating agent, were optimized by a small central composite design (CCD). The optimum conditions were found to be 98 μL for the extraction solvent, 1476 mg L(-1) for the chelating agent, 3.8 for pH and 9 min for the sonication time. Under the optimal conditions, the limits of detection (LODs) for Al, Bi, Cd, Co, Cu, Fe, Ga, In, Ni, Pb, Tl and Zn were 0.13, 0.48, 0.19, 0.28, 0.29, 0.27, 0.27, 0.38, 0.44, 0.47, 0.52 and 0.17 μg L(-1), respectively. The linear dynamic range (LDR) was 1-1000 μg L(-1) with determination coefficients of 0.991-0.998. Relative standard deviations (RSDs, C=200 μg L(-1), n=6) were between 1.87% and 5.65%. The proposed method was successfully applied to the extraction and determination of heavy metals in real water samples and satisfactory relative recoveries (90.3%-105.5%) were obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
Makino, Yoshinori; Watanabe, Michiko; Makihara, Reiko Ando; Nokihara, Hiroshi; Yamamoto, Noboru; Ohe, Yuichiro; Sugiyama, Erika; Sato, Hitoshi; Hayashi, Yoshikazu
2016-09-01
Limited sampling points for both amrubicin (AMR) and its active metabolite amrubicinol (AMR-OH) were simultaneously optimized using Akaike's information criterion (AIC) calculated by pharmacokinetic modeling. In this pharmacokinetic study, 40 mg/m(2) of AMR was administered as a 5-min infusion on three consecutive days to 21 Japanese lung cancer patients. Blood samples were taken at 0, 0.08, 0.25, 0.5, 1, 2, 4, 8 and 24 h after drug infusion, and AMR and AMR-OH concentrations in plasma were quantitated using high-performance liquid chromatography. The pharmacokinetic profile of AMR was characterized using a three-compartment model and that of AMR-OH using a one-compartment model following a first-order absorption process. These pharmacokinetic profiles were then integrated into one pharmacokinetic model for simultaneous fitting of AMR and AMR-OH. After fitting to the pharmacokinetic model, 65 combinations of four sampling points from the concentration profiles were evaluated for their AICs. Stepwise regression analysis was applied to select the sampling points for AMR and AMR-OH that best predict the area under the concentration-time curves (AUCs). Of the three combinations that yielded favorable AIC values, the combination 0.25, 2, 4 and 8 h gave the best AUC prediction for both AMR (R(2) = 0.977) and AMR-OH (R(2) = 0.886). The prediction error for AUC was less than 15%. The optimal limited sampling points for AMR and AMR-OH after AMR infusion were thus found to be 0.25, 2, 4 and 8 h, enabling less frequent blood sampling in further expanded pharmacokinetic studies of both AMR and AMR-OH. © 2016 John Wiley & Sons Australia, Ltd.
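The AIC-guided selection step can be illustrated on a deliberately small scale. The sketch below scores each candidate sampling time by how well the plasma concentration at that time predicts AUC via simple linear regression, using AIC = n·ln(RSS/n) + 2k, on synthetic one-compartment data; the paper instead evaluated four-point combinations on fitted multi-compartment models, and the dose, clearances and time grid here are invented.

```python
import math, random

# Toy AIC-guided limited-sampling design: on synthetic one-compartment PK
# data, score each candidate sampling time by how well C(t) predicts AUC via
# simple linear regression, with AIC = n*ln(RSS/n) + 2k.
# NOTE: the study used 4-point combinations and multi-compartment fits;
# dose, V, clearance range and the time grid are made-up example values.

random.seed(1)
dose, v = 100.0, 20.0
subjects = [random.uniform(2.0, 8.0) for _ in range(30)]   # clearances (L/h)
times = [0.25, 0.5, 1, 2, 4, 8, 24]                        # candidate points (h)

def aic_for_time(t):
    xs = [dose / v * math.exp(-(cl / v) * t) for cl in subjects]  # C(t)
    ys = [dose / cl for cl in subjects]                           # true AUC
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return n * math.log(rss / n) + 2 * 2        # k = 2 parameters (a, b)

aics = {t: aic_for_time(t) for t in times}
best_t = min(aics, key=aics.get)
```

The lowest-AIC time balances fit quality against model size; with several times per candidate set, the same criterion ranks whole combinations, which is the form used in the study.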
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x, x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
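A minimal pure-Python version of the idea is sketched below: an independence MH sampler whose proposal is a product of 1-D Gaussians, periodically refit to the walk history as a crude stand-in for the KL-optimal mean-field approximation. The toy target, tuning constants and adaptation schedule are all invented; note also that in rigorous use the adaptation must diminish over time to preserve ergodicity.

```python
import math, random

# Sketch: independence Metropolis-Hastings with a *product* (mean-field)
# proposal that is periodically refit to the walk history.
# NOTE: illustrative only -- the paper's KL-optimal update is replaced by a
# simple moment fit; the target and all constants below are invented.

random.seed(0)
log_pi = lambda z: -z[0] ** 2 / 2 - z[1] ** 2 / 8   # unnormalized toy target

def log_q(z, mu, sd):
    # log density (up to a constant) of the product-of-Gaussians proposal
    return sum(-((zi - m) ** 2) / (2 * s * s) - math.log(s)
               for zi, m, s in zip(z, mu, sd))

mu, sd = [0.0, 0.0], [3.0, 3.0]        # initial wide product proposal
x, lx = [0.0, 0.0], 0.0                # current state and its log pi
samples = []
for t in range(1, 20001):
    y = [random.gauss(m, s) for m, s in zip(mu, sd)]
    # independence-sampler acceptance ratio: pi(y) q(x) / (pi(x) q(y))
    la = log_pi(y) + log_q(x, mu, sd) - lx - log_q(y, mu, sd)
    if math.log(random.random()) < la:
        x, lx = y, log_pi(y)
    samples.append(list(x))
    if t % 1000 == 0:                  # adapt: refit product-proposal moments
        n = len(samples)
        mu = [sum(s[i] for s in samples) / n for i in range(2)]
        sd = [max(0.2, math.sqrt(sum((s[i] - mu[i]) ** 2
                                     for s in samples) / n)) for i in range(2)]
```

Because the toy target really is a product distribution, the adapted proposal approaches it and the acceptance rate rises as the walk proceeds; on correlated targets the product proposal can only match the marginals, which is exactly the mean-field limitation the abstract refers to.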
Evaluation of the quality of herbal teas by DART/TOF-MS.
Prchalová, J; Kovařík, F; Rajchl, A
2017-02-01
The paper focuses on the optimization, settings and validation of direct analysis in real time coupled with a time-of-flight detector for the evaluation of the quality of selected herbal teas (fennel, chamomile, nettle, linden, peppermint, thyme, lemon balm, marigold, sage, rose hip and St. John's wort). The ionization mode, the optimal ionization temperature and the type of solvent for sample extraction were optimized. The characteristic compounds of the analysed herbal teas (glycosides, flavonoids and phenolic and terpenic substances, such as chamazulene, anethole, menthol, thymol, salviol and hypericin) were detected. The obtained mass spectra were evaluated by multidimensional chemometric methods, such as cluster analysis, linear discriminant analysis and principal component analysis. The chemometric methods showed that the single-variety herbal teas were grouped according to their taxonomic affiliation. The developed method is suitable for quick identification of herbs and can potentially be used for assessing the quality and authenticity of herbal teas. Direct analysis in real time/time-of-flight-MS is also suitable for the evaluation of selected substances contained in the mentioned herbs and herbal products. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Sadovsky, A. V.; Davis, D.; Isaacson, D. R.
2012-01-01
We address the problem of navigating a set of moving agents, e.g. automated guided vehicles, through a transportation network so as to bring each agent to its destination at a specified time. Each pair of agents is required to be separated by a minimal distance, generally agent-dependent, at all times. The speed range, initial position, required destination, and required time of arrival at the destination for each agent are assumed to be provided. The movement of each agent is governed by a controlled differential equation (state equation). The problem consists of choosing for each agent a path and a control strategy so as to meet the constraints and reach the destination at the required time. This problem arises in various fields of transportation, including Air Traffic Management and train coordination, and in robotics. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver.
[Mission oriented diagnostic real-time PCR].
Tomaso, Herbert; Scholz, Holger C; Al Dahouk, Sascha; Splettstoesser, Wolf D; Neubauer, Heinrich; Pfeffer, Martin; Straube, Eberhard
2007-01-01
In out-of-area military missions, soldiers are potentially exposed to bacteria that are endemic in tropical areas and can be used as biological agents. It can be difficult to culture these bacteria due to sample contamination, low numbers of bacteria or pretreatment with antibiotics. Commercial biochemical identification systems are not optimized for these agents, which can result in misidentification. Immunological assays are often not commercially available or not specific. Real-time PCR assays are very specific and sensitive and can markedly shorten the time required to establish a diagnosis. Therefore, real-time PCRs for the identification of Bacillus anthracis, Brucella spp., Burkholderia mallei and Burkholderia pseudomallei, Francisella tularensis and Yersinia pestis have been developed. PCR results can be false negative due to inadequate clinical samples, low numbers of bacteria in samples, DNA degradation, inhibitory substances and inappropriate DNA preparation. Hence, it is crucial to cultivate the organisms as a prerequisite for adequate antibiotic therapy and typing of the agent. In a bioterrorist scenario, samples have to be treated according to the rules applied in forensic medicine and documentation has to be flawless.
Optimization conditions of samples saponification for tocopherol analysis.
Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto
2014-09-01
A full 2² factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and the percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α, β and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and with an agitation time of 2 h. Copyright © 2014 Elsevier Ltd. All rights reserved.
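The effect estimates behind such a duplicated 2² design can be sketched as follows; the response values, units, and coded levels here are illustrative stand-ins, not the study's data.

```python
# Sketch: effect estimation for a duplicated 2^2 factorial design.
# The response values below are illustrative, not the paper's data.
from itertools import product

# Coded levels: -1 = low (2 h, 60% KOH), +1 = high (4 h, 80% KOH)
runs = list(product([-1, 1], repeat=2))           # (time, KOH)
# Two replicates per run: tocopherol response (illustrative units)
y = {(-1, -1): [9.8, 10.2], (1, -1): [8.1, 7.9],
     (-1, 1): [8.4, 8.6], (1, 1): [6.0, 6.4]}

def effect(contrast):
    """Average response difference between the +1 and -1 contrast levels."""
    hi = [v for run in runs for v in y[run] if contrast(run) > 0]
    lo = [v for run in runs for v in y[run] if contrast(run) < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

time_effect = effect(lambda r: r[0])          # main effect of agitation time
koh_effect  = effect(lambda r: r[1])          # main effect of % KOH
interaction = effect(lambda r: r[0] * r[1])   # time x KOH interaction

# Negative main effects mirror the paper's finding that increasing
# either factor lowered the tocopherol responses.
```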
NASA Astrophysics Data System (ADS)
Song, Yanpo; Peng, Xiaoqi; Tang, Ying; Hu, Zhikun
2013-07-01
To improve the operation level of copper converters, optimal decision-making modeling for the copper matte converting process based on data mining is studied. In view of the characteristics of the process data, such as noise contamination and small sample size, a new robust improved ANN (artificial neural network) modeling method is proposed; taking into account the application purpose of the decision-making model, three new evaluation indexes, named support, confidence and relative confidence, are proposed; using real production data and the methods mentioned above, an optimal decision-making model for the blowing time of the S1 period (the 1st slag-producing period) is developed. Simulation results show that this model can significantly improve the converting quality of the S1 period, increasing the optimal probability from about 70% to about 85%.
Active learning based segmentation of Crohn's disease from abdominal MRI.
Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M
2016-05-01
This paper proposes a novel active learning (AL) framework, and combines it with semi-supervised learning (SSL), for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data spanning different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
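A query strategy of this kind can be sketched in a few lines; the specific combination rule below (entropy weighted by dissimilarity to the labeled set) is an illustrative assumption, not the paper's exact formulation, and all sample values are made up.

```python
# Sketch: active-learning query scoring combining classifier
# uncertainty with distance from already-labeled samples.
# The scoring rule is illustrative, not the paper's exact strategy.
import math

def entropy(p):
    """Binary prediction entropy: high when the classifier is unsure."""
    eps = 1e-12
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def query_score(prob, feat, labeled_feats):
    """Uncertainty weighted by dissimilarity to the labeled set."""
    d_min = min(math.dist(feat, f) for f in labeled_feats)
    return entropy(prob) * (1 - math.exp(-d_min))

# Unlabeled pool: (predicted P(disease), 2-D feature vector), made up
pool = [(0.51, (0.9, 0.2)), (0.98, (0.5, 0.5)), (0.50, (0.1, 0.1))]
labeled = [(0.1, 0.1), (0.2, 0.15)]

best = max(pool, key=lambda s: query_score(s[0], s[1], labeled))
# The chosen sample is both uncertain AND far from the labeled data.
```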
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
SUMMARY Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
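The error-detecting combinatorial scheme and single-cell debarcoding can be sketched as follows; the k-of-n parameters, intensities, and separation threshold are illustrative assumptions, not the protocol's actual values.

```python
# Sketch of single-cell debarcoding with an error-detecting k-of-n
# scheme (each sample is tagged in exactly k of n barcode channels).
# Channel counts, intensities, and thresholds are illustrative.
from itertools import combinations

N_CHANNELS, K = 4, 2
# Valid codes: all k-of-n channel subsets. A doublet tends to light up
# more than k channels, so it separates poorly from any valid code.
codes = {frozenset(c) for c in combinations(range(N_CHANNELS), K)}

def debarcode(intensities, min_separation=0.3):
    """Assign a cell to a barcode, or None for doublets/debris."""
    order = sorted(range(N_CHANNELS), key=lambda i: -intensities[i])
    top, rest = order[:K], order[K:]
    # Separation between the weakest "on" and strongest "off" channel:
    sep = intensities[top[-1]] - intensities[rest[0]]
    if sep < min_separation:
        return None                      # ambiguous: likely a doublet
    code = frozenset(top)
    return code if code in codes else None

assert debarcode([0.9, 0.8, 0.05, 0.1]) == frozenset({0, 1})
assert debarcode([0.9, 0.8, 0.7, 0.1]) is None   # doublet-like pattern
```

Scoring each cell individually, rather than gating populations, is what lets the separation cutoff remove doublets cell by cell.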
Bahar, Shahriyar; Es'haghi, Zarrin; Nezhadali, Azizollah; Banaei, Alireza; Bohlooli, Shahab
2017-04-15
In the present study, nano-sized titanium oxides were applied for the preconcentration and determination of Pb(II) in aqueous samples using hollow fiber based solid-liquid phase microextraction (HF-SLPME) combined with flame atomic absorption spectrometry (FAAS). In this work, the nanoparticles, dispersed in caprylic acid as an extraction solvent, were placed into a polypropylene porous hollow fiber segment supported by capillary forces and sonication. This membrane was in direct contact with solutions containing Pb(II). The experimental conditions affecting the extraction, such as pH, stirring rate, sample volume, and extraction time, were optimized. Under the optimal conditions, the performance of the proposed method was investigated for the determination of Pb(II) in food and water samples. The method was linear in the range of 0.6-3000 μg mL(-1). The relative standard deviation and relative recovery of Pb(II) were 4.9% and 99.3%, respectively (n=5). Copyright © 2016 Elsevier Ltd. All rights reserved.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
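The design criterion, maximizing expected relative entropy over candidate locations, can be illustrated on a toy problem; for a linear-Gaussian model the expected information gain has a closed form, which stands in here for the paper's MCMC and sparse-grid surrogate machinery. The plume model and all numbers are illustrative assumptions.

```python
# Sketch: pick the sampling location maximizing expected information
# gain (relative entropy) for a toy linear-Gaussian source problem.
# The closed form below replaces the paper's Monte Carlo estimation.
import math

PRIOR_VAR, NOISE_VAR = 4.0, 0.25   # unknown source strength / meas. noise

def sensitivity(loc, source=2.0, decay=1.0):
    """Toy plume: how strongly a unit source shows up at `loc`."""
    return math.exp(-decay * abs(loc - source))

def expected_info_gain(loc):
    """E[KL(posterior || prior)] for the linear-Gaussian model:
    0.5 * ln(1 + prior_var * g^2 / noise_var), g = sensitivity."""
    g = sensitivity(loc)
    return 0.5 * math.log(1.0 + PRIOR_VAR * g * g / NOISE_VAR)

candidates = [0.0, 1.0, 2.0, 3.0, 5.0]
best_loc = max(candidates, key=expected_info_gain)
# Sampling at the plume center is the most informative choice here.
```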
Bukhari, Mahwish; Awan, M. Ali; Qazi, Ishtiaq A.; Baig, M. Anwar
2012-01-01
This paper illustrates systematic development of a convenient analytical method for the determination of chromium and cadmium in tannery wastewater using laser-induced breakdown spectroscopy (LIBS). A new approach was developed by which liquid was converted into solid phase sample surface using absorption paper for subsequent LIBS analysis. The optimized values of LIBS parameters were 146.7 mJ for chromium and 89.5 mJ for cadmium (laser pulse energy), 4.5 μs (delay time), 70 mm (lens to sample surface distance), and 7 mm (light collection system to sample surface distance). Optimized values of LIBS parameters demonstrated strong spectral lines for each metal while keeping the background noise at a minimum level. The new method of preparing metal standards on absorption papers exhibited calibration curves with good linearity, with correlation coefficients (R²) in the range of 0.992 to 0.998. The developed method was tested on real tannery wastewater samples for determination of chromium and cadmium. PMID:22567570
Pierson, Stephen A; Trujillo-Rodríguez, María J; Anderson, Jared L
2018-05-29
An ionic-liquid-based in situ dispersive liquid-liquid microextraction method coupled to headspace gas chromatography and mass spectrometry was developed for the rapid analysis of ultraviolet filters. The chemical structures of five ionic liquids were specifically designed to incorporate various functional groups for the favorable extraction of the target analytes. Extraction parameters including ionic liquid mass, molar ratio of ionic liquid to metathesis reagent, vortex time, ionic strength, pH, and total sample volume were studied and optimized. The effect of the headspace temperature and volume during the headspace sampling step was also evaluated to increase the sensitivity of the method. The optimized procedure is fast as it only required ∼7-10 min per extraction and allowed for multiple extractions to be performed simultaneously. In addition, the method exhibited high precision, good linearity, and low limits of detection for six ultraviolet filters in aqueous samples. The developed method was applied to both pool and lake water samples attaining acceptable relative recovery values. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Teglia, Carla M; Gil García, María D; Galera, María Martínez; Goicoechea, Héctor C
2014-08-01
When determining endogenous compounds in biological samples, the lack of blank or analyte-free matrix samples involves the use of alternative strategies for calibration and quantitation. This article deals with the development, optimization and validation of a high performance liquid chromatography method for the determination of retinoic acid in plasma, obtaining at the same time information about its isomers, taking into account the basal concentration of these endobiotics. An experimental design was used for the optimization of three variables: mobile phase composition, flow rate and column temperature through a central composite design. Four responses were selected for optimization purposes (area under the peaks, quantity of peaks, analysis time and resolution between the first principal peak and the following one). The optimum conditions resulted in a mobile phase consisting of methanol 83.4% (v/v), acetonitrile 0.6% (v/v) and acid aqueous solution 16.0% (v/v); a flow rate of 0.68 mL min(-1) and a column temperature of 37.10 °C. Detection was performed at 350 nm by a diode array detector. The method was validated following a holistic approach that included not only the classical parameters related to method performance but also the robustness and the expected proportion of acceptable results lying inside predefined acceptability intervals, i.e., the uncertainty of measurements. The method validation results indicated a high selectivity and good precision characteristics that were studied at four concentration levels, with RSD less than 5.0% for retinoic acid (less than 7.5% for the LOQ concentration level), in intra- and inter-assay precision studies. Linearity was proved for a range from 0.00489 to 15.109 ng mL(-1) of retinoic acid and the recovery, which was studied at four different fortification levels in human plasma samples, varied from 99.5% to 106.5% for retinoic acid.
The applicability of the method was demonstrated by determining retinoic acid and obtaining information about its isomers in human and frog plasma samples from different origins. Copyright © 2014 Elsevier B.V. All rights reserved.
Separation-Compliant, Optimal Routing and Control of Scheduled Arrivals in a Terminal Airspace
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.
2013-01-01
We address the problem of navigating a set (fleet) of aircraft in an aerial route network so as to bring each aircraft to its destination at a specified time and with minimal distance separation assured between all aircraft at all times. The speed range, initial position, required destination, and required time of arrival at destination for each aircraft are assumed provided. Each aircraft's movement is governed by a controlled differential equation (state equation). The problem consists of choosing for each aircraft a path in the route network and a control strategy so as to meet the constraints and reach the destination at the required time. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver. The proposed model is a first step toward increasing the fidelity of continuous-time control models of air traffic in a terminal airspace. The Pontryagin Maximum Principle implies the polygonal shape of those portions of the state trajectories away from states in which one or more aircraft pairs are at minimal separation. The model also confirms the intuition that the narrower the allowed speed ranges of the aircraft, the smaller the space of optimal solutions, and that an instance of the optimal control problem may not have a solution at all (i.e., no control strategy that meets the separation requirement and other constraints).
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
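The posterior-based GO criterion can be sketched for exponential event times with conjugate gamma priors; the prior parameters, event counts, hazard-ratio threshold, and decision level below are illustrative assumptions (the paper derives the control prior meta-analytically from historical trials).

```python
# Sketch: a proof-of-concept GO decision on the posterior hazard ratio,
# exponential event times within groups, conjugate gamma priors.
# All counts, priors, and thresholds are illustrative.
import random

random.seed(1)

def posterior_rate_samples(events, exposure, a0, b0, n=20000):
    """Gamma(a0 + events, rate b0 + exposure) draws of an exponential
    event rate (gammavariate's second argument is the scale)."""
    a, b = a0 + events, b0 + exposure
    return [random.gammavariate(a, 1.0 / b) for _ in range(n)]

# Control arm: informative (historical) prior; treatment: vague prior.
ctrl = posterior_rate_samples(events=40, exposure=100.0, a0=30, b0=80.0)
trt  = posterior_rate_samples(events=25, exposure=110.0, a0=0.01, b0=0.01)

hr = [t / c for t, c in zip(trt, ctrl)]
prob_go = sum(h < 0.8 for h in hr) / len(hr)   # P(HR < 0.8 | data, prior)
go = prob_go > 0.6                             # pre-specified GO criterion
```

Note the decision uses the effect size (HR < 0.8) rather than a significance test, matching the abstract's framing.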
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p>0.05) to those for active sorbent-tube-based sampling. The sampling time of 30 min and fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
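The Fick's-first-law back-calculation used in TWA passive sampling follows C = nZ/(DAt); the numeric values below (adsorbed mass, diffusion coefficient, needle geometry) are illustrative assumptions, only the 30 min time and 5 mm retraction echo the study.

```python
# Sketch: time-weighted average (TWA) concentration from the mass
# adsorbed on a retracted SPME fiber via Fick's first law, C = n*Z/(D*A*t).
# All numeric inputs are illustrative, not the paper's measurements.
def twa_concentration(n_mass, Z, D, A, t):
    """n_mass: mass adsorbed (ng); Z: fiber retraction depth (cm);
    D: gas-phase diffusion coefficient (cm^2/s);
    A: needle opening cross-section (cm^2); t: sampling time (s).
    Returns the TWA concentration in ng/cm^3."""
    return n_mass * Z / (D * A * t)

# 30 min sampling at 5 mm retraction (the study's optimum), with an
# assumed diffusion coefficient and needle geometry:
c = twa_concentration(n_mass=2.0e-3, Z=0.5, D=0.08,
                      A=8.0e-4, t=30 * 60)
```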
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
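The filter/smoother pairing in the abstract can be sketched with a stripped-down local-level model; the paper's estimators also carry streamflow and seasonal terms, so the state equation and noise variances below are illustrative assumptions.

```python
# Sketch: on-line (Kalman filter) vs off-line (RTS smoother) estimation
# of a slowly varying concentration from sparse, noisy samples.
# Local-level model only; variances are illustrative.
def kalman_filter(obs, q=0.5, r=4.0, x0=0.0, p0=100.0):
    """x_t = x_{t-1} + w_t, y_t = x_t + v_t.
    `None` entries mark days with no sample (prediction only)."""
    xs, ps = [], []
    x, p = x0, p0
    for y in obs:
        p += q                          # time update
        if y is not None:               # measurement update
            k = p / (p + r)
            x += k * (y - x)
            p *= (1 - k)
        xs.append(x); ps.append(p)
    return xs, ps

def rts_smoother(xs, ps, q=0.5):
    """Backward Rauch-Tung-Striebel pass over the filtered estimates."""
    xs_s = xs[:]
    for t in range(len(xs) - 2, -1, -1):
        g = ps[t] / (ps[t] + q)         # P_t|t / P_{t+1|t}
        xs_s[t] = xs[t] + g * (xs_s[t + 1] - xs[t])
    return xs_s

obs = [10.0, None, None, 14.0, None, 13.0]   # sparse sampling interval
xf, pf = kalman_filter(obs)
xsm = rts_smoother(xf, pf)
# The smoother revises gap-day estimates using later observations,
# which is why its standard errors beat the on-line filter's.
```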
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing surfactant enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to such studies. However, previous studies are generally based on a stand-alone surrogate model and rarely attempt to improve the surrogate's approximation accuracy to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural networks (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
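The ensemble idea can be sketched with performance-based weights; simple inverse-error weighting stands in here for the paper's set pair analysis weights, and the three "surrogates" are toy functions, not trained models.

```python
# Sketch: combining surrogate models into an ensemble with
# performance-based weights. Inverse-error weighting is an illustrative
# stand-in for the paper's set pair analysis (SPA) weights.
def ensemble_weights(errors):
    """Lower validation error -> higher weight; weights sum to 1."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [w / s for w in inv]

def ensemble_predict(x, surrogates, weights):
    return sum(w * f(x) for f, w in zip(surrogates, weights))

# Three toy stand-ins for surrogates of an expensive simulator f(x) = x^2:
rbf_ann = lambda x: x**2 + 0.30       # biased high
svr     = lambda x: x**2 - 0.20       # biased low
kriging = lambda x: x**2 + 0.05       # most accurate of the three
errors  = [0.30, 0.20, 0.05]          # assumed validation RMSE of each

w = ensemble_weights(errors)
pred = ensemble_predict(2.0, [rbf_ann, svr, kriging], w)
# Kriging dominates the ensemble, keeping the prediction near 4.0.
```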
Continuous-time adaptive critics.
Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony
2007-05-01
A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
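The "table of optimal points interpolated into an inverse function" idea can be sketched on a toy system with a closed-form optimum; linear interpolation stands in for the paper's spline fit, and the cost/output functions are illustrative assumptions.

```python
# Sketch: build an inverse function (desired output -> locally optimal
# input) from a set of associated optimal points. Toy system:
# cost u1^2 + u2^2, output u1 + 2*u2; linear interpolation stands in
# for the paper's spline.
from bisect import bisect_left

def optimal_input(y):
    """Minimize u1^2 + u2^2 subject to u1 + 2*u2 = y (Lagrange solution)."""
    return (y / 5.0, 2.0 * y / 5.0)

# Table of optimal points, as a population of agents walking along the
# output gradient would produce:
ys = [0.0, 1.0, 2.0, 3.0, 4.0]
us = [optimal_input(y) for y in ys]

def inverse(y_desired):
    """Interpolate an (approximately) optimal input for any set point."""
    i = min(max(bisect_left(ys, y_desired), 1), len(ys) - 1)
    t = (y_desired - ys[i - 1]) / (ys[i] - ys[i - 1])
    return tuple(a + t * (b - a) for a, b in zip(us[i - 1], us[i]))

u = inverse(2.5)               # operator adjusts a single set point
assert abs(u[0] + 2 * u[1] - 2.5) < 1e-9   # output constraint met
```

Because the toy optimum is linear in y, interpolation here is exact; in general, denser tables or splines control the interpolation error.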
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
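A minimal PSO of the kind described can be sketched for a single waypoint that must approach a goal while staying out of a circular obstacle; the cost function, swarm constants, and geometry are illustrative, far simpler than a full multi-waypoint surface-navigation planner.

```python
# Sketch: minimal particle swarm optimization for one waypoint with an
# obstacle-penalty cost. All constants and geometry are illustrative.
import math, random

random.seed(7)
GOAL, OBSTACLE, RADIUS = (10.0, 10.0), (5.0, 5.0), 2.0

def cost(p):
    """Distance to goal plus a stiff penalty for entering the obstacle."""
    d_obs = math.dist(p, OBSTACLE)
    penalty = 1e3 * (RADIUS - d_obs) if d_obs < RADIUS else 0.0
    return math.dist(p, GOAL) + penalty

n, w, c1, c2 = 20, 0.7, 1.5, 1.5       # swarm size, inertia, pulls
pos = [(random.uniform(0, 12), random.uniform(0, 12)) for _ in range(n)]
vel = [(0.0, 0.0)] * n
pbest = pos[:]                          # personal bests
gbest = min(pos, key=cost)              # global best
for _ in range(60):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = tuple(w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                       for v, x, pb, gb in zip(vel[i], pos[i], pbest[i], gbest))
        pos[i] = tuple(x + v for x, v in zip(pos[i], vel[i]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)
# gbest converges toward the goal while staying outside the obstacle.
```

The penalty formulation is what lets the swarm "deal with" unavoidable obstacles: infeasible points stay comparable rather than being discarded outright.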
Protein Folding Free Energy Landscape along the Committor - the Optimal Folding Coordinate.
Krivov, Sergei V
2018-06-06
Recent advances in simulation and experiment have led to dramatic increases in the quantity and complexity of produced data, which makes the development of automated analysis tools very important. A powerful approach to analyze dynamics contained in such data sets is to describe/approximate it by diffusion on a free energy landscape - free energy as a function of reaction coordinates (RC). For the description to be quantitatively accurate, RCs should be chosen in an optimal way. Recent theoretical results show that such an optimal RC exists; however, determining it for practical systems is a very difficult unsolved problem. Here we describe a solution to this problem. We describe an adaptive nonparametric approach to accurately determine the optimal RC (the committor) for an equilibrium trajectory of a realistic system. In contrast to alternative approaches, which require a functional form with many parameters to approximate an RC and thus extensive expertise with the system, the suggested approach is nonparametric and can approximate any RC with high accuracy without system specific information. To avoid overfitting for a realistically sampled system, the approach performs RC optimization in an adaptive manner by focusing optimization on less optimized spatiotemporal regions of the RC. The power of the approach is illustrated on a long equilibrium atomistic folding simulation of HP35 protein. We have determined the optimal folding RC - the committor, which was confirmed by passing a stringent committor validation test. It allowed us to determine a first quantitatively accurate protein folding free energy landscape. We have confirmed the recent theoretical results that diffusion on such a free energy profile can be used to compute exactly the equilibrium flux, the mean first passage times, and the mean transition path times between any two points on the profile. 
We have shown that the mean squared displacement along the optimal RC grows linearly with time, as for simple diffusion. The free energy profile allowed us to obtain a direct rigorous estimate of the pre-exponential factor for the folding dynamics.
Asadi, Mohammad
2018-03-01
A rapid, simple, and green vortex-assisted emulsification microextraction method based on solidification of a floating organic drop was developed for the extraction and determination of ochratoxin A (OTA) with high-performance liquid chromatography. Some factors influencing the extraction efficiency of OTA, such as the type and volume of extraction solvent, sample pH, salt concentration, vortex time, and sample volume, were optimized. Under optimized conditions, the calibration curve exhibited linearity in the range of 50.0-500 ng L(-1) with a coefficient of determination higher than 0.999. The limit of detection was 15.0 ng L(-1). The inter- and intra-assay relative standard deviations were in a range of 4.7-8.7%. The accuracy of the developed method was investigated through recovery experiments, and it was successfully used for the quantification of OTA in 40 samples of fruit juice.
Drop-on-Demand Sample Delivery for Studying Biocatalysts in Action at XFELs
Fuller, Franklin D.; Gul, Sheraz; Chatterjee, Ruchira; Burgie, Ernest S.; Young, Iris D.; Lebrette, Hugo; Srinivas, Vivek; Brewster, Aaron S.; Michels-Clark, Tara; Clinger, Jonathan A.; Andi, Babak; Ibrahim, Mohamed; Pastor, Ernest; de Lichtenberg, Casper; Hussein, Rana; Pollock, Christopher J.; Zhang, Miao; Stan, Claudiu A.; Kroll, Thomas; Fransson, Thomas; Weninger, Clemens; Kubin, Markus; Aller, Pierre; Lassalle, Louise; Bräuer, Philipp; Miller, Mitchell D.; Amin, Muhamed; Koroidov, Sergey; Roessler, Christian G.; Allaire, Marc; Sierra, Raymond G.; Docker, Peter T.; Glownia, James M.; Nelson, Silke; Koglin, Jason E.; Zhu, Diling; Chollet, Matthieu; Song, Sanghoon; Lemke, Henrik; Liang, Mengning; Sokaras, Dimosthenis; Alonso-Mori, Roberto; Zouni, Athina; Messinger, Johannes; Bergmann, Uwe; Boal, Amie K.; Bollinger, J. Martin; Krebs, Carsten; Högbom, Martin; Phillips, George N.; Vierstra, Richard D.; Sauter, Nicholas K.; Orville, Allen M.; Kern, Jan; Yachandra, Vittal K.; Yano, Junko
2017-01-01
X-ray crystallography at X-ray free-electron laser (XFEL) sources is a powerful method for studying macromolecules at biologically relevant temperatures. Moreover, when combined with complementary techniques like X-ray emission spectroscopy (XES), both global structures and chemical properties of metalloenzymes can be obtained concurrently, providing new insights into the interplay between the protein structure/dynamics and chemistry at an active site. Implementing such a multimodal approach can be compromised by conflicting requirements to optimize each individual method. In particular, the method used for sample delivery greatly impacts the data quality. We present here a new, robust way of delivering controlled sample amounts on demand using acoustic droplet ejection coupled with a conveyor belt drive that is optimized for crystallography and spectroscopy measurements of photochemical and chemical reactions over a wide range of time scales. Studies with photosystem II, the phytochrome photoreceptor, and ribonucleotide reductase R2 illustrate the power and versatility of this method. PMID:28250468
Drop-on-demand sample delivery for studying biocatalysts in action at X-ray free-electron lasers.
Fuller, Franklin D; Gul, Sheraz; Chatterjee, Ruchira; Burgie, E Sethe; Young, Iris D; Lebrette, Hugo; Srinivas, Vivek; Brewster, Aaron S; Michels-Clark, Tara; Clinger, Jonathan A; Andi, Babak; Ibrahim, Mohamed; Pastor, Ernest; de Lichtenberg, Casper; Hussein, Rana; Pollock, Christopher J; Zhang, Miao; Stan, Claudiu A; Kroll, Thomas; Fransson, Thomas; Weninger, Clemens; Kubin, Markus; Aller, Pierre; Lassalle, Louise; Bräuer, Philipp; Miller, Mitchell D; Amin, Muhamed; Koroidov, Sergey; Roessler, Christian G; Allaire, Marc; Sierra, Raymond G; Docker, Peter T; Glownia, James M; Nelson, Silke; Koglin, Jason E; Zhu, Diling; Chollet, Matthieu; Song, Sanghoon; Lemke, Henrik; Liang, Mengning; Sokaras, Dimosthenis; Alonso-Mori, Roberto; Zouni, Athina; Messinger, Johannes; Bergmann, Uwe; Boal, Amie K; Bollinger, J Martin; Krebs, Carsten; Högbom, Martin; Phillips, George N; Vierstra, Richard D; Sauter, Nicholas K; Orville, Allen M; Kern, Jan; Yachandra, Vittal K; Yano, Junko
2017-04-01
X-ray crystallography at X-ray free-electron laser sources is a powerful method for studying macromolecules at biologically relevant temperatures. Moreover, when combined with complementary techniques like X-ray emission spectroscopy, both global structures and chemical properties of metalloenzymes can be obtained concurrently, providing insights into the interplay between the protein structure and dynamics and the chemistry at an active site. The implementation of such a multimodal approach can be compromised by conflicting requirements to optimize each individual method. In particular, the method used for sample delivery greatly affects the data quality. We present here a robust way of delivering controlled sample amounts on demand using acoustic droplet ejection coupled with a conveyor belt drive that is optimized for crystallography and spectroscopy measurements of photochemical and chemical reactions over a wide range of time scales. Studies with photosystem II, the phytochrome photoreceptor, and ribonucleotide reductase R2 illustrate the power and versatility of this method.
Taghvimi, Arezou; Hamishehkar, Hamed; Ebrahimi, Mahmoud
2016-06-01
The simultaneous determination of amphetamine (AM) and methadone was carried out using magnetic graphene oxide nanoparticles, a magnetic solid-phase extraction adsorbent, as a new sample treatment technique. The main factors influencing the extraction efficiency (sample volume, amount of adsorbent, type and amount of extraction organic solvent, extraction and desorption times, pH, ionic strength of the extraction medium, and agitation rate) were investigated and optimized. Under the optimized conditions, good linearity was observed in the range of 100-1500 ng/mL for amphetamine and 100-1000 ng/mL for methadone. The method was evaluated for the determination of AM and methadone in positive urine samples; satisfactory results were obtained, and therefore magnetic solid-phase extraction can be applied as a novel method for the determination of drugs of abuse in forensic laboratories. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Drop-on-demand sample delivery for studying biocatalysts in action at X-ray free-electron lasers
Fuller, Franklin D.; Gul, Sheraz; Chatterjee, Ruchira; ...
2017-02-27
X-ray crystallography at X-ray free-electron laser (XFEL) sources is a powerful method for studying macromolecules at biologically relevant temperatures. Moreover, when combined with complementary techniques like X-ray emission spectroscopy (XES), both global structures and chemical properties of metalloenzymes can be obtained concurrently, providing new insights into the interplay between the protein structure/dynamics and chemistry at an active site. However, implementing such a multimodal approach can be compromised by conflicting requirements to optimize each individual method. In particular, the method used for sample delivery greatly impacts the data quality. We present here a new, robust way of delivering controlled sample amounts on demand using acoustic droplet ejection coupled with a conveyor belt drive that is optimized for crystallography and spectroscopy measurements of photochemical and chemical reactions over a wide range of time scales. Studies with photosystem II, the phytochrome photoreceptor, and ribonucleotide reductase R2 illustrate the power and versatility of this method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
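The EnKF analysis step underlying this kind of parameter estimation can be sketched for a scalar toy problem. The forward model, ensemble size, and noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate a scalar parameter theta from a noisy observation
# y = 2*theta + noise.
n_ens = 200
theta = rng.normal(1.0, 0.5, n_ens)          # prior parameter ensemble

def forward(t):
    """Forward model mapping the parameter to the predicted observation."""
    return 2.0 * t

obs = 2.0 * 1.5                               # synthetic observation for "true" theta = 1.5
obs_err = 0.1                                 # observation-error standard deviation

# EnKF analysis step: Kalman gain from ensemble (cross-)covariances
pred = forward(theta)
cov_tp = np.cov(theta, pred)[0, 1]            # Cov(theta, H(theta))
var_p = np.var(pred, ddof=1)
gain = cov_tp / (var_p + obs_err ** 2)

# Update each member against a perturbed observation
perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
theta_post = theta + gain * (perturbed_obs - pred)
```

The posterior ensemble contracts around the true parameter; the paper's contribution sits on top of this update, choosing which observations to collect.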
NASA Astrophysics Data System (ADS)
Shupp, Aaron M.; Rodier, Dan; Rowley, Steven
2007-03-01
Monitoring and controlling Airborne Molecular Contamination (AMC) has become essential in deep ultraviolet (DUV) photolithography, both for optimizing yields and for protecting tool optics. A variety of technologies have been employed for both real-time and grab-sample monitoring. Real-time monitoring has the advantage of quickly identifying "spikes" and upset conditions, while grab sampling over 2-24 hours or more allows for extremely low detection limits by concentrating the mass of the target contaminant over a period of time. Employing a combination of both monitoring techniques affords the highest degree of control, the lowest detection limits, and the most detailed data possible in terms of speciation. As happens with many technologies, there can be concern regarding the accuracy of, and agreement between, real-time and grab-sample methods. This study uses side-by-side comparisons of two different real-time monitors operating in parallel with both liquid impingers and dry sorbent tubes to measure NIST-traceable gas standards as well as real-world samples. By measuring in parallel, a truly valid comparison is made between methods while verifying the results against a certified standard. The outcome of this investigation is that the dry sorbent tube grab-sample technique produced results that agreed, in terms of accuracy, with NIST-traceable standards as well as with the two real-time techniques, Ion Mobility Spectrometry (IMS) and Pulsed Fluorescence Detection (PFD), while the traditional liquid impinger technique showed discrepancies.
An Optimized Method for the Measurement of Acetaldehyde by High-Performance Liquid Chromatography
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2011-01-01
Background Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase, and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). Methods We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time and temperature. Extraction methods for the stable acetaldehyde-hydrazone derivative (AcH-DNP) and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison to an AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Results Derivatization of acetaldehyde was performed at pH 4.0 with an 80-fold molar excess of DNPH. The reaction was completed in 40 min at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-min chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media, and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small-volume (70 μl) plasma sampling. Conclusions An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP.
The method has an extended linear range, is reproducible and applicable to small volume sampling of culture media and biological fluids. PMID:21895715
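The standard-addition calibration mentioned above spikes known amounts into the sample, fits signal versus added concentration, and reads the unknown concentration from the x-intercept of the fitted line. A minimal sketch with invented signal values:

```python
import numpy as np

# Standard addition: spike known acetaldehyde amounts into aliquots of the
# sample, fit signal vs. added concentration, and read the unknown off the
# negative x-intercept. The spike levels and signals are invented, not the
# study's data.
added = np.array([0.0, 10.0, 20.0, 40.0])    # uM acetaldehyde added
signal = np.array([5.1, 10.0, 15.2, 25.1])   # detector response (a.u.)

slope, intercept = np.polyfit(added, signal, 1)
unknown_conc = intercept / slope             # uM in the analyzed sample
```

Standard addition corrects for matrix effects because the calibration is performed in the sample matrix itself.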
An optimized method for the measurement of acetaldehyde by high-performance liquid chromatography.
Guan, Xiangying; Rubin, Emanuel; Anni, Helen
2012-03-01
Acetaldehyde is produced during ethanol metabolism predominantly in the liver by alcohol dehydrogenase and rapidly eliminated by oxidation to acetate via aldehyde dehydrogenase. Assessment of circulating acetaldehyde levels in biological matrices is performed by headspace gas chromatography and reverse phase high-performance liquid chromatography (RP-HPLC). We have developed an optimized method for the measurement of acetaldehyde by RP-HPLC in hepatoma cell culture medium, blood, and plasma. After sample deproteinization, acetaldehyde was derivatized with 2,4-dinitrophenylhydrazine (DNPH). The reaction was optimized for pH, amount of derivatization reagent, time, and temperature. Extraction methods of the acetaldehyde-hydrazone (AcH-DNP) stable derivative and product stability studies were carried out. Acetaldehyde was identified by its retention time in comparison with AcH-DNP standard, using a new chromatography gradient program, and quantitated based on external reference standards and standard addition calibration curves in the presence and absence of ethanol. Derivatization of acetaldehyde was performed at pH 4.0 with an 80-fold molar excess of DNPH. The reaction was completed in 40 minutes at ambient temperature, and the product was stable for 2 days. A clear separation of AcH-DNP from DNPH was obtained with a new 11-minute chromatography program. Acetaldehyde detection was linear up to 80 μM. The recovery of acetaldehyde was >88% in culture media and >78% in plasma. We quantitatively determined the ethanol-derived acetaldehyde in hepatoma cells, rat blood and plasma with a detection limit around 3 μM. The accuracy of the method was <9% for intraday and <15% for interday measurements, in small volume (70 μl) plasma sampling. An optimized method for the quantitative determination of acetaldehyde in biological systems was developed using derivatization with DNPH, followed by a short RP-HPLC separation of AcH-DNP. 
The method has an extended linear range, is reproducible and applicable to small-volume sampling of culture media and biological fluids. Copyright © 2011 by the Research Society on Alcoholism.
Shao, Yuyu; Wang, Zhaoxia; Bao, Qiuhua; Zhang, Heping
2016-12-01
In this study, a combination of propidium monoazide (PMA) and quantitative real-time PCR (qPCR) was used to develop a method to determine the viability of cells of Lactobacillus delbrueckii ssp. bulgaricus ND02 (L. bulgaricus) that may have entered into a viable but nonculturable state. This can happen due to its susceptibility to cold shock during lyophilization and storage. Propidium monoazide concentration, PMA incubation time, and light exposure time were optimized to fully exploit the PMA-qPCR approach to accurately assess the total number of living L. bulgaricus ND02. Although PMA has little influence on living cells, when concentrations of PMA were higher than 30 μg/mL the number of PCR-positive living bacteria decreased from 10⁶ to 10⁵ cfu/mL in comparison with qPCR enumeration. Mixtures of living and dead cells were used as method verification samples for enumeration by PMA-qPCR, demonstrating that this method was feasible and effective for distinguishing living cells of L. bulgaricus when mixed with a known number of dead cells. We suggest that several conditions need to be studied further before PMA-qPCR methods can be accurately used to distinguish living from dead cells for enumeration under more realistic sampling situations. However, this research provides a rapid way to enumerate living cells of L. bulgaricus and could be used to optimize selection of cryoprotectants in the lyophilization process and develop technologies for high cell density cultivation and optimal freeze-drying processes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
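Enumeration by qPCR rests on inverting a standard curve relating the cycle threshold (Ct) to log10 copy number. A small sketch of that inversion, with a hypothetical slope and intercept rather than the study's calibration:

```python
# qPCR standard curve: Ct is approximately linear in log10(copy number), with
# slope near -3.32 at 100% amplification efficiency. The slope, intercept, and
# sample Ct below are hypothetical, not the study's calibration.
slope = -3.32
intercept = 38.0                  # hypothetical Ct of a single copy

def copies_from_ct(ct):
    """Invert the standard curve: log10(N) = (ct - intercept) / slope."""
    return 10 ** ((ct - intercept) / slope)

viable = copies_from_ct(18.1)     # a low Ct maps to a high copy count
```

In the PMA-qPCR scheme, DNA from membrane-compromised (dead) cells is blocked from amplification, so the inverted count approximates viable cells only.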
Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh
2016-01-01
The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.
Analysis of the Enameled AISI 316LVM Stainless Steel
NASA Astrophysics Data System (ADS)
Bukovec, Mitja; Xhanari, Klodian; Lešer, Tadej; Petovar, Barbara; Finšgar, Matjaž
2018-03-01
In this work, four different enamels were coated on AISI 316LVM stainless steel, and the corrosion resistance of these samples was tested in 5 wt.% NaCl solution at room temperature. The preparation procedure for the enamels was optimized in terms of firing temperature, time, and composition. First, the thermal expansion was measured using dilatometry, followed by electrochemical analysis using chronopotentiometry, electrochemical impedance spectroscopy, and cyclic polarization. The topography of the most resistant sample was obtained by 3D profilometry. All samples coated with enamel showed significantly higher corrosion and dilatation resistance compared with the uncoated stainless steel material.
Martínez-Ceron, María C; Giudicessi, Silvana L; Marani, Mariela M; Albericio, Fernando; Cascone, Osvaldo; Erra-Balsells, Rosa; Camperi, Silvia A
2010-05-15
Optimization of bead analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) after the screening of one-bead-one-peptide combinatorial libraries was achieved, involving fine-tuning of the whole process. Guanidine was replaced by acetonitrile (MeCN)/acetic acid (AcOH)/water (H(2)O), improving matrix crystallization. Peptide-bead cleavage with NH(4)OH was cheaper and safer than, yet as efficient as, NH(3)/tetrahydrofuran (THF). Peptide elution in microtubes, instead of placing the beads on the sample plate, yielded more sample aliquots. Sample preparation by successive deposition of dry layers performed better than the dried-droplet method. Among the matrices analyzed, alpha-cyano-4-hydroxycinnamic acid gave the best peptide ion yield. Cluster formation was minimized by the use of matrix additives. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Khajeh, Mostafa; Pedersen-Bjergaard, Stig; Barkhordar, Afsaneh; Bohlooli, Mousa
2015-02-01
In this study, wheat stem was used for electromembrane extraction (EME) for the first time. The EME technique involved the use of a wheat stem whose channel was filled with 3 M HCl, immersed in 10 mL of an aqueous sample solution. Thorium migrated from aqueous samples, through a thin layer of 1-octanol and 5% v/v di-(2-ethylhexyl) phosphate (DEHP) immobilized in the pores of the porous stem, and into an acceptor phase solution present inside the lumen of the stem. The pH of the donor and acceptor phases, extraction time, voltage, and stirring speed were optimized. Under the optimum conditions, an enrichment factor of 50 and a limit of detection of 0.29 ng mL(-1) were obtained for thorium. The developed procedure was then applied to the extraction and determination of thorium in water samples and in reference material.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
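The time-spreading idea in the claim above, sampling a fission-chain count distribution and spreading each chain's counts in time, can be illustrated with a toy simulation. The chain distribution, source rate, and die-away time below are all invented for illustration, not taken from the patent:

```python
import random

random.seed(1)

# Hypothetical fission-chain count distribution: P(n detected counts per chain).
chain_dist = {1: 0.5, 2: 0.3, 3: 0.15, 4: 0.05}
decay_time = 50e-6                          # assumed mean die-away time (s)

def sample_chain():
    """Draw a per-chain count multiplicity from chain_dist."""
    r, acc = random.random(), 0.0
    for n, p in chain_dist.items():
        acc += p
        if r < acc:
            return n
    return max(chain_dist)

# Spread each chain's counts in time: chains start as a Poisson process
# (assumed 1 kHz rate), and each count arrives after an exponential delay.
events = []
t = 0.0
for _ in range(1000):
    t += random.expovariate(1000.0)         # next chain start time
    for _ in range(sample_chain()):
        events.append(t + random.expovariate(1.0 / decay_time))
events.sort()
```

The sorted `events` list is the kind of continuous time-evolving event-count sequence the assay model is compared against.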
Nie, Jianhui; Huang, Weijin; Wu, Xueling; Wang, Youchun
2014-09-01
The pseudovirion-based neutralization assay is accepted as the gold standard to evaluate the functional humoral immune response against HPV. The goal of this study was to develop and optimize a human papillomavirus (HPV) neutralization assay using HPV pseudovirions with Gaussia luciferase (Gluc) as the reporter gene. For this purpose, high-titer Gluc pseudovirions were generated by cotransfecting 293TT cells with HPV structural genes and Gluc-expressing plasmids. Six types of neutralizing monoclonal antibodies, vaccine-immunized serum samples and the WHO international antibody standard were used to validate the newly developed assay. The optimal conditions of the assay were identified for cell count (30,000/well for a 96-well plate), pseudovirion inoculation size (100 times RLU above background) and incubation time (72 hr). The sensitivity of the Gluc assay was comparable to the secreted alkaline phosphatase (SEAP) assay and higher than the green fluorescent protein (GFP) assay. The non-specific background for different types of sample was significantly different (rabbit sera > human sera > mouse sera, P < 0.01). The non-specific neutralization effects were not attributed to IgG antibody. The cutoff value for this assay was determined as 50% inhibition at a dilution of 1:40. Without requirements for sample dilution and different incubation times at different temperatures before processing, the detection time was shortened from more than 90 min to less than 5 min for a 96-well plate compared with the SEAP-based assay. With the advantages of short detection time and an easy-to-use procedure, the newly developed assay is more suitable for large sero-epidemiological studies or clinical trials and more amenable to automation. © 2014 Wiley Periodicals, Inc.
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
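For the simplest two-level case, the optimal allocation has a closed form. A sketch of the standard textbook result, with illustrative costs and intraclass correlation rather than values from the article:

```python
import math

# Two-level optimal allocation (cluster-randomized design): choose the number
# of units per cluster n that minimizes the variance of the treatment-effect
# estimate for a fixed budget. With intraclass correlation rho, cost C per
# cluster and cost c per unit, the textbook optimum is
#   n* = sqrt((C / c) * (1 - rho) / rho).
# The costs and rho below are illustrative.
def optimal_cluster_size(cost_cluster, cost_unit, rho):
    return math.sqrt((cost_cluster / cost_unit) * (1.0 - rho) / rho)

n_star = optimal_cluster_size(cost_cluster=200.0, cost_unit=10.0, rho=0.05)
```

The article extends this logic to three- and four-level designs, where analogous ratios of costs and variance components determine the allocation at each level.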
Mecozzi, M; Amici, M; Romanelli, G; Pietrantonio, E; Deluca, A
2002-07-19
This paper reports an analytical procedure based on ultrasound to extract lipids from marine mucilage samples. The experimental conditions of the ultrasound procedure (solvent and time) were identified by an FT-IR study performed on different standard samples of lipids and a standard humic sample, before and after the sonication treatment. This study showed that diethyl ether was a more suitable solvent than methanol for the ultrasonic extraction of lipids from environmental samples because it minimized possible oxidative modifications of lipids due to acoustic cavitation phenomena. The optimized conditions were applied to the extraction of the total lipid content of marine mucilage samples, and TLC-flame ionization detection analysis was used to identify the relevant lipid sub-fractions present in the samples.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest-descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
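For a single layer and direction, the exponential dose-thickness relation assumed above can be inverted in closed form; the multi-layer, multi-direction case is what DOPEX solves iteratively. A sketch with invented attenuation and density values:

```python
import math

# Single-layer, single-direction inversion of the exponential dose-thickness
# model: if D(t) = D0 * exp(-mu * t), the thinnest shield meeting a dose
# limit D_lim is t_min = ln(D0 / D_lim) / mu. The attenuation coefficient,
# doses, and density below are invented for illustration.
def min_thickness(d0, d_lim, mu):
    return math.log(d0 / d_lim) / mu

t = min_thickness(d0=1000.0, d_lim=1.0, mu=0.5)   # thickness in cm
weight_per_area = 7.8 * t                          # g/cm^2 for an iron-like density
```

With several layers per direction, the constraint couples the thicknesses, and steepest descent on the total weight (as DOPEX does) replaces this one-line inversion.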
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and number of frames, and the image-processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown, and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
González-Andrade, Martin; Benito-Peña, Elena; Mata, Rachel; Moreno-Bondi, Maria C
2012-04-01
This paper describes the development of a novel on-line biosensor, based on a fluorescently labeled human calmodulin (CaM), hCaM M124C-mBBr, immobilized on controlled-pore glass (CPG), for the analysis of trifluoperazine (TFP), a phenothiazine drug, in human urine samples. The device was automated by packing hCaM M124C-mBBr-CPG in a continuous-flow microcell connected to a monitoring system composed of a bifurcated optical fiber coupled to a spectrofluorometer. Operating parameters of the on-line biosensor (flow rate, sample injection volume, and carrier solution and buffer pH) were studied and optimized. Under the optimal conditions, the biosensor provides detection and quantification limits of 0.24 and 0.52 μg mL(-1), respectively, and a dynamic range from 0.52 to 61.05 μg mL(-1) TFP (n = 5, correlation coefficient 0.998). The response time (t(100)) was shorter than 42 s (recovery time <4.5 min), and the reproducibility and repeatability of the TFP measurements, within the linear response range, were lower than 1.4 and 2.7%, respectively. The device was successfully applied to the analysis of TFP in spiked human urine samples with recoveries ranging between 97 and 101% and with RSDs lower than 5.9%.
Ren, Keyu; Zhang, Wenlin; Cao, Shurui; Wang, Guomin; Zhou, Zhiqin
2018-05-06
Carbon-based Fe₃O₄ nanocomposites (C/Fe₃O₄ NCs) were synthesized by a simple one-step hydrothermal method using waste pomelo peels as the carbon precursors. The characterization results showed that they had good structures and physicochemical properties. The prepared C/Fe₃O₄ NCs could be applied as excellent and recyclable adsorbents for magnetic solid phase extraction (MSPE) of 11 triazole fungicides in fruit samples. In the MSPE procedure, several parameters, including the amount of adsorbent, extraction time, the type and volume of desorption solvent, and desorption time, were optimized in detail. Under the optimized conditions, good linearity (R² > 0.9916), limits of detection (LOD), and limits of quantification (LOQ) were obtained in the ranges of 1-100, 0.12-0.55, and 0.39-1.85 μg/kg, respectively, for the 11 pesticides. Lastly, the proposed MSPE method was successfully applied to analyze triazole fungicides in real apple, pear, orange, peach, and banana samples, with recoveries in the range of 82.1% to 109.9% and relative standard deviations (RSDs) below 8.4%. Therefore, the C/Fe₃O₄ NCs-based MSPE method has great potential for isolating and pre-concentrating trace levels of triazole fungicides in fruits.
Gao, Li; Wei, Yinmao
2016-08-01
A novel mixed-mode adsorbent was prepared by functionalizing silica with tris(2-aminoethyl)amine and 3-phenoxybenzaldehyde as the main mixed-mode scaffold, owing to the plentiful amino groups and benzene rings in these molecules. The adsorption mechanism was probed with acidic, neutral, and basic compounds, and mixed hydrophobic and ion-exchange interactions were found to be responsible for the adsorption of analytes. The suitability of dispersive solid-phase extraction was demonstrated in the determination of chlorophenols in environmental water. Several parameters, including sample pH, desorption solvent, ionic strength, adsorbent dose, and extraction time, were optimized. Under the optimal extraction conditions, the proposed dispersive solid-phase extraction coupled with high-performance liquid chromatography showed a good linearity range and acceptable limits of detection (0.22-0.54 ng/mL) for five chlorophenols. Notably, higher extraction recoveries (88.7-109.7%) for the five chlorophenols were obtained with a smaller adsorbent dose (10 mg) and a shorter extraction time (15 min) compared with reported methods. The proposed method might be potentially applied in the determination of trace chlorophenols in real water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data
Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick
2017-01-01
Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced samples in class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets in which the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effects of the synthetic minority over-sampling technique (SMOTE) for pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former methods do not scale to larger datasets. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets where the first method fails. We also find them more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorter run times compared with the brute-force method. PMID:28753613
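The SMOTE step the paper builds on synthesizes minority-class points by interpolating between a minority sample and one of its nearest minority neighbours. A minimal plain-Python sketch (the swarm-based parameter tuning is not reproduced here, and the data are illustrative):

```python
import random

random.seed(0)

# Plain-Python SMOTE sketch: each synthetic point interpolates between a
# minority sample and one of its k nearest minority neighbours.
def smote(minority, n_new, k=3):
    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        # k nearest neighbours of x among the other minority samples
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nn = random.choice(neighbours)
        gap = random.random()
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_new=8)
```

The two parameters the paper tunes with PSO and the bat algorithm correspond here to the oversampling amount (`n_new`) and the neighbourhood size (`k`).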
Neves Dias, Adriana; Simão, Vanessa; Merib, Josias; Carasek, Eduardo
2015-03-01
A novel method for the determination of organochlorine pesticides in water samples, with extraction using a cork fiber and analysis by gas chromatography with electron capture detection, was developed. The procedure for extracting these pesticides with a DVB/Car/PDMS fiber was also optimized. The optimization of the variables involved in the extraction of organochlorine pesticides using these fibers was carried out by multivariate design. The optimum extraction conditions were a sample temperature of 75 °C, an extraction time of 60 min and a sodium chloride concentration of 10% for the cork fiber, and a sample temperature of 50 °C and an extraction time of 60 min (without salt) for the DVB/Car/PDMS fiber. The quantification limits for the two fibers varied between 1.0 and 10.0 ng L(-1). The linear correlation coefficients were >0.98 for both fibers. The method using the cork fiber provided recoveries between 60.3 and 112.7% with RSD ≤ 25.5% (n=3). The extraction efficiencies of the cork and DVB/Car/PDMS fibers were similar. The results show that cork is a promising alternative coating for SPME. Copyright © 2014 Elsevier B.V. All rights reserved.
Using Genotype Abundance to Improve Phylogenetic Inference
Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A
2018-01-01
Abstract Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671
Fernández, Purificación; Fernández, Ana M; Bermejo, Ana M; Lorenzo, Rosa A; Carro, Antonia M
2013-04-01
The performance of a microwave-assisted extraction and HPLC with photodiode-array detection method for the determination of six analgesic and anti-inflammatory drugs in plasma and urine is described, optimized, and validated. Several parameters affecting the extraction were optimized using experimental designs. A four-factor (temperature, phosphate buffer pH 4.0 volume, extraction solvent volume, and time) hybrid experimental design was used for extraction optimization in plasma, and a three-factor (temperature, extraction solvent volume, and time) Doehlert design was chosen for extraction optimization in urine. The use of desirability functions revealed the optimal extraction conditions: 67°C, 4 mL of phosphate buffer pH 4.0, 12 mL of ethyl acetate and 9 min for plasma, and the same volumes of buffer and ethyl acetate, 115°C and 4 min for urine. Limits of detection ranged from 4 to 45 ng/mL in plasma and from 8 to 85 ng/mL in urine. The reproducibility, evaluated at two concentration levels, was less than 6.5% for both specimens. Recoveries were 89-99% for plasma and 83-99% for urine. The proposed method was successfully applied to plasma and urine samples obtained from analgesic users. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Uysal, Deniz; Karadaş, Cennet; Kara, Derya
2017-05-01
A new, simple, efficient, and environmentally friendly ionic liquid dispersive liquid-liquid microextraction method was developed for the determination of irinotecan, an anticancer drug, in water and urine samples using UV-Vis spectrophotometry. The ionic liquid 1-hexyl-3-methylimidazolium hexafluorophosphate was used as the extraction solvent, and ethanol was used as the disperser solvent. The main parameters affecting the extraction efficiency, including sample pH, volume of the ionic liquid, choice of the dispersive solvent and its volume, concentration of NaCl, and extraction and centrifugation times, were investigated and optimized. The effect of interfering species on the recovery of irinotecan was also examined. Under optimal conditions, the LOD (3σ) was 48.7 μg/L without any preconcentration. Because the urine sample was diluted 10-fold, the LOD for urine would be 487 μg/L. However, this could be improved 16-fold if preconcentration using a 40 mL aliquot of the sample is used. The proposed method was successfully applied to the determination of irinotecan in tap water, river water, and urine samples spiked with 10.20 mg/L for the water samples and 8.32 mg/L for the urine sample. The average recovery values of irinotecan determined were 99.1% for tap water, 109.4% for river water, and 96.1% for urine.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the upload of sensed data is constrained to a maximum frequency. To reduce the upload frequency, most existing WBAN data collection approaches collect data within a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within its upper bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm that computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm that provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach performs better in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reasons for the high performance of the proposed approach. PMID:27589758
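The abstract does not give the paper's exact sampling-probability formula; one plausible reading of "importance defined according to information theory" is that rarer sensed values carry more information and should be kept with higher probability. A hedged sketch of that idea, with the bin count and target rate as illustrative assumptions:

```python
import numpy as np

def sampling_probabilities(values, bins=10, target_rate=0.2):
    """Assign each sensed value a keep-probability inversely
    proportional to how common its histogram bin is (rare values
    are more informative), then rescale so the expected fraction
    of kept samples matches target_rate."""
    counts, edges = np.histogram(values, bins=bins)
    # map each value to its bin index (interior edges only)
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    raw = 1.0 / counts[idx]                # rarer bin -> larger weight
    p = raw * (target_rate * len(values) / raw.sum())
    return np.minimum(p, 1.0)              # probabilities capped at 1
```

A sensor node could then draw a uniform random number per reading and upload only when it falls below that reading's probability, keeping the upload frequency near the target while favouring uncommon values.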
Xing, Han-Zhu; Wang, Xia; Chen, Xiang-Feng; Wang, Ming-Lin; Zhao, Ru-Song
2015-05-01
A method combining accelerated solvent extraction with dispersive liquid-liquid microextraction was developed for the first time as a sample pretreatment for the rapid analysis of phenols (including phenol, m-cresol, 2,4-dichlorophenol, and 2,4,6-trichlorophenol) in soil samples. In the accelerated solvent extraction procedure, water was used as an extraction solvent, and phenols were extracted from soil samples into water. The dispersive liquid-liquid microextraction technique was then performed on the obtained aqueous solution. Important accelerated solvent extraction and dispersive liquid-liquid microextraction parameters were investigated and optimized. Under optimized conditions, the new method provided wide linearity (6.1-3080 ng/g), low limits of detection (0.06-1.83 ng/g), and excellent reproducibility (<10%) for phenols. Four real soil samples were analyzed by the proposed method to assess its applicability. Experimental results showed that the soil samples were free of our target compounds, and average recoveries were in the range of 87.9-110%. These findings indicate that accelerated solvent extraction with dispersive liquid-liquid microextraction as a sample pretreatment procedure coupled with gas chromatography and mass spectrometry is an excellent method for the rapid analysis of trace levels of phenols in environmental soil samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic variance ratios greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
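The paper's optimal sizes generalize a classic single-outcome result for cluster randomized trials. As background only (this is not the paper's bivariate cost-effectiveness derivation), a sketch of that classic formula: for a fixed budget, the variance-minimizing number of persons per cluster depends on the per-cluster and per-person costs and the ICC.

```python
import math

def optimal_cluster_size(cost_cluster, cost_person, icc):
    """Classic single-outcome result (e.g. Raudenbush, 1997):
    persons per cluster minimizing the variance of the treatment
    effect estimator for a fixed budget."""
    return math.sqrt((cost_cluster / cost_person) * (1 - icc) / icc)

def clusters_for_budget(budget, cost_cluster, cost_person, n_per_cluster):
    """Number of clusters per arm affordable once the cluster size is fixed."""
    return budget // (cost_cluster + cost_person * n_per_cluster)
```

For example, with a cluster-level cost of 400, a per-person cost of 25, and an ICC of 0.05, the optimal cluster size is about 17; rounding to 16 persons per cluster, a budget of 10 000 buys 12 clusters.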
Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R
2011-09-01
In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
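The RV-PCR viability call rests on the cycle threshold (Ct) dropping after incubation when live organisms replicate. A minimal sketch of that decision rule; the ΔCt cutoff of 6 cycles and the non-detect coding used here are illustrative assumptions, not the paper's validated values:

```python
def rv_pcr_viable(ct_before, ct_after, delta_ct_cutoff=6.0, max_ct=45.0):
    """Call a sample 'live' when the target amplifies earlier
    (lower Ct) after incubation by at least delta_ct_cutoff cycles.
    Non-detects (None) are coded as max_ct."""
    ct_before = ct_before if ct_before is not None else max_ct
    ct_after = ct_after if ct_after is not None else max_ct
    return (ct_before - ct_after) >= delta_ct_cutoff
```

In the workflow described above, this comparison would be applied per assay (chromosome, pXO1 and pXO2 targets) after the post-incubation PCR run.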
Career Patterns: A Twenty-Year Panel Study
ERIC Educational Resources Information Center
Biemann, Torsten; Zacher, Hannes; Feldman, Daniel C.
2012-01-01
Using 20 years of employment and job mobility data from a representative German sample (N = 1259), we employ optimal matching analysis (OMA) to identify six career patterns that deviate from the traditional career path of long-term, full-time employment in one organization. Then, in further analyses, we examine which socio-demographic predictors…
Design and testing of access-tube TDR soil water sensor
USDA-ARS?s Scientific Manuscript database
We developed the design of a waveguide on the exterior of an access tube for use in time-domain reflectometry (TDR) for in-situ soil water content sensing. In order to optimize the design with respect to sampling volume and losses, we derived the electromagnetic (EM) fields produced by a TDR sensor...
Gao, Huiju; Chu, Xiang; Wang, Yanwen; Zhou, Fei; Zhao, Kai; Mu, Zhimei; Liu, Qingxin
2013-12-01
A laccase-producing strain, Trichoderma harzianum ZF-2, was isolated from decaying samples from Shandong, China, and showed dye decolorization activities. The objective of this study was to optimize its culture conditions using a statistical analysis of laccase production. The interactions between different fermentation parameters for laccase production were characterized using a Plackett-Burman design and the response surface methodology. The different media components were initially optimized using the conventional one-factor-at-a-time method and an orthogonal test design, and a Plackett-Burman experiment was then performed to evaluate their effects on laccase production. Wheat straw powder, soybean meal, and CuSO4 were all found to have a significant influence on laccase production, and the optimal concentrations of these three factors were then sequentially investigated using the response surface methodology with a central composite design. The resulting optimal medium for laccase production was: wheat straw powder 7.63 g/l, soybean meal 23.07 g/l, (NH4)2SO4 1 g/l, CuSO4 0.51 g/l, Tween-20 1 g/l, MgSO4 1 g/l, and KH2PO4 0.6 g/l. With this optimized medium, the laccase yield increased 59.68-fold, to 67.258 U/ml, compared with production in the unoptimized medium. This is the first report on the statistical optimization of laccase production by Trichoderma harzianum ZF-2.
Alves, Claudete; Fernandes, Christian; Dos Santos Neto, Alvaro José; Rodrigues, José Carlos; Costa Queiroz, Maria Eugênia; Lanças, Fernando Mauro
2006-07-01
Solid-phase microextraction (SPME) coupled to liquid chromatography (LC) is used to analyze the tricyclic antidepressant drugs desipramine, imipramine, nortriptyline, and amitriptyline, with clomipramine as internal standard, in plasma samples. Extraction conditions are optimized using a 2³ factorial design plus a central point to evaluate the influence of time, temperature, and matrix pH. A polydimethylsiloxane-divinylbenzene (60-μm film thickness) fiber is selected after assessment of different types of coating. The chromatographic separation is performed on a C18 column (150 x 4.6 mm, 5-μm particles) with ammonium acetate buffer (0.05 mol/L, pH 5.50)-acetonitrile (55:45 v/v) containing 0.1% triethylamine as mobile phase and UV-vis detection at 214 nm. Among the factorial design conditions evaluated, the best results are obtained at pH 11.0, a temperature of 30 °C, and an extraction time of 45 min. The proposed method, using a lab-made SPME-LC interface, allowed the determination of tricyclic antidepressants in plasma at therapeutic concentration levels.
NASA Astrophysics Data System (ADS)
Toropov, A. A.; Shevchenko, E. A.; Shubina, T. V.; Jmerik, V. N.; Nechaev, D. V.; Evropeytsev, E. A.; Kaibyshev, V. Kh.; Pozina, G.; Rouvimov, S.; Ivanov, S. V.
2017-07-01
We present a theoretical optimization of the design of a quantum well (QW) heterostructure based on AlGaN alloys, aimed at achieving the maximum possible internal quantum efficiency of emission in the mid-ultraviolet spectral range below 300 nm at room temperature. A sample with optimized parameters was fabricated by plasma-assisted molecular beam epitaxy using a submonolayer digital alloying technique for QW formation. High-angle annular dark-field scanning transmission electron microscopy confirmed strong compositional disordering of the thus-fabricated QW, which presumably facilitates lateral localization of charge carriers in the QW plane. Stress evolution in the heterostructure was monitored in real time during growth using a multibeam optical stress sensor intended for measurements of substrate curvature. Time-resolved photoluminescence spectroscopy confirmed that radiative recombination in the fabricated sample dominated over the whole temperature range up to 300 K. This leads to record-weak temperature-induced quenching of the QW emission intensity, which at 300 K does not exceed 20% of the low-temperature value.
Li, Dongyue; Jia, Jianbo; Wang, Jianguo
2010-12-15
A bismuth-film modified graphite nanofibers-Nafion glassy carbon electrode (BiF/GNFs-NA/GCE) was constructed for the simultaneous determination of trace Cd(II) and Pb(II). The electrochemical properties and applications of the modified electrode were studied. Operational parameters such as deposition potential, deposition time, and bismuth ion concentration were optimized for the determination of trace metal ions in 0.10 M acetate buffer solution (pH 4.5). Under optimal conditions, based on three times the standard deviation of the baseline, the limits of detection were 0.09 μg L(-1) for Cd(II) and 0.02 μg L(-1) for Pb(II) with a 10 min preconcentration. In addition, the BiF/GNFs-NA/GCE displayed good reproducibility and selectivity, making it suitable for the simultaneous determination of Cd(II) and Pb(II) in real samples such as river water and human blood. Copyright © 2010 Elsevier B.V. All rights reserved.
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures to real-time closed-loop seizure intervention. The criteria chosen for evaluating performance were high statistical robustness (the sensitivity and specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, detecting seizures with a high optimality index value and high sensitivity and specificity.
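The abstract names the ingredients of the optimality index (sensitivity, specificity, detection lag) but not its formula. A purely illustrative composite under assumed weighting, where robustness is discounted by the normalized detection delay:

```python
def optimality_index(sensitivity, specificity, detection_delay, max_delay):
    """Illustrative composite score, NOT the paper's definition:
    statistical robustness (mean of sensitivity and specificity)
    multiplied by a timeliness factor that decays linearly with
    detection delay and floors at zero."""
    robustness = 0.5 * (sensitivity + specificity)
    timeliness = max(0.0, 1.0 - detection_delay / max_delay)
    return robustness * timeliness
```

Any such index lets detectors with different sensitivity/specificity/lag trade-offs be ranked on a single scale, which is the role the paper's index plays.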
NASA Astrophysics Data System (ADS)
Son, Ji-Su; Hyeon Baik, Kwang; Gon Seo, Yong; Song, Hooyoung; Hoon Kim, Ji; Hwang, Sung-Min; Kim, Tae-Geun
2011-07-01
The optimal conditions of p-type activation for nonpolar a-plane (1 1 -2 0) p-type GaN films on r-plane (1 -1 0 2) sapphire substrates with various off-axis orientations have been investigated. Secondary ion mass spectrometry (SIMS) measurements show that Mg doping concentrations of 6.58×10¹⁹ cm⁻³ were maintained in GaN during epitaxial growth. The samples were activated at various temperatures and for various periods of time in air, oxygen (O₂) and nitrogen (N₂) ambients by conventional furnace annealing (CFA) and rapid thermal annealing (RTA). The activation of nonpolar a-plane p-type GaN was successful at annealing times and temperatures similar to those of polar c-plane p-type GaN. However, the optimal activation ambient of nonpolar a-plane p-type GaN was clearly different: a-plane p-type GaN was effectively activated in an air ambient. Photoluminescence shows that the optical properties of Mg-doped a-plane GaN samples are enhanced when activated in an air ambient.
Cao, Wan; Hu, Shuai-Shuai; Ye, Li-Hong; Cao, Jun; Pang, Xiao-Qing; Xu, Jing-Jing
2016-01-01
A simple, rapid, and highly selective trace matrix solid-phase dispersion (MSPD) technique, coupled with ultra-performance liquid chromatography with ultraviolet detection, was proposed for extracting flavonoids from orange fruit peel matrices. Molecular sieve SBA-15 was applied for the first time as a solid support in trace MSPD. Parameters such as the type of dispersant, the mass ratio of sample to dispersant, grinding time, and elution pH were optimized in detail. The optimal extraction conditions involved dispersing a powdered fruit peel sample (25 mg) into 25 mg of SBA-15 and then eluting the target analytes with 500 μL of methanol. Satisfactory linearity (r² > 0.9990) was obtained, and the calculated limits of detection reached 0.02-0.03 μg/mL for the compounds. The results showed that the developed method was successfully applied to determine the content of flavonoids in complex fruit peel matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Engwall, E.; Glimelius, L.; Hynning, E.
2018-05-01
Non-small cell lung cancer (NSCLC) is a tumour type thought to be well-suited for proton radiotherapy. However, the lung region poses many problems related to organ motion and can, for actively scanned beams, induce severe interplay effects. In this study we investigate four mitigating rescanning techniques: (1) volumetric rescanning, (2) layered rescanning, (3) breath-sampled (BS) layered rescanning, and (4) continuous breath-sampled (CBS) layered rescanning. The breath-sampled methods spread the layer rescans over a full breathing cycle, resulting in an improved averaging effect at the expense of longer treatment times. In CBS, we aim to further improve the averaging by delivering as many rescans as possible within one breathing cycle. The interplay effect was evaluated for 4D robustly optimized treatment plans (with and without rescanning) for seven NSCLC patients in the treatment planning system RayStation. The optimization and final dose calculation used a Monte Carlo dose engine to account for the density heterogeneities in the lung region. A realistic treatment delivery time structure obtained from the IBA ScanAlgo simulation tool served as the basis for the interplay evaluation. Both slow (2.0 s) and fast (0.1 s) energy switching times were simulated. For all seven studied patients, rescanning improves the dose conformity to the target. The general trend is that the breath-sampled techniques are superior to layered and volumetric rescanning with respect to both target coverage and variability in dose to OARs. The spacing between rescans in our breath-sampled techniques is set at planning, based on the average breathing cycle length obtained in conjunction with CT acquisition. For moderately varied breathing cycle lengths between planning and delivery (up to 15%), the breath-sampled techniques still mitigate the interplay effect well. This shows the potential for smooth implementation in the clinic without additional motion monitoring equipment.