NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
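To make the estimator concrete, the following minimal Python sketch (not the authors' code; the cubic forward model, noise level, and sample counts are arbitrary assumptions) implements the classical double-loop Monte Carlo estimate of expected information gain, evaluating the inner loop with the log-sum-exp trick to soften the underflow issue noted above. The paper's method would replace the inner-loop prior samples with a Laplace-based importance sampling density.

```python
# Minimal sketch (not the authors' code) of double-loop Monte Carlo (DLMC)
# estimation of expected information gain, EIG = E[log p(y|theta) - log p(y)].
# Forward model, noise level, and sample counts are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                   # observation noise std (assumed)

def g(theta):                                 # nonlinear scalar forward model (assumed)
    return theta ** 3

def log_like(y, theta):
    return -0.5 * ((y - g(theta)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

N_out, N_in = 2000, 500
theta_out = rng.normal(size=N_out)            # outer samples from the N(0,1) prior
y = g(theta_out) + sigma * rng.normal(size=N_out)

eig = 0.0
for yi, ti in zip(y, theta_out):
    theta_in = rng.normal(size=N_in)          # inner-loop prior samples; the paper's method
    log_l = log_like(yi, theta_in)            # replaces these with Laplace-based importance
                                              # samples centred near the posterior mode
    log_evidence = np.logaddexp.reduce(log_l) - np.log(N_in)   # log-sum-exp avoids underflow
    eig += (log_like(yi, ti) - log_evidence) / N_out
print(f"DLMC EIG estimate: {eig:.3f}")
```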
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
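For context, a minimal importance-sampling example of the kind this overview discusses (all numbers illustrative): estimating the rare-event probability P(X > 4) for standard normal X by sampling from a proposal shifted into the rare region and reweighting.

```python
# Tiny importance-sampling example (illustrative numbers): estimate the
# rare-event probability P(X > 4) for X ~ N(0,1) using a shifted proposal
# N(4,1) and likelihood-ratio weights.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(loc=4.0, size=n)               # proposal samples
w = norm.pdf(x) / norm.pdf(x, loc=4.0)        # target density / proposal density
est = np.mean((x > 4.0) * w)
print(f"IS estimate: {est:.3e}   exact: {norm.sf(4.0):.3e}")
```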
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12-15, 2015... Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability. Marwan M. Harajli, Graduate Student, Dept. of Civil and Environ... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for the expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
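An illustrative sketch of the two ingredients above, under a simple binomial approximation rather than the paper's exact derivation: an expected-slippage formula and a greedy allocation of a fixed inspection budget across heterogeneous lots (lot sizes, infestation rates, detection rate, and budget are invented).

```python
# Illustrative sketch only: expected slippage under a simple binomial
# approximation (the paper derives an exact formula), plus a greedy
# allocation of a fixed inspection budget; all numbers are invented.
import heapq

def slippage(N, n, p, d):
    """Approximate expected accepted infested units for a lot of size N
    inspected with n samples: each sampled unit flags with probability p*d,
    and the lot's infested units (N*p) slip through if nothing flags."""
    return N * p * (1.0 - p * d) ** n

lots = [(5000, 0.02), (20000, 0.05), (1000, 0.10)]   # (lot size, infestation rate)
d, budget = 0.9, 300                                 # detection rate, total sample budget
n_alloc = [0] * len(lots)

# Greedy: repeatedly give the next sample to the lot with the largest marginal
# reduction in expected slippage (optimal here because marginal gains shrink with n).
heap = [(slippage(N, 1, p, d) - slippage(N, 0, p, d), i) for i, (N, p) in enumerate(lots)]
heapq.heapify(heap)                                  # min-heap over (negative) marginal gains
for _ in range(budget):
    _, i = heapq.heappop(heap)
    n_alloc[i] += 1
    N, p = lots[i]
    heapq.heappush(heap, (slippage(N, n_alloc[i] + 1, p, d) - slippage(N, n_alloc[i], p, d), i))
print(n_alloc)   # the largest, most-infested lots receive most of the budget
```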
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E
2018-07-01
The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies, including a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
Optimism, Social Support, and Adjustment in African American Women with Breast Cancer
Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.
2013-01-01
Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10 months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591
Moderate deviations-based importance sampling for stochastic recursive equations
Dupuis, Paul; Johnson, Dane
2017-11-17
Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates the optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used the finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with a flat bottom and 10 ml volume is the best structure to achieve the highest signal of the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, and sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective of comparing the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
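The flavor of the optimal selection probabilities can be conveyed by a Neyman-allocation-style sketch (a simplification under assumed inputs, not the paper's full semiparametric result): sample unit i at phase two with probability proportional to the conditional standard deviation of Y given W, standardized by the square root of its measurement cost.

```python
# Neyman-style sketch (assumed inputs, not the paper's general result):
# phase-two selection probabilities proportional to sd(Y|W_i)/sqrt(cost_i).
import numpy as np

def phase2_probs(sd_y_given_w, cost, budget):
    """Scale pi_i ~ sd(Y|W_i)/sqrt(cost_i) so the expected phase-two cost
    sum(pi_i * cost_i) equals the budget; the pi <= 1 constraint is handled
    crudely by clipping rather than exactly."""
    raw = sd_y_given_w / np.sqrt(cost)
    pi = raw * budget / np.sum(raw * cost)
    return np.clip(pi, 0.0, 1.0)

# toy strata: cheap/low-variance through expensive/high-variance groups
sd = np.array([1.0, 1.0, 4.0, 8.0])
cost = np.array([1.0, 4.0, 1.0, 4.0])
print(phase2_probs(sd, cost, budget=6.0))   # high-variance, cheap units get sampled most
```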
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviations (expectation values) of dose show average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
Collecting Quality Infrared Spectra from Microscopic Samples of Suspicious Powders in a Sealed Cell.
Kammrath, Brooke W; Leary, Pauline E; Reffner, John A
2017-03-01
The infrared (IR) microspectroscopical analysis of samples within a sealed cell containing barium fluoride is a critical need when identifying toxic agents or suspicious powders of unidentified composition. The dispersive nature of barium fluoride is well understood, and experimental conditions can be easily adjusted during reflection-absorption measurements to account for differences in focus between the visible and IR regions of the spectrum. In most instances, the ability to collect a viable spectrum is possible when using the sealed cell regardless of whether visible or IR focus is optimized. However, when IR focus is optimized, it is possible to collect useful data from even smaller samples. This is important when minimal sample is available for analysis or when minimizing the risk of sample exposure is important. While the use of barium fluoride introduces dispersion effects that are unavoidable, it is possible to adjust instrument settings when collecting IR spectra in the reflection-absorption mode to compensate for dispersion and minimize impact on the quality of the sample spectrum.
Alshaikh, Nahla; Brunklaus, Andreas; Davis, Tracey; Robb, Stephanie A; Quinlivan, Ros; Munot, Pinki; Sarkozy, Anna; Muntoni, Francesco; Manzur, Adnan Y
2016-10-01
Assessment of the efficacy of vitamin D replenishment and maintenance doses required to attain optimal levels in boys with Duchenne muscular dystrophy (DMD). 25(OH)-vitamin D levels and concurrent vitamin D dosage were collected from retrospective case-note review of boys with DMD at the Dubowitz Neuromuscular Centre. Vitamin D levels were stratified as deficient at <25 nmol/L, insufficient at 25-49 nmol/L, adequate at 50-75 nmol/L and optimal at >75 nmol/L. 617 vitamin D samples were available from 197 boys (range 2-18 years); 69% were from individuals on corticosteroids. Vitamin D-naïve boys (154 samples) showed deficiency in 28%, insufficiency in 42%, adequate levels in 24% and optimal levels in 6%. The vitamin D-supplemented group (463 samples) was tested while on different maintenance/replenishment doses. Three-month replenishment with daily 3000 IU (23 samples) or 6000 IU (37 samples) achieved optimal levels in 52% and 84%, respectively. 182 samples taken on 400 IU revealed deficiency in 19 (10%), insufficiency in 84 (47%), adequate levels in 67 (37%) and optimal levels in 11 (6%). 97 samples taken on 800 IU showed deficiency in 2 (2%), insufficiency in 17 (17%), adequate levels in 56 (58%) and optimal levels in 22 (23%). 81 samples were on 1000 IU and 14 samples on 1500 IU, with optimal levels in 35 (43%) and 9 (64%), respectively. No toxic level was seen (highest level 230 nmol/L). The prevalence of vitamin D deficiency and insufficiency in DMD is high. A 2-month replenishment regimen of 6000 IU and a maintenance regimen of 1000-1500 IU/day were associated with optimal vitamin D levels. These data have important implications for optimising vitamin D dosing in DMD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fong, Erika J.; Huang, Chao; Hamilton, Julie
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
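The resampling logic behind such optimal-sample-size estimates is easy to reproduce; a toy version with synthetic stand-in data (not the study's measurements) estimates how subsample size affects the error of the mean sap-flux estimate.

```python
# Toy version of the Monte Carlo subsampling analysis (synthetic stand-in
# data, not the study's measurements): how does the error of the mean
# sap-flux estimate shrink as more of the 58 trees are sampled?
import numpy as np

rng = np.random.default_rng(1)
Fd = rng.lognormal(mean=0.0, sigma=0.4, size=58)   # stand-in for 58 measured sap-flux densities
true_mean = Fd.mean()

for k in (5, 10, 15, 20, 30):
    sub_means = np.array([rng.choice(Fd, size=k, replace=False).mean() for _ in range(5000)])
    rel_err = np.percentile(np.abs(sub_means - true_mean) / true_mean, 95)
    print(f"n={k:2d}  95th-percentile relative error: {rel_err:.2%}")
```

Past the point where the error curve flattens, extra trees no longer pay off, which is the sense in which the study's sample sizes of 10 and 15 are "optimal" for its plot.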
Bilayer tablets of Paliperidone for Extended release osmotic drug delivery
NASA Astrophysics Data System (ADS)
Chowdary, K. Sunil; Napoleon, A. A.
2017-11-01
The purpose of this study is to develop and optimize the formulation of a paliperidone bilayer tablet core and coating that matches the in vitro performance of the trilayered innovator product, Invega. Core formulations prepared with different ratios of Polyox grades were optimized, as were (i) the sub-coating build-up with hydroxyethyl cellulose (HEC) and (ii) the enteric coating build-up with cellulose acetate (CA). Some important influencing factors, such as different core tablet compositions and different coating solution ingredients involved in the formulation procedure, were investigated. The optimization of formulation and process was conducted by comparing the different in vitro release behaviours of paliperidone. In vitro dissolution studies compared the innovator sample (Invega) with formulations of different release rates, and the formulation whose release pattern remained closest to the innovator over the whole 24 h test was finalized.
Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O
2009-06-01
Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities for a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by using an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples per subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
GMOtrack: generator of cost-effective GMO testing strategies.
Kralj Novak, Petra; Gruden, Kristina; Morisset, Dany; Lavrač, Nada; Štebih, Dejan; Rotter, Ana; Žel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A
2012-07-11
Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.
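To illustrate what such an optimal design computes, here is a toy D-optimality sketch (not PopED; the one-compartment model and parameter values are assumed): pick the five-sample schedule, constrained to include a day-7 (168 h) sample, that maximizes the log-determinant of a Fisher information matrix built from finite-difference sensitivities.

```python
# Toy D-optimality sketch (not PopED): an assumed one-compartment model with
# first-order absorption; sensitivities via central finite differences.
import itertools
import numpy as np

theta = np.array([1.0, 0.1, 2.0])                  # ka (1/h), ke (1/h), V (L) -- assumed

def conc(t, p):
    ka, ke, V = p
    return ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def log_det_fim(times, p, h=1e-5):
    J = np.empty((len(times), len(p)))             # sensitivity matrix dC/dtheta
    for j in range(len(p)):
        dp = np.zeros_like(p); dp[j] = h * p[j]
        J[:, j] = (conc(times, p + dp) - conc(times, p - dp)) / (2 * h * p[j])
    sign, logdet = np.linalg.slogdet(J.T @ J)      # D-criterion on J^T J
    return logdet if sign > 0 else -np.inf

candidates = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0, 48.0, 168.0])  # hours
schedules = (ts for ts in itertools.combinations(candidates, 5) if 168.0 in ts)
best = max(schedules, key=lambda ts: log_det_fim(np.array(ts), theta))
print("D-optimal 5-sample schedule (h), day-7 sample enforced:", best)
```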
Wang, Lei; Zhao, Pengyue; Zhang, Fengzu; Bai, Aijuan; Pan, Canping
2013-01-01
Ambient ionization direct analysis in real time (DART) coupled to single-quadrupole MS (DART-MS) was evaluated for rapid detection of caffeine in commercial samples without chromatographic separation or sample preparation. Four commercial samples were examined: tea, instant coffee, green tea beverage, and soft drink. The response-related parameters were optimized for the DART temperature and MS fragmentor. Under optimal conditions, the molecular ion (M+H)+ was the major ion for identification of caffeine. The results showed that DART-MS is a promising tool for the quick analysis of important marker molecules in commercial samples. Furthermore, this system has demonstrated significant potential for high sample throughput and real-time analysis.
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
Xu, Henglong; Yong, Jiang; Xu, Guangjian
2015-12-30
Sampling frequency is important for obtaining sufficient information in temporal research on microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, while the traditional 4-sampling may explain only <65% of the total variance. With the increase of the sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01), with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.
Newberg, Lee A
2008-08-15
A backtrace through a dynamic programming algorithm's intermediate results, whether in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g., cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
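A hedged sketch of the checkpointing idea, using uniform every-k-rows checkpoints rather than the optimal placement derived in the paper, and longest common subsequence as a stand-in for local alignment:

```python
# Checkpointed DP backtrace sketch: store only every k-th DP row during the
# forward pass, then recompute each block of rows from its checkpoint while
# tracing back. Uniform checkpoints, not the paper's optimal placement.
import numpy as np

def dp_rows(x, y, row0, i0, i1):
    """Recompute LCS DP rows i0..i1 (inclusive), starting from stored row i0."""
    rows = [np.asarray(row0)]
    for i in range(i0 + 1, i1 + 1):
        prev, cur = rows[-1], np.zeros(len(y) + 1, dtype=int)
        for j in range(1, len(y) + 1):
            cur[j] = prev[j - 1] + 1 if x[i - 1] == y[j - 1] else max(prev[j], cur[j - 1])
        rows.append(cur)
    return rows

def checkpointed_backtrace(x, y, k=32):
    # Forward pass: keep only row 0 and every k-th DP row in memory.
    row = np.zeros(len(y) + 1, dtype=int)
    checkpoints = {0: row}
    for i in range(1, len(x) + 1):
        row = dp_rows(x, y, row, i - 1, i)[-1]
        if i % k == 0:
            checkpoints[i] = row
    # Backtrace: recompute each block of rows from its checkpoint on demand.
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        base = ((i - 1) // k) * k
        rows = dp_rows(x, y, checkpoints[base], base, i)
        while i > base and j > 0:
            cur, prev = rows[i - base], rows[i - base - 1]
            if x[i - 1] == y[j - 1]:
                out.append(x[i - 1]); i -= 1; j -= 1
            elif cur[j] == prev[j]:
                i -= 1
            else:
                j -= 1
    return "".join(reversed(out))

print(checkpointed_backtrace("dynamic programming", "checkpointing", k=4))
```

The forward pass keeps O(len(x)/k) rows and the backtrace recomputes at most k rows at a time, so memory trades smoothly against recomputation; choosing where to checkpoint optimally is exactly what the paper addresses.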
Optimization Strategies for Sensor and Actuator Placement
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Kincaid, Rex K.
1999-01-01
This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
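As one concrete instance of the combinatorial methods recommended above, a greedy placement sketch on toy data (candidate locations, modal gains, and the saturating coverage objective are all assumptions, not taken from the paper):

```python
# Greedy sensor placement sketch on toy data: pick sensors one at a time to
# maximize the marginal gain of a saturating "mode coverage" objective.
import numpy as np

rng = np.random.default_rng(2)
n_loc, n_modes, n_sensors = 40, 8, 4
gain = rng.random((n_loc, n_modes))   # toy observability of each mode at each location

chosen, covered = [], np.zeros(n_modes)
for _ in range(n_sensors):
    marginal = np.array([np.minimum(covered + g, 1.0).sum() for g in gain])
    if chosen:
        marginal[chosen] = -np.inf    # never pick the same location twice
    best = int(np.argmax(marginal))
    chosen.append(best)
    covered = np.minimum(covered + gain[best], 1.0)
print("selected sensor locations:", chosen)
```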
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
USDA-ARS?s Scientific Manuscript database
Campylobacter jejuni (C. jejuni) is one of the most common causes of gastroenteritis in the world. Given the potential risks to human, animal and environmental health the development and optimization of methods to quantify this important pathogen in environmental samples is essential. Two of the mos...
Optimal model-based sensorless adaptive optics for epifluorescence microscopy.
Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel
2018-01-01
We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
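The classical Neyman result conveys the flavor of the allocation question (a generic stratified-sampling sketch with made-up numbers; the article derives the IST-specific optima):

```python
# Generic Neyman allocation sketch (made-up numbers; the article derives the
# IST-specific optima): allocate n across strata in proportion to N_h * S_h.
import numpy as np

N_h = np.array([1200, 800, 500])    # stratum sizes (assumed)
S_h = np.array([4.0, 9.0, 6.5])     # stratum standard deviations of the responses (assumed)
n = 300
n_h = np.rint(n * N_h * S_h / np.sum(N_h * S_h)).astype(int)
print(n_h, n_h.sum())               # larger, more variable strata get more of the sample
```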
Dölitzsch, Claudia; Leenarts, Laura E W; Schmeck, Klaus; Fegert, Jörg M; Grisso, Thomas; Schmid, Marc
2017-02-08
There is a growing consensus about the importance of mental health screening of youths in welfare and juvenile justice institutions. The Massachusetts Youth Screening Instrument-second version (MAYSI-2) was specifically designed, normed and validated to assist juvenile justice facilities in the United States of America (USA) in identifying youths with potential emotional or behavioral problems. However, it is not known if the USA norm-based cut-off scores can be used in Switzerland. Therefore, the primary purpose of the current study was to estimate the diagnostic performance and optimal cut-off scores of the MAYSI-2 in a sample of Swiss youths in welfare and juvenile justice institutions. As the sample was drawn from the French-, German- and Italian-speaking parts of Switzerland, the three languages were represented in the total sample of the current study, and consequently we could estimate the diagnostic performance and the optimal cut-off scores of the MAYSI-2 for the language regions separately. The other main purpose of the current study was to identify potential gender differences in the diagnostic performance and optimal cut-off scores. Participants were 297 boys and 149 girls (mean age = 16.2, SD = 2.5) recruited from 64 youth welfare and juvenile justice institutions. The MAYSI-2 was used to screen for mental health or behavioral problems that could require further evaluation. Psychiatric classification was based on the Schedule for Affective Disorders and Schizophrenia for School-Age Children, Present and Lifetime version (K-SADS-PL). The MAYSI-2 scores were submitted to Receiver-Operating Characteristic (ROC) analyses to estimate the diagnostic performance and optimal 'caution' cut-off scores of the MAYSI-2. The ROC analyses revealed that nearly all homotypic mappings of MAYSI-2 scales onto (clusters of) psychiatric disorders showed above-chance accuracy. The optimal 'caution' cut-off scores derived from the ROC curve for predicting (clusters of) psychiatric disorders were, for several MAYSI-2 scales, comparable to the USA norm-based 'caution' cut-off scores. For some MAYSI-2 scales, however, higher optimal 'caution' cut-off scores were found. With adjusted optimal 'caution' cut-off scores, the MAYSI-2 screens potential emotional or behavioral problems well in a sample of Swiss youths in welfare and juvenile justice institutions. However, when choosing the optimal 'caution' cut-off score for the MAYSI-2, both language and gender seem to be of importance. The results of this study point to a compelling need to test the diagnostic performance and optimal 'caution' cut-off scores of the MAYSI-2 more elaborately in larger differentiated language samples in Europe.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach.
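A rough sketch of the information-aware sampling idea under assumed choices (binning, target keep rate, and scaling are mine, not the paper's): rarer sensed values carry more self-information, -log p, and are therefore kept with higher probability.

```python
# Sketch of information-aware sampling: weight each sensed value by the
# self-information of its histogram bin, so rare readings are kept more often.
# Binning scheme, target rate, and scaling are illustrative assumptions.
import numpy as np

def sampling_probs(values, bins=10, target_rate=0.2):
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    info = -np.log(p[idx] + 1e-12)                  # self-information of each value's bin
    keep = info / info.sum() * target_rate * len(values)
    return np.clip(keep, 0.0, 1.0)                  # per-value keep probabilities

vals = np.random.default_rng(3).normal(size=1000)   # stand-in sensor stream
pi = sampling_probs(vals)
print(f"mean keep rate: {pi.mean():.2f}, max (rare values): {pi.max():.2f}")
```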
Cepeda-Vázquez, Mayela; Blumenthal, David; Camel, Valérie; Rega, Barbara
2017-03-01
Furan, possibly carcinogenic to humans, and furfural, a naturally occurring volatile that contributes to aroma, can both be found in thermally treated foods. These process-induced compounds, formed by closely related reaction pathways, play an important role as markers of food safety and quality. A method capable of simultaneously quantifying both molecules is thus highly relevant for developing mitigation strategies while preserving the sensory properties of food. We have developed a unique, reliable, and sensitive headspace trap (HS trap) extraction method coupled to GC-MS for the simultaneous quantification of furan and furfural in a solid processed food (sponge cake). HS Trap extraction was optimized using an optimal design of experiments (O-DOE) approach, considering four instrumental and two sample-preparation variables, as well as a blocking factor identified during preliminary assays. Multicriteria, multiple-response optimization was performed using a desirability function, yielding the following conditions: thermostatting temperature, 65 °C; thermostatting time, 15 min; number of pressurization cycles, 4; dry purge time, 0.9 min; water/sample ratio (dry basis), 16; and total amount (water + sample, dry basis), 10 g. The performance of the optimized method was also assessed: repeatability (RSD ≤ 3.3% for furan and ≤ 2.6% for furfural), intermediate precision (RSD: 4.0% for furan and 4.3% for furfural), linearity (R²: 0.9957 for furan and 0.9996 for furfural), LOD (0.50 ng g⁻¹ for furan and 10.2 ng g⁻¹ for furfural, sample dry basis), and LOQ (0.99 ng g⁻¹ for furan and 41.1 ng g⁻¹ for furfural, sample dry basis). A matrix effect was observed, mainly for furan. Finally, the optimized method was applied to other sponge cakes with different matrix characteristics and analyte levels. Copyright © 2016. Published by Elsevier B.V.
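A minimal sketch of the desirability-function idea used for the multicriteria optimization above, with hypothetical fitted response surfaces standing in for the paper's models:

```python
# Generic sketch of Derringer-type multi-response optimization; the response
# models and bounds here are hypothetical stand-ins, not the paper's fits.
import numpy as np
from scipy.optimize import minimize

def d_max(y, lo, hi, s=1.0):
    """Desirability for a response to be maximized: 0 below lo, 1 above hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

# Hypothetical fitted responses: peak areas as functions of (temp, time)
furan_area    = lambda x: -(x[0] - 65) ** 2 / 50 - (x[1] - 15) ** 2 / 20 + 100
furfural_area = lambda x: -(x[0] - 60) ** 2 / 40 - (x[1] - 18) ** 2 / 30 + 100

def overall_desirability(x):
    d1 = d_max(furan_area(x), 80, 100)
    d2 = d_max(furfural_area(x), 80, 100)
    return (d1 * d2) ** 0.5      # geometric mean of individual desirabilities

res = minimize(lambda x: -overall_desirability(x), x0=[60, 16],
               bounds=[(40, 80), (5, 30)])
print("optimum (temp, time):", res.x, "D =", overall_desirability(res.x))
```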
Optimization of the composition of sand-lime products modified with diabase aggregate
NASA Astrophysics Data System (ADS)
Komisarczyk, K.; Stępień, A.
2017-10-01
The problem of optimizing the composition of building materials is currently of great importance due to increasing competitiveness and technological development in the construction industry. This also applies to sand-lime products. The appropriate arrangement of individual components or their equivalents, linked with the main parameters of the mixture composition (i.e., the lime/sand/water ratio), should lead to the intended result. The introduction of diabase aggregate into sand-lime products has a positive effect on the final products. The paper presents the results of composition optimization with the addition of diabase aggregate. The amount of water was held constant, while the mass of the dry ingredients was varied. The experimental program covered 6 series of silicates made under industrial conditions. Final samples were tested for mechanical and physico-chemical properties, with the analysis extended by mercury intrusion porosimetry, SEM, and XRD. The results show differences depending on the aggregate content: for the sample with 10% diabase aggregate, the compressive strength was higher than for the reference sample, while the modified samples absorbed less water.
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning, but several negative effects reduce extraction accuracy, such as resolution limits, poor correction, and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using the digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, along with an approach to setting the number and proportion of training samples for better training of the SSAE. The optical data and DSM were combined as input to the optimized SSAE; after training with the optimized samples, the resulting network structure extracts buildings with high accuracy and good robustness.
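A minimal sketch of a stacked sparse autoencoder of the kind described, assuming a simple two-layer architecture with an L1 activation penalty; layer sizes, hyperparameters, and the five-feature input are illustrative, not the paper's configuration:

```python
# Sketch: layer-wise pretraining of a stacked sparse autoencoder, then the
# stacked encoders feed a building / non-building classifier head.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hidden)
        self.dec = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def pretrain(ae, x, sparsity_weight=1e-3, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = ae(x)
        loss = nn.functional.mse_loss(recon, x) + sparsity_weight * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

# x: rows are pixels/patches with DSM + optical bands stacked as features
x = torch.rand(1024, 5)                      # hypothetical 5 input features
ae1 = pretrain(SparseAE(5, 32), x)
h1 = torch.sigmoid(ae1.enc(x)).detach()
ae2 = pretrain(SparseAE(32, 16), h1)
# Stack the pretrained encoders and add an output layer for fine-tuning
classifier = nn.Sequential(ae1.enc, nn.Sigmoid(), ae2.enc, nn.Sigmoid(),
                           nn.Linear(16, 2))  # fine-tune with labeled samples
```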
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detectors as the basic building blocks of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into the process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other nonlinear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as the platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
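The abstract does not spell out the update equations; as a hedged illustration of optimizing an imaging-time distribution over virtual detectors, the sketch below runs the classic multiplicative algorithm for D-optimal allocation, assuming rank-one detector sensitivities (all names and dimensions are hypothetical):

```python
# Sketch: find the imaging-time distribution t maximizing
# log det( sum_i t_i * s_i s_i^T ) subject to sum(t) = 1, t >= 0.
import numpy as np

rng = np.random.default_rng(2)
n_det, n_par = 40, 6
S = rng.normal(size=(n_det, n_par))   # hypothetical sensitivity vector per virtual detector

t = np.full(n_det, 1.0 / n_det)       # imaging-time distribution (ITD), sums to 1
for _ in range(200):
    M = (S * t[:, None]).T @ S        # information matrix sum_i t_i s_i s_i^T
    g = np.einsum('ij,jk,ik->i', S, np.linalg.inv(M), S)  # s_i^T M^-1 s_i
    t *= g / n_par                    # multiplicative update; fixed point is D-optimal
print("effective detectors (t > 1e-3):", (t > 1e-3).sum())
```

The fixed point concentrates imaging time on the most informative virtual detectors, mirroring the "relative importance" interpretation above.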
Adedeji, A. J.; Abdu, P. A.; Luka, P. D.; Owoade, A. A.; Joannis, T. M.
2017-01-01
Aim: This study was designed to optimize and apply loop-mediated isothermal amplification (LAMP) as an alternative to conventional polymerase chain reaction (PCR) for the detection of herpesvirus of turkeys (HVT) (FC 126 strain) in vaccinated and non-vaccinated poultry in Nigeria. Materials and Methods: An HVT-positive control (vaccine) was used for optimization of LAMP using six primers that target the HVT070 gene sequence of the virus. These primers can differentiate HVT, a Marek's disease virus (MDV) serotype 3, from MDV serotypes 1 and 2. Samples were collected from clinical cases of Marek's disease (MD) in chickens, processed, and subjected to LAMP and PCR. Results: The LAMP assay for HVT was optimized. HVT was detected in 60% (3/5) and 100% (5/5) of the samples analyzed by PCR and LAMP, respectively. HVT was detected in the feathers, liver, skin, and spleen, with average DNA purity of 3.05-4.52 μg DNA/mg (A260/A280), using LAMP. Conventional PCR detected HVT in two vaccinated and one unvaccinated chicken samples, while LAMP detected HVT in two vaccinated and three unvaccinated corresponding chicken samples. LAMP was also a faster and simpler technique to carry out than PCR. Conclusion: The LAMP assay for the detection of HVT was optimized, and both LAMP and PCR detected HVT in the clinical samples collected. LAMP can be a very good alternative to PCR for the detection of HVT and other viruses. This is the first report of the use of LAMP for the detection of viruses of veterinary importance in Nigeria. LAMP should be developed further as a diagnostic and research tool for the investigation of poultry diseases such as MD in Nigeria. PMID:29263603
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs was investigated by simulation/estimation, computing the bias of the parameters as well as an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full-FIM optimal design was superior to the FO block-diagonal FIM design in both examples.
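As a simplified illustration of why FIM details matter for sampling-point choice, the sketch below compares a rich and a sparse schedule by log det(FIM) for a hypothetical one-compartment model; it uses a fixed-effects FIM with finite-difference sensitivities, not the population FO/FOCE machinery of the paper:

```python
# Sketch: D-criterion comparison of two sampling schedules for an assumed
# one-compartment oral-absorption model (parameters are illustrative).
import numpy as np

def conc(theta, times, dose=100.0):
    ka, ke, v = theta
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * times) - np.exp(-ka * times))

def fim(theta, times, sigma=0.1, h=1e-6):
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):          # finite-difference sensitivities
        dp = np.array(theta, float); dp[j] += h
        J[:, j] = (conc(dp, times) - conc(theta, times)) / h
    return J.T @ J / sigma**2

theta = (1.5, 0.2, 10.0)                 # ka, ke, V
rich   = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])
sparse = np.array([1, 4, 24])
for name, tt in [("rich", rich), ("sparse", sparse)]:
    print(name, "log det FIM =", np.linalg.slogdet(fim(theta, tt))[1])
```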
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, so that determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
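A hedged sketch of this kind of sequential design loop, using a GP surrogate and an acquisition that favors points that are both uncertain and near the failure boundary; the acquisition rule and test function are illustrative choices, not the paper's criterion:

```python
# Sketch: GP-based active selection of sampling points near a limit state.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):                       # cheap stand-in for an expensive model
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]

threshold = 0.0
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (8, 2))            # initial design
y = limit_state(X)

for _ in range(10):                       # add one point per iteration
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-8).fit(X, y)
    cand = rng.uniform(-1, 1, (500, 2))
    mu, sd = gp.predict(cand, return_std=True)
    # High when the mean is near the threshold and the prediction is uncertain
    acq = norm.pdf((mu - threshold) / np.maximum(sd, 1e-9)) * sd
    x_new = cand[np.argmax(acq)][None, :]
    X = np.vstack([X, x_new]); y = np.append(y, limit_state(x_new))
print("final design size:", len(X))
```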
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream-riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance, mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
Statistical considerations in monitoring birds over large areas
Johnson, D.H.
2000-01-01
The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
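One concrete instance of the statistical input mentioned above is a power-based sample-size calculation; the sketch below computes the per-group count needed for a two-sample t-test (effect size and error rates are illustrative):

```python
# Sketch: sample size required to detect a medium standardized effect
# with 80% power at alpha = 0.05 in a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # standardized difference
                                   alpha=0.05, power=0.8)
print(f"counts needed per group: {n_per_group:.0f}")
```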
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional that minimizes infected-hepatocyte levels, the virion population, and drug side-effects. The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by a drop in viral load below detection levels) with combination therapy (72%) compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
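A small sketch of the sampling-based analysis step: Latin hypercube draws of patient parameters followed by a Spearman (monotonic) correlation test against an outcome. Parameter names, ranges, and the outcome model are illustrative stand-ins, not the paper's values:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=3, seed=4)
u = sampler.random(n=1000)
lo = np.array([0.1, 1e-3, 0.5])          # e.g., death rate, infection rate, clearance
hi = np.array([1.0, 1e-1, 5.0])
params = qmc.scale(u, lo, hi)

# Stand-in outcome: time to undetectable viral load, decreasing in death rate
time_to_clear = 50 / params[:, 0] + np.random.default_rng(4).normal(0, 5, 1000)
rho, p = spearmanr(params[:, 0], time_to_clear)
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```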
Optimal structure and parameter learning of Ising models
Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...
2018-03-16
Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. This study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
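A hedged sketch of the interaction-screening idea for a single spin: minimize the empirical screening objective S_i(theta) = mean over samples of exp(-s_i * sum_j theta_j s_j) by gradient descent. Regularization and model selection, which the method uses for structure recovery, are omitted, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
n, M = 6, 20_000
J_true = np.zeros((n, n)); J_true[0, 1] = J_true[1, 0] = 0.8   # one true coupling
# Gibbs-sample spin configurations from the Ising model
s = rng.choice([-1, 1], size=(M, n))
for _ in range(30):
    for i in range(n):
        field = s @ J_true[i]                      # J_true[i, i] = 0
        p_up = 1 / (1 + np.exp(-2 * field))        # P(s_i = +1 | rest)
        s[:, i] = np.where(rng.random(M) < p_up, 1, -1)

i = 0                                              # reconstruct couplings of spin 0
mask = np.arange(n) != i
theta = np.zeros(n - 1)
for _ in range(500):
    w = np.exp(-s[:, i] * (s[:, mask] @ theta))    # screening weights
    grad = -np.mean((w * s[:, i])[:, None] * s[:, mask], axis=0)
    theta -= 0.5 * grad                            # convex objective; minimizer ~ J
print("estimated couplings of spin 0:", np.round(theta, 2))
```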
Optimization of HPV DNA detection in urine by improving collection, storage, and extraction.
Vorsters, A; Van den Bergh, J; Micalessi, I; Biesmans, S; Bogers, J; Hens, A; De Coster, I; Ieven, M; Van Damme, P
2014-11-01
The benefits of using urine for the detection of human papillomavirus (HPV) DNA have been evaluated in disease surveillance, epidemiological studies, and screening for cervical cancers in specific subgroups. HPV DNA testing in urine is being considered for important purposes, notably the monitoring of HPV vaccination in adolescent girls and young women who do not wish to have a vaginal examination. The need to optimize and standardize sampling, storage, and processing has been reported. In this paper, we examined the impact of a DNA-conservation buffer, the extraction method, and urine sampling on the detection of HPV DNA and human DNA in urine provided by 44 women with a cytologically normal but HPV DNA-positive cervical sample. Ten women provided first-void and midstream urine samples. DNA analysis was performed using real-time PCR to allow quantification of HPV and human DNA. The results showed that an optimized method for HPV DNA detection in urine should (a) prevent DNA degradation during extraction and storage, (b) recover cell-free HPV DNA in addition to cell-associated DNA, (c) process a sufficient volume of urine, and (d) use a first-void sample. In addition, we found that detectable human DNA in urine may not be a good internal control for sample validity. HPV prevalence data that are based on urine samples collected, stored, and/or processed under suboptimal conditions may underestimate infection rates.
Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S
2013-07-01
Coumestan wedelolactone is an important phytocomponent of Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study was to develop and optimize supercritical carbon dioxide-assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the supercritical carbon dioxide sample preparation, investigating the quantitative effects of the preparation parameters, viz. operating pressure, temperature, modifier concentration, and time, on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC method. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed with appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. The optimum conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, in good agreement with the predicted values. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
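A minimal sketch of the RSM workflow above: fit a full quadratic model to designed-experiment data by least squares, then locate the optimum within the factor bounds. The data are synthetic, with a true optimum placed near the reported conditions purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
X = rng.uniform([15, 40, 0, 20], [35, 70, 15, 80], size=(30, 4))  # P, T, mod%, time

def true_yield(x):   # hypothetical response surface
    return 15 - 0.01*(x[:, 0]-25)**2 - 0.005*(x[:, 1]-56)**2 \
              - 0.05*(x[:, 2]-9.4)**2 - 0.001*(x[:, 3]-60)**2

y = true_yield(X) + rng.normal(0, 0.1, len(X))

def features(X):     # full quadratic model: 1, x, x^2, pairwise interactions
    cols = [np.ones(len(X))] + [X[:, j] for j in range(4)] \
         + [X[:, j]**2 for j in range(4)] \
         + [X[:, j]*X[:, k] for j in range(4) for k in range(j+1, 4)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
pred = lambda x: features(np.atleast_2d(x)) @ beta
opt = minimize(lambda x: -pred(x)[0], x0=[25, 55, 8, 50],
               bounds=[(15, 35), (40, 70), (0, 15), (20, 80)])
print("estimated optimum (P, T, mod%, time):", np.round(opt.x, 1))
```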
Bates, Timothy C.
2015-01-01
Optimism and pessimism are associated with important outcomes including health and depression. Yet it is unclear if these apparent polar opposites form a single dimension or reflect two distinct systems. The extent to which personality accounts for differences in optimism/pessimism is also controversial. Here, we addressed these questions in a genetically informative sample of 852 pairs of twins. Distinct genetic influences on optimism and pessimism were found. Significant family-level environment effects also emerged, accounting for much of the negative relationship between optimism and pessimism, as well as a link to neuroticism. A general positive genetics factor exerted significant links among both personality and life-orientation traits. Both optimism bias and pessimism also showed genetic variance distinct from all effects of personality, and from each other. PMID:26561494
Fast simulation of packet loss rates in a shared buffer communications switch
NASA Technical Reports Server (NTRS)
Chang, Cheng-Shang; Heidelberger, Philip; Shahabuddin, Perwez
1993-01-01
This paper describes an efficient technique for estimating, via simulation, the probability of buffer overflows in a queueing model that arises in the analysis of ATM (Asynchronous Transfer Mode) communication switches. There are multiple streams of (autocorrelated) traffic feeding the switch that has a buffer of finite capacity. Each stream is designated as either being of high or low priority. When the queue length reaches a certain threshold, only high priority packets are admitted to the switch's buffer. The problem is to estimate the loss rate of high priority packets. An asymptotically optimal importance sampling approach is developed for this rare event simulation problem. In this approach, the importance sampling is done in two distinct phases. In the first phase, an importance sampling change of measure is used to bring the queue length up to the threshold at which low priority packets get rejected. In the second phase, a different importance sampling change of measure is used to move the queue length from the threshold to the buffer capacity.
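A hedged toy version of the change-of-measure idea on a single-priority queue: estimate the probability that the queue reaches level B before emptying by swapping arrival and service probabilities (the standard exponential twist for this embedded random walk) and correcting with the likelihood ratio. The two-phase, two-priority scheme of the paper is not reproduced here:

```python
import numpy as np

def overflow_prob(p_arr, B, n_runs=100_000, seed=7):
    rng = np.random.default_rng(seed)
    q_arr = 1 - p_arr          # service probability; stable queue needs p_arr < 0.5
    est = 0.0
    for _ in range(n_runs):
        level, lr = 1, 1.0
        while 0 < level < B:
            if rng.random() < q_arr:        # IS measure: arrive with prob q_arr
                level += 1; lr *= p_arr / q_arr
            else:                           # IS measure: depart with prob p_arr
                level -= 1; lr *= q_arr / p_arr
        est += lr if level == B else 0.0
    return est / n_runs

# exact answer for this gambler's-ruin walk: (r - 1) / (r**B - 1), r = (1-p)/p
print(f"P(overflow) ≈ {overflow_prob(0.3, B=20):.3e}")
```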
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
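The fusion step admits a compact illustration: for independent unbiased estimators of the same quantity, inverse-variance weights minimize the variance of the combined estimate. The numbers below are hypothetical, not results from the work:

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance combination of independent unbiased estimators."""
    inv_var = 1.0 / np.asarray(variances)
    w = inv_var / inv_var.sum()
    return np.dot(w, estimates), 1.0 / inv_var.sum()

# Hypothetical multi-fidelity IS estimators (value, variance) of the same P_f
ests, vars_ = [9.1e-4, 1.2e-3, 1.0e-3], [4e-8, 9e-8, 2.5e-8]
p_f, var = fuse(ests, vars_)
print(f"fused estimate {p_f:.2e} (std {np.sqrt(var):.1e})")
```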
Establishment of the optimum two-dimensional electrophoresis system of ovine ovarian tissue.
Jia, J L; Zhang, L P; Wu, J P; Wang, J; Ding, Q
2014-08-26
Lambing performance of sheep is the most important economic trait and is regarded as a critical factor affecting productivity in the sheep industry, and the ovary plays a central role in this trait. To establish the optimum two-dimensional electrophoresis (2-DE) system for ovine ovarian tissue, common protein extraction methods for animal tissue (trichloroacetic acid/acetone precipitation and direct schizolysis) were used to extract ovine ovarian protein, and 17-cm nonlinear immobilized pH 3-10 gradient strips were used for 2-DE. The sample handling, protein loading quantity, and isoelectric focusing (IEF) steps were manipulated and optimized in this study. The results indicate that the direct schizolysis III method, a 200-μg protein load, and IEF program II (20°C active hydration, 14 h → 500 V, 1 h → 1000 V, 1 h → 1000-9000 V, 6 h → 80,000 Vh → 500 V, 24 h) are optimal for 2-DE analysis of ovine ovarian tissue. A 2-DE protocol for ovine ovarian tissue proteomics was thus preliminarily established under these optimized conditions; the conditions identified herein could also serve as a reference for ovarian sample preparation and 2-DE using tissues from other animals.
NASA Astrophysics Data System (ADS)
Mann, Erin
Both industrial and commercial entities are increasingly using lightweight composites. Fillers, such as fibers, nanofibers, and other nanoconstituents in polymer matrix composites, have been proven to enhance the properties of composites and are still being studied in order to optimize the benefits. Further optimization can be pursued during the manufacturing process. The air permeability during the out-of-autoclave vacuum-bag-only (OOA-VBO) cure method is an important property to understand when optimizing manufacturing processes. Changes in the manufacturing process can improve or degrade composite quality, depending on the ability of the composite to evacuate gases such as air and moisture during curing. Therefore, in this study, the axial permeability of a prepreg stack was studied experimentally. Three types of samples were studied: control (no carbon nanofiber (CNF) modification), unaligned CNF-modified, and aligned CNF-modified samples.
Amaro, Rosa; Murillo, Miguel; González, Zurima; Escalona, Andrés; Hernández, Luís
2009-01-01
The treatment of wheat samples was optimized before the determination of phytic acid by high-performance liquid chromatography with refractive index detection. Drying by lyophilization and oven drying were studied; lyophilization gave better results, confirming that this step is critical in preventing significant loss of analyte. In the extraction step, washing of the residue and collection of this wash water before retention of the phytates on the NH2 Sep-Pak cartridge were important. The retention of phytates on the NH2 Sep-Pak cartridge and elimination of the HCl did not produce significant loss (P = 0.05) in the phytic acid content of the sample. Recoveries of phytic acid averaged 91%, which is a substantial improvement with respect to values reported by others using this methodology.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard, whether exact algorithms or approximate methods are used. For very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then propose an improved importance sampling algorithm, called linear Gaussian importance sampling (LGIS), for general hybrid models. LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods, such as junction tree (JT) and likelihood weighting (LW), shows that LGIS is very promising.
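For reference, a minimal sketch of likelihood weighting, the baseline the paper compares against, on a toy two-node network; the network and its probabilities are illustrative:

```python
# Likelihood weighting: evidence nodes are clamped and each sample is
# weighted by the probability of the evidence given its sampled parents.
# Toy network: Rain -> WetGrass, query P(Rain | WetGrass = true).
import random

P_rain = 0.2
P_wet_given = {True: 0.9, False: 0.1}        # P(WetGrass=true | Rain)

def likelihood_weighting(n=100_000, evidence_wet=True, seed=8):
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < P_rain          # sample non-evidence nodes
        w = P_wet_given[rain] if evidence_wet else 1 - P_wet_given[rain]
        num += w * rain
        den += w
    return num / den

print(f"P(Rain | wet) ≈ {likelihood_weighting():.3f}")   # exact: 0.18/0.26 ≈ 0.692
```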
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
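A short sketch of the variable-importance selection described, using scikit-learn on synthetic stand-ins for the MODIS bands; the "keep about half" rule mirrors the finding above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n, n_bands = 2000, 40                      # e.g., 10-day composites x bands
X = rng.normal(size=(n, n_bands))
y = (X[:, 3] + 0.8 * X[:, 17] - 0.5 * X[:, 30] > 0).astype(int)  # few informative bands

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
ranked = np.argsort(rf.feature_importances_)[::-1]
half = ranked[: n_bands // 2]              # keep about half the variables
rf_half = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr[:, half], ytr)
print("top bands:", ranked[:5])
print("full-model vs half-model test accuracy:",
      rf.score(Xte, yte), rf_half.score(Xte[:, half], yte))
```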
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimize a response by tuning those factors, design of experiments (DoE) provides a powerful suite of statistical methods. In method development, DoE identifies significant factors and then optimizes a response with respect to them. In this work, a headspace solid-phase micro-extraction (HS-SPME) methodology combined with gas chromatography tandem mass spectrometry (GC-MS/MS) for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT), and triphenyltin (TPhT), has been optimized using a statistical design of experiments. The analytical method is based on ethylation with NaBEt4 and simultaneous headspace solid-phase micro-extraction of the derivatized compounds, followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration, and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time, and desorption time in the injector. The main factors affecting the GC-IT-MS/MS response (excitation voltage, excitation time, ion source temperature, isolation time, and electron energy) were also optimized using the same statistical design of experiments. The proposed method presented good linearity (coefficient of determination R² > 0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
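A minimal sketch of the screening stage of such a DoE: a two-level full factorial in coded units with main effects estimated by contrasts. The three factor names are taken from the abstract, but the design size and response values are synthetic:

```python
import numpy as np
from itertools import product

factors = ["extraction time", "sample salinity", "desorption temp"]
design = np.array(list(product([-1, 1], repeat=3)))      # 2^3 runs, coded units
rng = np.random.default_rng(10)
response = 100 + 8*design[:, 0] + 3*design[:, 1] - 1*design[:, 2] \
             + rng.normal(0, 1, len(design))             # synthetic peak areas

# Main effect = mean(response at high level) - mean(response at low level)
main_effects = design.T @ response / (len(design) / 2)
for name, eff in zip(factors, main_effects):
    print(f"{name:>16}: effect = {eff:+.1f}")
```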
West, Joyce C; Pingitore, David; Zarin, Deborah A
2002-12-01
This study assessed characteristics of psychiatric patients for whom financial considerations affected the provision of "optimal" treatment. Psychiatrists reported that for 33.8 percent of 1,228 patients from a national sample, financial considerations such as managed care limitations, the patient's personal finances, and limitations inherent in the public care system adversely affected the provision of optimal treatment. Patients were more likely to have their treatment adversely affected by financial considerations if they were more severely ill, had more than one behavioral health disorder or a psychosocial problem, or were receiving treatment under managed care arrangements. Patients for whom financial considerations affect the provision of optimal treatment represent a population for whom access to treatment may be particularly important.
Souza, C A; Oliveira, T C; Crovella, S; Santos, S M; Rabêlo, K C N; Soriano, E P; Carvalho, M V D; Junior, A F Caldas; Porto, G G; Campello, R I C; Antunes, A A; Queiroz, R A; Souza, S M
2017-04-28
The use of Y chromosome haplotypes, important for the detection of sexual crimes in forensics, has gained prominence with the use of databases that incorporate these genetic profiles. Here, we optimized and validated an amplification protocol for Y chromosome profile retrieval in reference samples using less material than specified for commercial kits. FTA® cards (Flinders Technology Associates) were used to support the oral cells of male individuals, which were amplified directly using the SwabSolution reagent (Promega). First, we optimized and validated the process to define the volume and cycling conditions. Three reference samples and nineteen 1.2-mm-diameter perforated discs were used per sample. Amplification of one or two discs (samples) with the PowerPlex® Y23 kit (Promega) was performed using 25, 26, and 27 thermal cycles. For the controls, 20%, 32%, and 100% reagent volumes were used with one disc and 26 cycles per sample. Thereafter, all samples (N = 270) were amplified using 27 cycles, one disc, and 32% reagents (optimized conditions). Data were analyzed by studying the balance between fluorophore colors. In the samples analyzed with 20% volume, an imbalance was observed in peak heights, both within and between dyes. In samples amplified with 32% reagents, the intra-color and inter-color balance values used to verify the quality of the analyzed peaks were similar to those of samples amplified with 100% of the recommended volume. The quality of the profiles obtained with 32% reagents was suitable for insertion into databases.
NASA Astrophysics Data System (ADS)
Senba, Y.; Nagasono, M.; Koyama, T.; Yumoto, H.; Ohashi, H.; Tono, K.; Togashi, T.; Inubushi, Y.; Sato, T.; Yabashi, M.; Ishikawa, T.
2013-03-01
Optimization of focusing conditions is important in free-electron laser applications. A time-of-flight mass analyzer has been designed and constructed for this purpose. Time-of-flight spectra of ionic species produced by laser ablation of gold were measured. The yields of ionic species showed strong correlations with free-electron-laser intensity. This method conveniently allows direct estimation of the laser intensity on the sample and determination of the focusing position.
Entropic Comparison of Atomic-Resolution Electron Tomography of Crystals and Amorphous Materials.
Collins, S M; Leary, R K; Midgley, P A; Tovey, R; Benning, M; Schönlieb, C-B; Rez, P; Treacy, M M J
2017-10-20
Electron tomography bears promise for widespread determination of the three-dimensional arrangement of atoms in solids. However, it remains unclear whether methods successful for crystals are optimal for amorphous solids. Here, we explore the relative difficulty encountered in atomic-resolution tomography of crystalline and amorphous nanoparticles. We define an informational entropy to reveal the inherent importance of low-entropy zone-axis projections in the reconstruction of crystals. In turn, we propose considerations for optimal sampling for tomography of ordered and disordered materials.
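A hedged sketch of the entropy comparison: Shannon entropy of projected atom positions for a crystalline versus a random (amorphous-like) point set. The histogram entropy used here is a simple assumed definition for illustration; the paper's exact formulation is not given in the abstract:

```python
import numpy as np

def projection_entropy(points, angle, bins=64):
    axis = np.array([np.cos(angle), np.sin(angle)])
    proj = points @ axis                      # 1-D projection of atom positions
    p, _ = np.histogram(proj, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

grid = np.array([(i, j) for i in range(20) for j in range(20)], float)
amorph = np.random.default_rng(11).uniform(0, 19, (400, 2))
for name, pts in [("crystal", grid), ("amorphous", amorph)]:
    ents = [projection_entropy(pts, a) for a in np.linspace(0, np.pi, 90)]
    print(f"{name}: min entropy {min(ents):.2f} (zone-axis-like), max {max(ents):.2f}")
```

The crystal's low-entropy projections along lattice axes mirror the "inherent importance of low-entropy zone-axis projections" noted above; the random set has no such special directions.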
Dispositional optimism and sleep quality: a test of mediating pathways
Cribbet, Matthew; Kent de Grey, Robert G.; Cronan, Sierra; Trettevik, Ryan; Smith, Timothy W.
2016-01-01
Dispositional optimism has been related to beneficial influences on physical health outcomes. However, its links to global sleep quality and the psychological mediators responsible for such associations are less studied. This study thus examined if trait optimism predicted global sleep quality, and if measures of subjective well-being were statistical mediators of such links. A community sample of 175 participants (93 men, 82 women) completed measures of trait optimism, depression, and life satisfaction. Global sleep quality was assessed using the Pittsburgh Sleep Quality Index. Results indicated that trait optimism was a strong predictor of better PSQI global sleep quality. Moreover, this association was mediated by depression and life satisfaction in both single and multiple mediator models. These results highlight the importance of optimism for the restorative process of sleep, as well as the utility of multiple mediator models in testing distinct psychological pathways. PMID:27592128
Incorporation of physical constraints in optimal surface search for renal cortex segmentation
NASA Astrophysics Data System (ADS)
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2012-02-01
In this paper, we propose a novel approach for multiple-surface segmentation based on the incorporation of physical constraints into optimal surface search. We apply the new approach to renal cortex segmentation, an important but insufficiently researched problem. In this study, to better handle the similar intensities of the renal cortex and renal column, we extend the optimal surface search approach to allow varying sampling distances and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex column is computed according to the sparsity of the local triangular mesh. A physical constraint learned from a priori renal cortex thickness is then applied to the inter-surface arcs as the separation constraint. Appropriate varying sampling distances and separation constraints were learned from 6 clinical CT images. After training, the proposed approach was tested on a set of 10 images, with manual segmentation of the renal cortex as the reference standard. Quantitative analysis indicates that overall segmentation accuracy increased after introducing the varying sampling distance and physical separation constraints: the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, with varying sampling distance and physical separation constraints, compared to 74.10% and 0.18%, respectively, with fixed sampling distance and numerical separation constraints. The experimental results demonstrate the effectiveness of the proposed approach.
Sampling plans for pest mites on physic nut.
Rosado, Jander F; Sarmento, Renato A; Pedro-Neto, Marçal; Galdino, Tarcísio V S; Marques, Renata V; Erasmo, Eduardo A L; Picanço, Marcelo C
2014-08-01
The starting point for generating a pest-control decision-making system is a conventional sampling plan. Because the mites Polyphagotarsonemus latus and Tetranychus bastosi are among the most important pests of physic nut (Jatropha curcas), the present study aimed to establish sampling plans for these mite species on physic nut. Mite densities were monitored in 12 physic nut crops. Based on the results, sampling of P. latus and T. bastosi should be performed by assessing the number of mites per cm² in 160 samples using a handheld 20× magnifying glass. The optimal sampling region for T. bastosi is the abaxial surface of the 4th most apical leaf on a branch in the middle third of the canopy, where it should be observed on the side parts of the middle portion of the leaf, near its edge. For P. latus, the optimal sampling region is the abaxial surface of the 4th most apical leaf on a branch in the apical third of the canopy, where it should be assessed on the side parts near the petiole insertion. Each sampling procedure requires 4 h and costs US$ 7.31.
Optimism, well-being, and perceived stigma in individuals living with HIV.
Ammirati, Rachel J; Lamis, Dorian A; Campos, Peter E; Farber, Eugene W
2015-01-01
Given the significant psychological challenges posed by HIV-related stigma for individuals living with HIV, investigating psychological resource factors for coping with HIV-related stigma is important. Optimism, which refers to generalized expectations regarding favorable outcomes, has been associated with enhanced psychological adaptation to health conditions, including HIV. Therefore, this cross-sectional study investigated associations among optimism, psychological well-being, and HIV stigma in a sample of 116 adults living with HIV and seeking mental health services. Consistent with study hypotheses, optimism was positively associated with psychological well-being, and psychological well-being was negatively associated with HIV-related stigma. Moreover, results of a full structural equation model suggested a mediation pattern such that as optimism increases, psychological well-being increases, and perceived HIV-related stigma decreases. The implications of these findings for clinical interventions and future research are discussed.
Çiftçi, Tülin Deniz; Henden, Emur
2016-08-01
Arsenic in drinking water is a serious problem for human health. Since the toxicities of the arsenic species As(III) and As(V) differ, it is important to determine their concentrations separately, and an accurate and sensitive method for the speciation of arsenic is therefore needed. This work aimed to determine the concentrations of arsenic species in water samples collected from Izmir, Manisa, and nearby areas. A batch-type hydride generation atomic absorption spectrometer was used; As(V) gave no signal under the optimal measurement conditions for As(III). A certified reference drinking water was analyzed by the method, and the results showed excellent agreement with the reported values. The procedure was applied to 34 water samples: eleven tap water, two spring water, 19 artesian well water, and two thermal water samples were analyzed under the optimal conditions.
Lee, Kuo Hao; Chen, Jianhan
2017-06-15
Accurate treatment of the solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as with the generalized Born (GB) class of models, arguably provides an optimal balance between computational efficiency and physical accuracy. Yet GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit-solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of error cancellation. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage the improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias and can generate structural ensembles of several intrinsically disordered proteins of various lengths that are highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.
Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir
2017-03-01
Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF can encode and decode different experimental conditions or samples within the same experiment, facilitating progress toward straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding, used in conjunction with mass cytometry for clinical bioanalysis samples, is described, and results of our barcoding protocol optimization are presented. In addition, we discuss points to consider in order to minimize the variability of quantitative mass cytometry measurements, for example, the importance of having multiple populations during titration of the antibodies and the effect of storage and shipping of labelled samples on staining stability for CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped at lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.
2016-11-28
agents. The use of laser therapy to kill bacteria or to enhance the bactericidal activity of conventional antibiotics has been previously reported in the … biofilm samples and activation of the laser. This ensured highly accurate alignment of the samples with the laser beam and reduced variability in the laser … active against biofilm infections are urgently needed. The data presented here provide important information on optimization and successful …
ERIC Educational Resources Information Center
Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael
2016-01-01
Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity via sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting, where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
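A compact sketch of the two-stage regularized estimator's shape, using a lasso in both stages on synthetic data with many instruments; the penalty levels and data-generating process are illustrative, not the paper's:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(12)
n, p_inst = 200, 500                       # more instruments than samples
Z = rng.normal(size=(n, p_inst))
u = rng.normal(size=n)                     # unobserved confounder
x = Z[:, 0] + 0.8 * Z[:, 1] + u + rng.normal(size=n)    # endogenous exposure
y = 2.0 * x + u + rng.normal(size=n)       # true causal effect = 2

x_hat = Lasso(alpha=0.1).fit(Z, x).predict(Z)           # stage 1: select instruments
stage2 = Lasso(alpha=0.01).fit(x_hat[:, None], y)       # stage 2: outcome model
print(f"estimated effect: {stage2.coef_[0]:.2f} "
      "(naive regression of y on x would be biased by the confounder)")
```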
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) specifies the 1MDS filter for primary concentration of viruses from drinking and surface waters, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. To achieve the highest viral recoveries, filtration methods require identification of the optimal concentration conditions, which are unique to each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both the virus elution solutions and the sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing virus levels in sample concentrates to those in the initial input. Adenovirus recovery was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. An elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was most effective overall for both AdV types. Under these conditions, the average recoveries for AdV40 and AdV41 were 49% and 60%, respectively. By optimizing the secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared with the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses in both surface and drinking waters. Published by Elsevier B.V.
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to … regime for the optimization algorithm. 1 Introduction. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs … appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample …
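A minimal SAA sketch on a newsvendor-style problem: the expectation is replaced by a sample average over N scenario draws and handed to a standard optimizer. All parameters are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(13)
demand = rng.lognormal(mean=3.0, sigma=0.5, size=2000)   # N scenario draws

def saa_cost(x, c_over=1.0, c_under=4.0):
    """Sample average of overage/underage cost, replacing E[F(x, xi)]."""
    return np.mean(c_over * np.maximum(x - demand, 0) +
                   c_under * np.maximum(demand - x, 0))

res = minimize_scalar(saa_cost, bounds=(0, 100), method="bounded")
print(f"SAA-optimal order quantity: {res.x:.1f}")
# A larger N sharpens the estimate; the trade-off between N and optimizer
# effort under a fixed computing budget is exactly what the report analyzes.
```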
Dual-mode nested search method for categorical uncertain multi-objective optimization
NASA Astrophysics Data System (ADS)
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. This article therefore proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search for Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of the categorical candidates. To improve efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of the objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
An experimental prototype of a field gamma-spectrometer based on a solid-state Si photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. The optimization criteria and methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimal selection of the expected (predetermined) signals under the given observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target signal.
Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies
ERIC Educational Resources Information Center
Rhoads, Christopher H.; Dye, Charles
2016-01-01
An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure, changes in sample size at different levels of the design will impact precision differently. Furthermore, there…
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs. rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Furton, Kenneth G.; Almirall, Jose R.; Wang, Jing
1999-02-01
In this paper, we present data comparing a variety of different conditions for extracting ignitable liquid residues from simulated fire debris samples in order to optimize the conditions for solid phase microextraction (SPME). A simulated accelerant mixture containing 30 components, including those from light, medium, and heavy petroleum distillates, was used to study the important variables controlling SPME recoveries. SPME is an inexpensive, rapid, and sensitive method for the analysis of volatile residues from the headspace over solid debris samples in a container, or directly from aqueous samples, followed by GC. The relative effects of controllable variables, including fiber chemistry, adsorption and desorption temperature, extraction time, and desorption time, have been optimized. The addition of water and ethanol to simulated debris samples in a can was shown to increase sensitivity when using headspace SPME extraction. The relative enhancement of sensitivity has been compared as a function of hydrocarbon chain length, sample temperature, time, and added ethanol concentration. The technique has also been optimized for the extraction of accelerants directly from water added to the fire debris samples. The optimum adsorption time for the low molecular weight components was found to be approximately 25 minutes. The high molecular weight components were found at higher concentrations the longer the fiber was exposed to the headspace (up to 1 h). The higher molecular weight components were also found at higher concentrations in the headspace when water and/or ethanol was added to the debris.
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention as a means to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles. Thus, to pool results in a consistent format, researchers need to transform that information back into the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g., in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size through a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve on the existing ones significantly but also retain their simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
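As a rough illustration of the kind of smoothly weighted estimator discussed above, the sketch below estimates a trial's mean from its minimum, median, maximum, and sample size. The weight formula is the scenario-1 estimator attributed to Luo et al. as best recalled here; treat the exact coefficients as an assumption to verify against the original paper before use.

import numpy as np

def mean_from_median_range(a, m, b, n):
    """Estimate the sample mean from the minimum (a), median (m),
    maximum (b), and sample size (n) reported by a trial.

    Uses a smooth, sample-size-dependent weight between the midrange
    and the median (scenario-1 estimator of Luo et al., as recalled;
    verify the coefficients against the source)."""
    w = 4.0 / (4.0 + n ** 0.75)          # weight on the midrange shrinks as n grows
    return w * (a + b) / 2.0 + (1.0 - w) * m

# Hozo et al.'s simple rule, for comparison: (a + 2*m + b) / 4
if __name__ == "__main__":
    # hypothetical trial summary: min=2.1, median=5.0, max=9.3, n=40
    print(round(mean_from_median_range(2.1, 5.0, 9.3, 40), 3))

For large n the weight w tends to zero, so the estimate approaches the median, which matches the intuition that the midrange becomes unreliable as extreme values spread out.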
Djuris, J; Vasiljevic, D; Jokic, S; Ibric, S
2014-02-01
This study investigates the application of D-optimal mixture experimental design in the optimization of O/W cosmetic emulsions. Cetearyl glucoside was used as a natural, biodegradable non-ionic emulsifier at a relatively low concentration (1%), and a mixture of co-emulsifiers (stearic acid, cetyl alcohol, stearyl alcohol, and glyceryl stearate) was used to stabilize the formulations. To determine the optimal composition of the co-emulsifier mixture, a D-optimal mixture experimental design was used. The prepared emulsions were characterized by rheological measurements, centrifugation testing, and specific conductivity and pH measurements. All prepared samples appeared as white and homogeneous creams, except for one homogeneous and viscous lotion co-stabilized by stearic acid alone. Centrifugation testing revealed some phase separation only in the case of the sample co-stabilized by glyceryl stearate alone. The obtained pH values indicated that all samples exhibited mildly acidic values acceptable for cosmetic preparations. The specific conductivity values were consistent with multiple-phase O/W emulsions with high percentages of fixed water. Rheological measurements showed that the investigated samples exhibited non-Newtonian thixotropic behaviour. To determine the influence of each co-emulsifier on emulsion properties, the obtained results were evaluated by means of statistical analysis (ANOVA). On the basis of a comparison of statistical parameters for each of the studied responses, a reduced quadratic mixture model was selected over the linear model, implying that interactions between co-emulsifiers play a significant role in the overall influence of co-emulsifiers on emulsion properties. Glyceryl stearate was found to be the dominant co-emulsifier affecting emulsion properties. Interactions between glyceryl stearate and the other co-emulsifiers were also found to significantly influence emulsion properties. These findings are especially important as they can be used for the development of products that meet users' requirements, as represented in the study. © 2013 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems with the standard particle filter (PF) need to be addressed to improve such a system: the degeneracy phenomenon and sample impoverishment, in which the limited number of samples cannot adequately represent the true probability density function. This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
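The chaos perturbation step described above can be illustrated with a logistic map. The following sketch is not the authors' code, and the map seed and parameters are assumptions; it simply generates a chaotic sequence and rescales it onto the search interval of an optimization variable to reseed particles.

import numpy as np

def chaotic_candidates(n, lo, hi, x0=0.63, r=4.0):
    """Generate n candidate values in [lo, hi] from a logistic map.

    The logistic map x <- r*x*(1-x) with r=4 is chaotic on (0, 1);
    mapping its iterates onto the variable's bounds yields diverse
    perturbation points that can help PSO escape local optima."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)            # logistic map iteration
        xs[i] = x
    return lo + xs * (hi - lo)           # rescale to [lo, hi]

# Example: reseed 5 particles of a 1-D variable bounded by [-2, 2]
print(chaotic_candidates(5, -2.0, 2.0))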
Coping with occupational stress: the role of optimism and coping flexibility.
Reed, Daniel J
2016-01-01
The current study aimed at measuring whether coping flexibility is a reliable and valid construct in a UK sample and subsequently investigating the association between coping flexibility, optimism, and psychological health - measured by perceived stress and life satisfaction. A UK university undergraduate student sample (N=95) completed an online questionnaire. The study is among the first to examine the validity and reliability of the English version of a scale measuring coping flexibility in a Western population and is also the first to investigate the association between optimism and coping flexibility. The results revealed that the scale had good reliability overall; however, factor analysis revealed no support for the existing two-factor structure of the scale. Coping flexibility and optimism were found to be strongly correlated, and hierarchical regression analyses revealed that the interaction between them predicted a large proportion of the variance in both perceived stress and life satisfaction. In addition, structural equation modeling revealed that optimism completely mediated the relationship between coping flexibility and both perceived stress and life satisfaction. The findings add to the occupational stress literature to further our understanding of how optimism is important in psychological health. Furthermore, given that optimism is a personality trait, and consequently relatively stable, the study also provides preliminary support for the potential of targeting coping flexibility to improve psychological health in Western populations. These findings must be replicated, and further analyses of the English version of the Coping Flexibility Scale are needed.
Han, Yanxi; Li, Jinming
2017-10-26
In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer (NSCLC). The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation, and preservation procedures, the heterogeneity of inter- or intratumors, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them individually offers a "one-size-fits-all" solution. Therefore, in this review, we summarize the strengths and weaknesses of the different sample types that are widely used in clinical practice, offer solutions to reduce the negative impact of the samples, and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of a 3D ITPS (solid) model. To reduce the computational expense of the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, homogenization was found to be applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel, so a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points, which further demands computationally expensive finite element analyses; these were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed toward target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was again reduced by employing surrogate models. To estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
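As a generic illustration of the Monte Carlo reliability step mentioned above (not the dissertation's actual model), the sketch below estimates a probability of failure from a cheap surrogate limit-state function and uses bootstrapping to quantify the sampling error of that estimate. The limit-state function and input distributions are made-up placeholders.

import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    """Hypothetical surrogate limit state: failure occurs when g(x) < 0."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

# Sample the uncertain inputs (placeholder standard-normal variables)
n = 100_000
x = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
fail = limit_state(x) < 0.0
pf = fail.mean()                       # crude Monte Carlo estimate of P_f

# Bootstrap the failure indicators to estimate the error in pf
boot = rng.choice(fail, size=(200, n), replace=True).mean(axis=1)
print(f"pf ~ {pf:.4f}, bootstrap std. error ~ {boot.std(ddof=1):.4f}")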
Wu, Yiman; Li, Liang
2012-12-18
For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C-labeled individual sample and the (13)C-labeled pooled urine standard were mixed for LC-MS analysis. This way of concentration normalization among different samples with varying concentrations of total metabolites was found to be critical for generating reliable metabolome profiles for comparison.
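The normalization step described above reduces to proportional arithmetic: given each sample's LC-UV-determined total labeled-metabolite concentration, the volume drawn for the pool is scaled so that every sample contributes an equal metabolite amount. A minimal sketch follows; the sample names and numbers are invented.

# Scale pooling volumes so each urine sample contributes the same
# total amount of labeled metabolites (amount = concentration * volume).
concentrations = {"urine_A": 2.4, "urine_B": 1.2, "urine_C": 3.0}  # mM, hypothetical
target_amount = 0.6  # micromoles of labeled metabolites per sample, hypothetical

volumes_uL = {
    name: 1000.0 * target_amount / conc   # uL = 1000 * umol / (umol/mL)
    for name, conc in concentrations.items()
}
for name, v in volumes_uL.items():
    print(f"{name}: take {v:.0f} uL for the pooled standard")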
Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao
2018-01-09
River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, with the Hun River taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimization method was shown to be feasible, efficient, and economic.
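A toy version of the redundancy idea above: treat each site as a vector of mean water-quality indicators, compute pairwise Euclidean distances, and flag the closest pair as candidates for merging. The site names and values are invented, and the study's actual procedure also uses attainment rates and optimal partition analysis.

import numpy as np

# Rows: monitoring sites; columns: mean NH4+-N, COD, BOD (invented values)
sites = ["S1", "S2", "S3", "S4"]
profile = np.array([[1.2, 18.0, 4.1],
                    [1.3, 17.5, 4.0],
                    [0.4,  9.0, 2.2],
                    [2.6, 25.0, 6.3]])

# Standardize columns so no single indicator dominates the distance
z = (profile - profile.mean(axis=0)) / profile.std(axis=0)
d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)

# Mask the diagonal, then locate the most similar (most redundant) pair
i, j = np.unravel_index(np.argmin(d + np.eye(len(sites)) * 1e9), d.shape)
print(f"most redundant pair: {sites[i]} and {sites[j]} (d={d[i, j]:.2f})")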
A quantitative evaluation of two methods for preserving hair samples
Roon, David A.; Waits, L.P.; Kendall, K.C.
2003-01-01
Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and −20°C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.
Fernández, Jesús; Toro, Miguel Á; Sonesson, Anna K; Villanueva, Beatriz
2014-01-01
The success of an aquaculture breeding program critically depends on the way in which the base population of breeders is constructed since all the genetic variability for the traits included originally in the breeding goal as well as those to be included in the future is contained in the initial founders. Traditionally, base populations were created from a number of wild strains by sampling equal numbers from each strain. However, for some aquaculture species improved strains are already available and, therefore, mean phenotypic values for economically important traits can be used as a criterion to optimize the sampling when creating base populations. Also, the increasing availability of genome-wide genotype information in aquaculture species could help to refine the estimation of relationships within and between candidate strains and, thus, to optimize the percentage of individuals to be sampled from each strain. This study explores the advantages of using phenotypic and genome-wide information when constructing base populations for aquaculture breeding programs in terms of initial and subsequent trait performance and genetic diversity level. Results show that a compromise solution between diversity and performance can be found when creating base populations. Up to 6% higher levels of phenotypic performance can be achieved at the same level of global diversity in the base population by optimizing the selection of breeders instead of sampling equal numbers from each strain. The higher performance observed in the base population persisted during 10 generations of phenotypic selection applied in the subsequent breeding program.
Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.
Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza
2017-10-01
An interesting mode of chromatography for the preparation of pure enantiomers from pure racemic samples is stacked injection, a pseudocontinuous procedure. Maximum throughput and minimal production costs can be achieved by using the total chiral column length in this mode of chromatography. To maximize sample loading, touching bands of the two enantiomers are often automatically achieved. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for one who wants to obtain the highest throughput is: "How should the different factors, including selectivity, resolution, run time, and sample loading, be optimized to save time without losing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethylphenyl carbamate as two pure racemic mixtures with low and high solubility in the mobile phase, respectively. The mobile phase consisted of n-hexane with an alcohol modifier and diethylamine as the additive. A response surface methodology based on central composite design was used to optimize the separation factors against the main responses. According to the properties of stacked injection, two processes were investigated for maximizing throughput: one with a poorly soluble and another with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. It was revealed that resolution is a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures that have low solubility in the mobile phase. © 2017 Wiley Periodicals, Inc.
Wildlife Conservation Planning Using Stochastic Optimization and Importance Sampling
Robert G. Haight; Laurel E. Travis
1997-01-01
Formulations for determining conservation plans for sensitive wildlife species must account for economic costs of habitat protection and uncertainties about how wildlife populations will respond. This paper describes such a formulation and addresses the computational challenge of solving it. The problem is to determine the cost-efficient level of habitat protection...
A Numerical Climate Observing Network Design Study
NASA Technical Reports Server (NTRS)
Stammer, Detlef
2003-01-01
This project was concerned with three related questions of an optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as anticipated now. 3. The more general design of a climate observing system as required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is that of the sampling density required to observe climate-related variations in the ocean. For that purpose, this project was concerned with the sampling requirements for the ARGO float system, but it also investigated other elements of a climate observing system. As part of this project, we studied the horizontal and vertical sampling characteristics of a global ARGO system required to make it fully complementary to altimeter data, with the goal of capturing climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6° horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Sampled by a synthetic float array, model results therefore allow testing and improving existing deployment strategies with the goal of making the system as optimal and cost-efficient as possible.
Ohta, Tomoaki; Maeda, Hiroyuki; Kubota, Ryuji; Koga, Akiko; Terada, Katsuhide
2014-09-10
The proportion of highly potent materials among new chemical entities has recently increased in the pharmaceutical industry. Most of them are highly hazardous, but there is little toxicity information about active pharmaceutical ingredients in the early development period. Even if the handled amount is quite small, the dustiness of high-potency powders generated in the manufacturing process has an important impact on worker health; thus, it is important to understand powder dustiness. The purpose of this study was to establish a method to evaluate powder dustiness while consuming only a small amount of sample. Optimized measurement conditions for a commercially available dustmeter were confirmed using lactose monohydrate and naproxen sodium. The optimized test conditions were determined as follows: the dustmeter mode, flow rate, drum rotation speed, total measurement time, and sample loading weight were type I mode, 4 L/min, 10 rpm, 1 min, and 1-10 g, respectively. These dustmeter setup conditions are particularly valuable to the pharmaceutical industry, especially at the early development stage and for expensive materials, because the amount of airborne dust can be evaluated accurately while consuming only a small amount of sample. Copyright © 2014 Elsevier B.V. All rights reserved.
Peyron, Pierre-Antoine; Baccino, Éric; Nagot, Nicolas; Lehmann, Sylvain; Delaby, Constance
2017-02-01
Determination of skin wound vitality is an important issue in forensic practice, and no reliable biomarker currently exists. Quantification of inflammatory cytokines in injured skin with MSD® technology is an innovative and promising approach. This preliminary study aims to develop a protocol for the preparation and analysis of skin samples. Samples from ante mortem wounds, post mortem wounds, and intact skin ("control samples") were taken from corpses at autopsy. After the pre-analytical protocol had been optimized in terms of skin homogenization and protein extraction, the concentration of TNF-α was measured in each sample with the MSD® approach. Five other cytokines of interest (IL-1β, IL-6, IL-10, IL-12p70, and IFN-γ) were then simultaneously quantified with an MSD® multiplex assay. The optimal pre-analytical conditions consist of protein extraction from a 6 mm diameter skin sample in a PBS buffer with 0.05% Triton. Our results show the linearity and reproducibility of the TNF-α quantification with MSD®, and an inter- and intra-individual variability in the protein concentrations. The MSD® multiplex assay is likely to detect differential skin concentrations for each cytokine of interest. This preliminary study was used to develop and optimize the pre-analytical and analytical conditions of the MSD® method using injured and healthy skin samples, with the aim of identifying the cytokine, or set of cytokines, that may serve as biomarkers of skin wound vitality.
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MAD_training = 2.5 and MAD_testing = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
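The replication sweep described above can be mimicked with scikit-learn, though the study's rule-based regression trees (Cubist-style) differ from CART; in this sketch the number of rules is loosely approximated by max_leaf_nodes, and the data are synthetic. Treat it as a schematic of the procedure, not a reproduction of the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(size=(2000, 5))                       # synthetic predictors
y = 200 * (0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2) + rng.normal(0, 5, 2000)

def mad(model, X, y):
    """Mean absolute difference between predictions and targets."""
    return np.mean(np.abs(model.predict(X) - y))

# Sweep training fractions and "rule" counts over random replications
for train_frac in (0.6, 0.7, 0.8):
    for n_rules in (4, 6, 8):
        scores = []
        for rep in range(20):
            Xtr, Xte, ytr, yte = train_test_split(
                X, y, train_size=train_frac, random_state=rep)
            tree = DecisionTreeRegressor(max_leaf_nodes=n_rules,
                                         random_state=rep).fit(Xtr, ytr)
            scores.append((mad(tree, Xtr, ytr), mad(tree, Xte, yte)))
        tr, te = np.mean(scores, axis=0)
        print(f"frac={train_frac}, rules={n_rules}: "
              f"MAD_train={tr:.2f}, MAD_test={te:.2f}")

A widening gap between MAD_train and MAD_test as the rule count grows signals overfitting, which is the effect the study's strategy is designed to detect.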
Feng, Zufei; Xu, Yuehong; Wei, Shuguang; Zhang, Bao; Guan, Fanglin; Li, Shengbin
2015-01-01
A magnetic carbon nanomaterial, Fe3O4-modified hydroxylated multi-walled carbon nanotubes (Fe3O4-MWCNTs-OH), was prepared through the aggregating effect of Fe3O4 nanoparticles on MWCNTs-OH, and this material was combined with high-performance liquid chromatography (HPLC) with photodiode array detection (PAD) to determine strychnine in human serum samples. Several important parameters that could influence the extraction efficiency of strychnine were optimized, including the extraction time, the amount of Fe3O4-MWCNTs-OH, the pH of the sample solution, the desorption solvent, and the desorption time. Under optimal conditions, the recoveries of spiked serum samples were between 98.3 and 102.7%, and the relative standard deviations (RSDs) ranged from 0.9 to 5.3%. The correlation coefficient was 0.9997. The LODs and LOQs of strychnine were 6.2 and 20.5 ng mL(-1), at signal-to-noise ratios of 3 and 10, respectively. These experimental results show that the proposed method is feasible for the analysis of strychnine in serum samples.
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The weather-related problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes that lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks that are based on customer value systems and at abstracting from data sets the information that is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model, selected as "a black box" for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of the various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change producing a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of ambient sulfur dioxide to less than a certain percentage of the mean is found; and finally, the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.
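The "years required to detect a change" quantity mentioned above is a standard power calculation: for a two-sided z-test on the mean of independent annual observations, the number of years n must satisfy n ≥ ((z_{1−α/2} + z_{1−β})·σ/Δ)². A minimal sketch follows, with invented numbers for the yield standard deviation and the detectable change.

from math import ceil
from scipy.stats import norm

def years_to_detect(delta, sigma, alpha=0.05, power=0.8):
    """Years of annual observations needed to detect a shift of
    `delta` in the mean when the interannual standard deviation is
    `sigma` (two-sided z-test, independent years assumed)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical: detect a 2 bu/acre change in mean wheat yield when
# the year-to-year standard deviation is 5 bu/acre.
print(years_to_detect(delta=2.0, sigma=5.0))  # -> 50 years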
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order PCEs are employed and/or the oversampling ratio is low.
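A minimal illustration of least-squares PCE regression in one dimension, assuming a uniform input on [−1, 1] and a Legendre basis; this sketches the generic method the review surveys, not any particular sampling scheme from it, and the black-box model is a placeholder.

import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def model(x):
    """Black-box model to be approximated (placeholder)."""
    return np.exp(x) * np.sin(3 * x)

p = 8                      # polynomial order
n = 4 * (p + 1)            # oversampling ratio of 4
x = rng.uniform(-1, 1, n)  # Monte Carlo sample of the uniform input
y = model(x)

# Measurement matrix of Legendre polynomials evaluated at the samples
Phi = legendre.legvander(x, p)          # shape (n, p+1)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Validate the surrogate on a fresh grid
xt = np.linspace(-1, 1, 200)
err = np.abs(legendre.legvander(xt, p) @ coef - model(xt))
print(f"max abs error of the order-{p} PCE surrogate: {err.max():.2e}")

Swapping the uniform draw of x for a Latin hypercube or coherence-optimal sample changes only the sampling line; the least-squares solve is identical, which is why the review can compare strategies on a common footing.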
Recommendations for gross examination and sampling of surgical specimens of the spleen.
O'Malley, Dennis P; Louissaint, Abner; Vasef, Mohammad A; Auerbach, Aaron; Miranda, Roberto; Brynes, Russell K; Fedoriw, Yuri; Hudnall, S David
2015-10-01
This review examines handling and processing of spleen biopsies and splenectomy specimens with the aim of providing the pathologist with guidance in optimizing examination and diagnosis of splenic disorders. It also offers recommendations as to relevant reporting factors in gross examination, which may guide diagnostic workup. The role of splenic needle biopsies is discussed. The International Spleen Consortium is a group dedicated to promoting education and research on the anatomy, physiology, and pathology of the spleen. In keeping with these goals, we have undertaken to provide guidelines for gross examination, sectioning, and sampling of spleen tissue to optimize diagnosis (Burke). The pathology of the spleen may be complicated in routine practice due to a number of factors. Among these are lack of familiarity with lesions, complex histopathology, mimicry within several types of lesions, and overall rarity. To optimize diagnosis, appropriate handling and processing of splenic tissue are crucial. The importance of complete and accurate clinical history cannot be overstated. In many cases, significant clinical history such as previous lymphoproliferative disorders, hematologic disorders, trauma, etc, can provide important information to guide the evaluation of spleen specimens. Clinical information helps plan for appropriate processing of the spleen specimen. The pathologist should encourage surgical colleagues, who typically provide the specimens, to include as much clinical information as possible. Copyright © 2015 Elsevier Inc. All rights reserved.
Keramat, Akram; Zare-Dorabei, Rouholah
2017-09-01
In this work, magnetic graphene oxide modified with 2-pyridinecarboxaldehyde thiosemicarbazone groups (Fe3O4@GO/2-PTSC) was synthesized and utilized for the preconcentration and determination of mercuric ions at trace levels by inductively coupled plasma-optical emission spectrometry (ICP-OES). Characterization of the adsorbent was performed using various techniques, such as FT-IR, VSM, SEM, and XRD analysis. Central composite design (CCD) under response surface methodology (RSM) was used to identify the most important parameters and probable interactions among the variables. The adsorbent dosage, pH, desorption time, and eluent volume were optimized; the optimized adsorbent dosage, desorption time, and eluent volume were 8 mg, 5.4 min, and 0.5 mL (0.1 M HCl), respectively. Sonication played an important role in shortening the adsorption time of Hg(II) ions by enhancing the dispersion of the adsorbent in solution. Under the optimal conditions, the proposed method presented a high enrichment factor of 193, an extraction percentage of 96.5%, a detection limit of 0.0079 μg L(-1), and a relative standard deviation (RSD) of 1.63%. Finally, the application of the synthesized material was evaluated for the preconcentration and determination of mercuric ions in food and environmental water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
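Central composite designs like the one used above are straightforward to construct directly: a two-level factorial core, axial (star) points at distance ±α, and replicated center points. A small sketch in coded units follows; the factor count, α, and center-point count are illustrative choices, not values taken from the study.

import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=4):
    """Central composite design in coded units for k factors:
    2^k factorial points, 2k axial points at +/- alpha, and
    n_center center points. alpha defaults to the rotatable
    choice (2^k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

design = central_composite(k=3)
print(design.shape)   # (8 + 6 + 4, 3) = (18, 3) runs
print(design)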
MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design
NASA Astrophysics Data System (ADS)
Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.
2010-12-01
This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.
Social support mediates the association between benefit finding and quality of life in caregivers.
Brand, Charles; Barry, Lorna; Gallagher, Stephen
2016-06-01
The psychosocial pathways underlying associations between benefit finding and quality of life are poorly understood. Here, we examined associations between benefit finding, social support, optimism, and quality of life in a sample of 84 caregivers. Results revealed that quality of life was predicted by benefit finding, optimism, and social support. Moreover, the association between benefit finding and quality of life was explained by social support, but not optimism; caregivers who reported greater benefit finding perceived their social support to be higher, and this, in turn, had a positive effect on their overall quality of life. These results underscore the importance of harnessing benefit finding to enhance caregiver quality of life. © The Author(s) 2014.
Hasanpour, Foroozan; Hadadzadeh, Hassan; Taei, Masoumeh; Nekouei, Mohsen; Mozafari, Elmira
2016-05-01
The analytical performance of a conventional spectrophotometer was enhanced by coupling an effective dispersive liquid-liquid micro-extraction method with spectrophotometric detection for the ultra-trace determination of cobalt. The method was based on the formation of the Co(II)-alpha-benzoin oxime complex and its extraction using a dispersive liquid-liquid micro-extraction technique. In the present work, several important variables, such as pH, ligand concentration, and the amount and type of dispersive and extraction solvents, were optimized. It was found that the crucial factor for Co(II)-alpha-benzoin oxime complex formation is the pH of the alkaline alcoholic medium. Under the optimized conditions, the calibration graph was linear in the range of 1.0-110 μg L(-1) with a detection limit (S/N = 3) of 0.5 μg L(-1). Preconcentration of 25 mL of sample gave an enhancement factor of 75. The proposed method was applied to the determination of Co(II) in soil samples.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
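The notion of "robust power across the effect size range" above can be made concrete: for a fixed-n two-arm trial with a normal endpoint, power at true effect δ is approximately Φ(δ·√(n/2)/σ − z_{1−α/2}), so one can tabulate power over a plausible δ range for competing sample sizes. A minimal sketch with illustrative numbers, for a fixed design rather than the paper's adaptive ones:

import numpy as np
from scipy.stats import norm

def power_two_arm(n_per_arm, delta, sigma=1.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test, n per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * np.sqrt(2.0 / n_per_arm))  # noncentrality
    return norm.cdf(ncp - z_a)

deltas = np.linspace(0.2, 0.5, 4)   # plausible standardized effect sizes
for n in (64, 100, 150):
    powers = power_two_arm(n, deltas)
    print(f"n/arm={n}: " + ", ".join(f"{d:.2f}->{p:.2f}"
                                     for d, p in zip(deltas, powers)))

A design chosen by an optimality criterion would weight such power curves (and expected sample size) across the δ range rather than at a single assumed effect.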
Wang, Zhaopin; Wu, Juanli; Wu, Shihua; Bao, Aimin
2013-04-24
Histamine, a neurotransmitter crucially involved in a number of basic physiological functions, undergoes changes in neuropsychiatric disorders. Detection of histamine in biological samples such as cerebrospinal fluid (CSF) is thus of clinical importance. The most commonly used method for measuring histamine levels is high performance liquid chromatography (HPLC). However, factors such as very low levels of histamine, the even lower CSF-histamine and CSF-histamine metabolite levels, especially in certain neuropsychiatric diseases, rapid formation of histamine metabolites, and other confounding elements during sample collection, make analysis of CSF-histamine and CSF-histamine metabolites a challenging task. Nonetheless, this challenge can be met, not only with respect to HPLC separation column, derivative reagent, and detector, but also in terms of optimizing the CSF sample collection. This review aims to provide a general insight into the quantitative analyses of histamine in biological samples, with an emphasis on HPLC instruments, methods, and hyphenated techniques, with the aim of promoting the development of an optimal and practical protocol for the determination of CSF-histamine and/or CSF-histamine metabolites. Copyright © 2013 Elsevier B.V. All rights reserved.
Optimizing methods and dodging pitfalls in microbiome research.
Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle
2017-05-05
Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.
Steinberg, David M.; Fine, Jason; Chappell, Rick
2009-01-01
Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
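The dependence of PPV and NPV on prevalence noted above follows from Bayes' rule; the sketch below computes both from sensitivity, specificity, and prevalence (values invented), which is the quantity a case–control study can only estimate indirectly.

def ppv_npv(sens, spec, prev):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# The same assay deployed in two populations (hypothetical numbers)
for prev in (0.01, 0.20):
    ppv, npv = ppv_npv(sens=0.95, spec=0.98, prev=prev)
    print(f"prevalence={prev:.2f}: PPV={ppv:.3f}, NPV={npv:.3f}")

Running this shows PPV collapsing at low prevalence even for an accurate test, which is why proving a PPV bound drives the unbalanced case/control allocations the paper derives.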
NASA Astrophysics Data System (ADS)
Sapriadil, S.; Setiawan, A.; Suhandi, A.; Malik, A.; Safitri, D.; Lisdiani, S. A. S.; Hermita, N.
2018-05-01
Communication skill is one skill that is greatly needed in the 21st century, and preparing and teaching this skill in physics instruction is important. The focus of this research is the optimization of students' scientific communication skills after applying a higher order thinking virtual laboratory (HOTVL) on the topic of electric circuits. The research employed an experimental study, specifically a posttest-only control group design. The subjects, senior high school students selected by purposive sampling, formed a sample of seventy (70) students, with thirty-five (35) students assigned to each of the control and experimental groups. The results of this study found that students using the higher order thinking virtual laboratory (HOTVL) in laboratory activities had higher scientific communication skills than students who used a verification virtual lab.
In Silico Design of DNP Polarizing Agents: Can Current Dinitroxides Be Improved?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perras, Frédéric A.; Sadow, Aaron; Pruski, Marek
2017-06-09
Numerical calculations of enhancement factors offered by dynamic nuclear polarization in solids under magic angle spinning (DNP-MAS) were performed to determine the optimal EPR parameters for a dinitroxide polarizing agent. We found that the DNP performance of a biradical is more tolerant to the relative orientation of the two nitroxide moieties than previously thought. In general, any condition in which the gyy tensor components of both radicals are perpendicular to one another is expected to have near-optimal DNP performance. These results highlight the important role of the exchange coupling, which can lessen the sensitivity of DNP performance to the inter-radical distance, but also lead to lower enhancements when the number of atoms in the linker becomes less than three. Finally, the calculations showed that the electron T1e value should be near 500 μs to yield optimal performance. Importantly, the newest polarizing agents already feature all of the qualities of the optimal polarizing agent, leaving little room for further improvement. Further research into DNP polarizing agents should then target non-nitroxide radicals, as well as improvements in sample formulations to advance high-temperature DNP and limit quenching and reactivity.
Demerouti, Evangelia; Sanz-Vergel, Ana Isabel; Petrou, Paraskevas; van den Heuvel, Machteld
2016-10-01
Although work and family are undoubtedly important life domains, individuals are also active in other life roles which are also important to them (like pursuing personal interests). Building on identity theory and the resource perspective on work-home interface, we examined whether there is an indirect effect of work-self conflict/facilitation on exhaustion and task performance over time through personal resources (i.e., self-efficacy and optimism). The sample was composed of 368 Dutch police officers. Results of the 3-wave longitudinal study confirmed that work-self conflict was related to lower levels of self-efficacy, whereas work-self facilitation was related to improved optimism over time. In turn, self-efficacy was related to higher task performance, whereas optimism was related to diminished levels of exhaustion over time. Further analysis supported the negative, indirect effect of work-self facilitation on exhaustion through optimism over time, and only a few reversed causal effects emerged. The study contributes to the literature on interrole management by showing the role of personal resources in the process of conflict or facilitation over time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Krücken, Jürgen; Fraundorfer, Kira; Mugisha, Jean Claude; Ramünke, Sabrina; Sifft, Kevin C; Geus, Dominik; Habarugira, Felix; Ndoli, Jules; Sendegeya, Augustin; Mukampunga, Caritas; Aebischer, Toni; McKay-Demeler, Janina; Gahutu, Jean Bosco; Mockenhaupt, Frank P; von Samson-Himmelstjerna, Georg
2018-05-18
A recent publication by Levecke et al. (Int. J. Parasitol., 2018, 8, 67-69) provides important insights into the kinetics of worm expulsion from humans following treatment with albendazole. This is an important aspect of determining the optimal time-point for post-treatment sampling to examine anthelmintic drug efficacy. The authors conclude that for the determination of drug efficacy against Ascaris, samples should be taken not before day 14 and recommend a period between days 14 and 21. Using this recommendation, they conclude that previous data (Krücken et al., 2017; Int. J. Parasitol., 7, 262-271) showing a reduction of egg shedding by 75.4% in schoolchildren in Rwanda, and our conclusions from these data, should be interpreted with caution. In reply to this, we would like to indicate that the very low efficacy of 0% in one school and 52-56% in three other schools, while the drug was fully efficacious in other schools, cannot simply be explained by the time point of sampling. Moreover, there was no correlation between the sampling day and albendazole efficacy. We would also like to indicate that we very carefully interpreted our data and, for example, nowhere claimed that we found anthelmintic resistance. Rather, we stated that our data indicated that benzimidazole resistance may be suspected in the study population. We strongly agree that the data presented by Levecke et al. suggest that recommendations for efficacy testing of anthelmintic drugs should be revised. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Momoh, James A.; Salkuti, Surender Reddy
2016-06-01
This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. RERs introduce stochastic behavior into the system. Voltage/VAr control is a key means of managing power system complexity and reliability, and hence a fundamental requirement for all utility companies. There is a need for a robust and efficient Voltage/VAr optimization technique to meet the peak demand and reduce system losses. Voltages beyond their limits may damage costly substation devices as well as equipment at the consumer end. RERs in particular introduce additional disturbances, and some RERs are not capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at developing an optimal scheme for Voltage/VAr control involving RERs. The Latin hypercube sampling (LHS) method is used to cover the full range of the variables while maximally satisfying their marginal distributions. A backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.
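A minimal sketch of the Latin hypercube step described above: the unit interval for each variable is split into n equiprobable strata, one point is drawn per stratum, the strata are shuffled independently per dimension, and the points are mapped through each variable's inverse CDF. The load and wind distributions here are invented examples, not the paper's system data.

import numpy as np
from scipy.stats import norm, uniform

rng = np.random.default_rng(7)

def latin_hypercube(n, d):
    """n samples in d dimensions on (0,1): one draw per equiprobable
    stratum, with independently shuffled strata in each dimension."""
    strata = np.tile(np.arange(n), (d, 1))            # stratum indices per dim
    u = (rng.permuted(strata, axis=1).T               # shuffle each dim
         + rng.uniform(size=(n, d))) / n              # jitter within stratum
    return u

u = latin_hypercube(n=100, d=2)
load = norm(loc=50.0, scale=5.0).ppf(u[:, 0])      # MW, hypothetical
wind = uniform(loc=0.0, scale=1.0).ppf(u[:, 1])    # capacity factor, hypothetical
print(load[:3], wind[:3])

Because every marginal stratum is hit exactly once, LHS covers the range of each variable far more evenly than plain Monte Carlo at the same sample count, which is what makes subsequent scenario reduction effective.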
Hough, Rachael; Archer, Debra; Probert, Christopher
2018-01-01
Disturbance to the hindgut microbiota can be detrimental to equine health. Metabolomics provides a robust approach to studying the functional aspect of hindgut microorganisms. Sample preparation is an important step towards achieving optimal results in the later stages of analysis, and the preparation of samples is unique to the technique employed and the sample matrix to be analysed. Gas chromatography mass spectrometry (GCMS) is one of the most widely used platforms for metabolomics, yet until now an optimised method had not been developed for equine faeces. The aim was to compare sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces. VOCs were determined by headspace solid phase microextraction gas chromatography mass spectrometry (HS-SPME-GCMS). Factors investigated were the mass of equine faeces, the type of SPME fibre coating, the vial volume, and the storage conditions. The resultant method was distinct from those developed for other species. Aliquots of 1000 or 2000 mg in 10 ml or 20 ml SPME headspace vials were optimal. Of the fibres tested, extraction of VOCs was best performed using a divinylbenzene-carboxen-polydimethylsiloxane (DVB-CAR-PDMS) SPME fibre. Storage of faeces for up to 12 months at −80 °C shared a greater percentage of VOCs with a fresh sample than the equivalent stored at −20 °C. An optimised method for extracting VOCs from equine faeces using HS-SPME-GCMS has been developed and will act as a standard to enable comparisons between studies. This work also highlights storage conditions as an important factor to consider in the experimental design of faecal metabolomics studies.
NASA Astrophysics Data System (ADS)
Zeng, Baoping; Liu, Jipeng; Zhang, Yu; Gong, Yajun; Hu, Sanbao
2017-12-01
Deepwater robots are important devices for exploring the sea and, along with advances in science and technology, are being developed towards greater intelligence, multitasking capability, endurance, and operating depth. The mechanical system of a deepwater robot is a critical subsystem: excessive vibration and noise not only degrade instrument measuring precision and shorten the service life of cabin devices, but also adversely affect marine life within the operational area. Vibration characteristics are therefore a key factor in deepwater robot system design. This study focuses on the sample collection and recovery system of a deepwater robot, a mechanism that opens the underwater cabin door for external operation and recovers test equipment. To improve the vibration characteristics at the cabin door during opening, a vibration model of the opening system was established, and structural optimization of its important structures was carried out using multi-objective shape and topology optimization based on analysis of the system vibration. The exciting forces causing vibration were first characterized, including dynamic loads within the hinge clearances, friction effects, and fluid dynamic exciting forces during opening of the cabin door. Vibration acceleration responses at several important locations of the cabin-cover opening device were then derived using the modal synthesis method; analysis of the weighted acceleration responses indicated that rigidity and modal frequency are primary factors influencing the system vibration performance. On this basis, the cabin cover was optimized using the multi-objective topology optimization method to reduce the weighted accelerations at key structural locations.
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2017-01-01
Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
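For readers unfamiliar with the TOST procedure mentioned above, a minimal sketch for two independent means follows; the equivalence margins, alpha, and sample data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, high, alpha=0.05):
    """Two one-sided tests for equivalence of two independent means.
    Equivalence is declared if both one-sided nulls are rejected."""
    # H01: mu_x - mu_y <= low  -> reject if p1 < alpha
    p1 = stats.ttest_ind(x - low, y, alternative='greater').pvalue
    # H02: mu_x - mu_y >= high -> reject if p2 < alpha
    p2 = stats.ttest_ind(x - high, y, alternative='less').pvalue
    return max(p1, p2), (p1 < alpha and p2 < alpha)

rng = np.random.default_rng(1)
x = rng.normal(10.0, 1.0, 40)
y = rng.normal(10.1, 1.0, 40)
p, equivalent = tost_ind(x, y, low=-0.5, high=0.5)
print(f"TOST p = {p:.3f}, equivalence declared: {equivalent}")
```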
Xing, Han-Zhu; Wang, Xia; Chen, Xiang-Feng; Wang, Ming-Lin; Zhao, Ru-Song
2015-05-01
A method combining accelerated solvent extraction with dispersive liquid-liquid microextraction was developed for the first time as a sample pretreatment for the rapid analysis of phenols (including phenol, m-cresol, 2,4-dichlorophenol, and 2,4,6-trichlorophenol) in soil samples. In the accelerated solvent extraction procedure, water was used as an extraction solvent, and phenols were extracted from soil samples into water. The dispersive liquid-liquid microextraction technique was then performed on the obtained aqueous solution. Important accelerated solvent extraction and dispersive liquid-liquid microextraction parameters were investigated and optimized. Under optimized conditions, the new method provided wide linearity (6.1-3080 ng/g), low limits of detection (0.06-1.83 ng/g), and excellent reproducibility (<10%) for phenols. Four real soil samples were analyzed by the proposed method to assess its applicability. Experimental results showed that the soil samples were free of our target compounds, and average recoveries were in the range of 87.9-110%. These findings indicate that accelerated solvent extraction with dispersive liquid-liquid microextraction as a sample pretreatment procedure coupled with gas chromatography and mass spectrometry is an excellent method for the rapid analysis of trace levels of phenols in environmental soil samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
The optimization process is an important aspect of natural product extraction. Herein, an alternative approach to extraction optimization is proposed: Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria on the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables through dotty plots; handle an unlimited number of independent and response variables; accommodate multiple combined threshold criteria, which may be chosen according to the target of the investigation; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
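A minimal sketch of the GLUE idea described above, assuming a hypothetical extraction-yield model, feasible ranges, and threshold; it is not the paper's fitted model.

```python
import numpy as np
from scipy.stats import qmc

def extraction_yield(temp, time_min, solvent_ratio):
    # hypothetical smooth response surface, for illustration only
    return (100 - 0.02 * (temp - 55) ** 2
            - 0.5 * (time_min - 30) ** 2 / 30
            + 5 * np.log1p(solvent_ratio))

sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=5000)                       # Monte Carlo sample via LHS
lo, hi = [40, 10, 5], [80, 60, 20]               # assumed feasible ranges
x = qmc.scale(u, lo, hi)
y = extraction_yield(x[:, 0], x[:, 1], x[:, 2])

behavioural = y >= 100                           # threshold criterion on the response
print(f"{behavioural.mean():.1%} of samples meet the threshold")
# the retained parameter sets give a range (with distribution) for the optimum
optimal_lo, optimal_hi = x[behavioural].min(axis=0), x[behavioural].max(axis=0)
```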
Frequency optimization in the eddy current test for high purity niobium
NASA Astrophysics Data System (ADS)
Joung, Mijoung; Jung, Yoochul; Kim, Hyungjin
2017-01-01
The eddy current test (ECT) is frequently used as a non-destructive method to check for defects in the high purity niobium (RRR 300, where RRR is the residual resistivity ratio) used in superconducting radio frequency (SRF) cavities. Determining an optimal frequency for the specific material properties and probe specification is a very important step. ECT experiments were performed to determine the optimal frequency using a standard high purity Nb sample containing artificial defects. The target depth was chosen according to the treatment steps that the niobium receives as SRF cavity material. The results were analysed in terms of selectivity with respect to defect size. According to the results, the optimal frequency was determined to be 200 kHz, and a few features of the ECT for high purity Nb were observed.
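The frequency choice above is governed by the standard depth of penetration, delta = 1/sqrt(pi f mu sigma). The short sketch below evaluates it over candidate frequencies; the room-temperature conductivity of Nb is an assumed placeholder value, not taken from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (H/m); mu_r ~ 1 for Nb
SIGMA_NB = 7e6                # assumed room-temperature conductivity (S/m)

def skin_depth(f_hz, sigma=SIGMA_NB, mu=MU0):
    """Standard depth of penetration for eddy current testing."""
    return 1.0 / np.sqrt(np.pi * f_hz * mu * sigma)

for f in (50e3, 100e3, 200e3, 500e3):
    print(f"{f/1e3:6.0f} kHz -> delta = {skin_depth(f)*1e3:.3f} mm")
```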
[Building Mass Spectrometry Spectral Libraries of Human Cancer Cell Lines].
Faktor, J; Bouchal, P
Cancer research often focuses on protein quantification in model cancer cell lines and cancer tissues. SWATH (sequential windowed acquisition of all theoretical fragment ion spectra), a state of the art method, enables the quantification of all proteins included in a spectral library. A spectral library contains the fragmentation patterns of each detectable protein in a sample. Thorough spectral library preparation improves the quantitation of low abundance proteins, which often play an important role in cancer. Our research focuses on optimizing spectral library preparation to maximize the number of proteins identified in the MCF-7 breast cancer cell line. First, we optimized sample preparation prior to entering the mass spectrometer, examining the effects of lysis buffer composition, the peptide dissolution protocol, and the material of the sample vial on the number of proteins identified in the spectral library. Next, we optimized the mass spectrometry (MS) method for spectral library data acquisition. Our thoroughly optimized protocol for spectral library building enabled the identification of 1,653 proteins (FDR < 1%) in 1 µg of MCF-7 lysate. This work contributes to the enhancement of protein coverage in SWATH digital biobanks, which enable quantification of arbitrary proteins from physically unavailable samples. In the future, high quality spectral libraries could play a key role in preparing digital fingerprints of patient proteomes. Key words: biomarker - mass spectrometry - proteomics - digital biobanking - SWATH - protein quantification. This work was supported by the project MEYS - NPS I - LO1413. The authors declare they have no potential conflicts of interest concerning drugs, products, or services used in the study. The Editorial Board declares that the manuscript met the ICMJE recommendation for biomedical papers. Submitted: 7. 5. 2016. Accepted: 9. 6. 2016.
Sauter, Jennifer L; Grogg, Karen L; Vrana, Julie A; Law, Mark E; Halvorson, Jennifer L; Henry, Michael R
2016-02-01
The objective of the current study was to establish a process for validating immunohistochemistry (IHC) protocols for use on the Cellient cell block (CCB) system. Thirty antibodies were initially tested on CCBs using IHC protocols previously validated on formalin-fixed, paraffin-embedded tissue (FFPE). Cytology samples were split to generate thrombin cell blocks (TCB) and CCBs. IHC was performed in parallel. Antibody immunoreactivity was scored, and concordance or discordance in immunoreactivity between the TCBs and CCBs for each sample was determined. Criteria for validation of an antibody were defined as concordant staining in expected positive and negative cells, in at least 5 samples each, and concordance in at least 90% of the samples total. Antibodies that failed initial validation were retested after alterations in IHC conditions. Thirteen of the 30 antibodies (43%) did not meet initial validation criteria. Of those, 8 antibodies (calretinin, clusters of differentiation [CD] 3, CD20, CDX2, cytokeratin 20, estrogen receptor, MOC-31, and p16) were optimized for CCBs and subsequently validated. Despite several alterations in conditions, 3 antibodies (Ber-EP4, D2-40, and paired box gene 8 [PAX8]) were not successfully validated. Nearly one-half of the antibodies tested in the current study failed initial validation using IHC conditions that were established in the study laboratory for FFPE material. Although some antibodies subsequently met validation criteria after optimization of conditions, a few continued to demonstrate inadequate immunoreactivity. These findings emphasize the importance of validating IHC protocols for methanol-fixed tissue before clinical use and suggest that optimization for alcohol fixation may be needed to obtain adequate immunoreactivity on CCBs. © 2016 American Cancer Society.
Skull removal in MR images using a modified artificial bee colony optimization algorithm.
Taherdangkoo, Mohammad
2014-01-01
Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm with a different strategy for initializing the coordinates of scout bees and their direction of search, and we impose an additional constraint to avoid the creation of discontinuous regions. Our algorithm successfully removed all bony skull from a sample of de-identified MR brain images acquired from different model scanners. Compared with well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), the proposed algorithm demonstrates superior results and computational performance, suggesting its potential for clinical applications.
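For orientation, a generic (unmodified) ABC minimizer is sketched below; it omits the paper's scout-initialization strategy and continuity constraint, and the test function and parameters are illustrative.

```python
import numpy as np

def abc_minimize(f, bounds, n_bees=20, limit=10, iters=200, seed=0):
    """Generic artificial bee colony minimizer (standard form, simplified:
    employed and onlooker phases share the same local-move rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_bees, len(lo)))   # food sources
    fit = np.apply_along_axis(f, 1, x)
    trials = np.zeros(n_bees, dtype=int)
    for _ in range(iters):
        for i in range(n_bees):
            k = rng.integers(n_bees - 1)
            k += k >= i                                # random partner != i
            j = rng.integers(len(lo))                  # one random dimension
            cand = x[i].copy()
            cand[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                            # greedy replacement
                x[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        worn = trials > limit                          # scout phase
        if worn.any():
            x[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), len(lo)))
            fit[worn] = np.apply_along_axis(f, 1, x[worn])
            trials[worn] = 0
    best = int(np.argmin(fit))
    return x[best], fit[best]

xb, fb = abc_minimize(lambda v: np.sum(v ** 2), bounds=[(-5, 5)] * 3)
```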
Todd Trench, Elaine C.
2004-01-01
A time-series analysis approach developed by the U.S. Geological Survey was used to analyze trends in total phosphorus and evaluate optimal sampling designs for future trend detection, using long-term data for two water-quality monitoring stations on the Quinebaug River in eastern Connecticut. Trend-analysis results for selected periods of record during 1971-2001 indicate that concentrations of total phosphorus in the Quinebaug River have varied over time, but have decreased significantly since the 1970s and 1980s. Total phosphorus concentrations at both stations increased in the late 1990s and early 2000s, but were still substantially lower than historical levels. Drainage areas for both stations are primarily forested, but water quality at both stations is affected by point discharges from municipal wastewater-treatment facilities. Various designs with sampling frequencies ranging from 4 to 11 samples per year were compared to the trend-detection power of the monthly (12-sample) design to determine the most efficient configuration of months to sample for a given annual sampling frequency. Results from this evaluation indicate that the current (2004) 8-sample schedule for the two Quinebaug stations, with monthly sampling from May to September and bimonthly sampling for the remainder of the year, is not the most efficient 8-sample design for future detection of trends in total phosphorus. Optimal sampling schedules for the two stations differ, but in both cases, trend-detection power generally is greater among 8-sample designs that include monthly sampling in fall and winter. Sampling designs with fewer than 8 samples per year generally provide a low level of probability for detection of trends in total phosphorus. Managers may determine an acceptable level of probability for trend detection within the context of the multiple objectives of the state's water-quality management program and the scientific understanding of the watersheds in question. Managers may identify a threshold of probability for trend detection that is high enough to justify the agency's investment in the water-quality sampling program. Results from an analysis of optimal sampling designs can provide an important component of information for the decision-making process in which sampling schedules are periodically reviewed and revised. Results from the study described in this report and previous studies indicate that optimal sampling schedules for trend detection may differ substantially for different stations and constituents. A more comprehensive statewide evaluation of sampling schedules for key stations and constituents could provide useful information for any redesign of the schedule for water-quality monitoring in the Quinebaug River Basin and elsewhere in the state.
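The kind of design comparison described above can be prototyped by Monte Carlo power estimation. The sketch below uses a simple Kendall trend test as a stand-in for the Survey's time-series method; the trend size, noise level, and seasonal pattern are assumed for illustration, not fitted to the Quinebaug River data.

```python
import numpy as np
from scipy.stats import kendalltau

def trend_power(months, years=15, slope=-0.02, noise=0.15, reps=1000, seed=0):
    """Fraction of simulated records in which a Kendall trend test
    (p < 0.05) detects the imposed downward trend."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        t, y = [], []
        for yr in range(years):
            for m in months:
                time = yr + m / 12.0
                season = 0.1 * np.sin(2 * np.pi * m / 12.0)  # assumed seasonality
                t.append(time)
                y.append(1.0 + slope * time + season + rng.normal(0, noise))
        hits += kendalltau(t, y).pvalue < 0.05
    return hits / reps

monthly = trend_power(range(12))                         # 12-sample benchmark
fall_winter = trend_power([0, 1, 2, 5, 8, 9, 10, 11])    # 8 samples, fall/winter heavy
print(monthly, fall_winter)
```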
[Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].
Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang
2011-12-01
To investigate a method of multi-activity-index evaluation and combination optimization of multiple components for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the ratio of liver tissue weight to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) was more reasonable and objective for multi-activity-index evaluation, reflecting both the ordering information of the activity indexes and the objective sample data. LASSO modeling accurately reflected the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity indexes than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation together with the LASSO algorithm, and is suitable for combination optimization of Chinese herbal formulas.
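A minimal sketch of the LASSO modeling step described above, with synthetic placeholder data standing in for the formulation doses and for the AHP/CRITIC composite index (assumed to have been computed already).

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 6))             # 30 formulations x 6 components (synthetic)
true_w = np.array([1.5, 0.0, -0.8, 0.0, 0.6, 0.0])
y = X @ true_w + rng.normal(0, 0.1, 30)         # composite activity index (synthetic)

model = LassoCV(cv=5).fit(X, y)                 # sparse component-effect model
print("selected components:", np.nonzero(model.coef_)[0])
# the fitted model can then be handed to an evolutionary optimizer to search
# for the component combination maximizing the predicted composite index
```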
Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B.
2003-12-01
Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems is the hydraulic conductivity field of the aquifer, upon which predictions of contaminant flow and transport depend. For a remediation solution to be reliable in practice, it is important that it be robust to potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to obtain solutions that are both reliable and Pareto optimal. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. With multiple objectives, however, the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers: a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
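The simple averaging approach mentioned above can be sketched as follows: resample each design's noisy objectives several times and use the means before the non-domination check. The test objectives and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objectives(x, n_samples=5):
    """Evaluate two noisy objectives n_samples times and average."""
    f1 = x[0] ** 2 + x[1] ** 2
    f2 = (x[0] - 1) ** 2 + x[1] ** 2
    noise = rng.normal(0, 0.05, size=(n_samples, 2))
    return (np.array([f1, f2]) + noise).mean(axis=0)

def pareto_mask(F):
    """Boolean mask of non-dominated rows of objective matrix F (minimization)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominators.any()
    return keep

X = rng.uniform(-1, 2, size=(200, 2))
F = np.array([noisy_objectives(x) for x in X])
front = X[pareto_mask(F)]          # approximate Pareto set after averaging
```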
Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan
2016-01-01
Fixtures play an important part in constraining excessive sheet metal part deformation at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing a sheet metal fixture locating layout remains a difficult and nontrivial task because there is no direct, explicit expression relating the locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method.
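A minimal sketch of the surrogate idea, with scipy's RBFInterpolator standing in for the paper's RBF neural network, a Latin hypercube standing in for the uniform sampling, and a placeholder function in place of the finite element analysis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=0)        # 4 locator coordinates (assumed)
X = qmc.scale(sampler.random(n=60), [0] * 4, [1] * 4)

def fe_deformation(layout):
    # placeholder for an expensive finite element simulation
    return np.sum((layout - 0.5) ** 2)

y = np.apply_along_axis(fe_deformation, 1, X)    # training responses
surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')
print(surrogate(np.array([[0.5, 0.5, 0.4, 0.6]])))  # cheap deformation prediction
```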
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
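To make the FIM-based comparison concrete, the sketch below evaluates the D-optimality criterion (log det FIM) for two candidate sampling meshes under the Verhulst-Pearl logistic model y(t) = K / (1 + (K/y0 - 1) exp(-r t)); the nominal parameter values, noise level, and meshes are illustrative assumptions, not those of the paper.

```python
import numpy as np

theta0 = np.array([17.5, 0.7, 0.1])            # assumed nominal K, r, y0

def logistic(t, theta):
    K, r, y0 = theta
    return K / (1 + (K / y0 - 1) * np.exp(-r * t))

def fim(times, theta, sigma=0.1, h=1e-6):
    """Fisher Information Matrix via central-difference sensitivities."""
    S = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        S[:, j] = (logistic(times, tp) - logistic(times, tm)) / (2 * h)
    return S.T @ S / sigma ** 2

uniform_mesh = np.linspace(0.5, 25, 10)
early_mesh = np.linspace(0.5, 10, 10)
for name, mesh in [("uniform", uniform_mesh), ("early", early_mesh)]:
    sign, logdet = np.linalg.slogdet(fim(mesh, theta0))
    print(name, "log det FIM =", logdet)       # larger is more D-informative
```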
Lee, Byeong-Ju; Zhou, Yaoyao; Lee, Jae Soung; Shin, Byeung Kon; Seo, Jeong-Ah; Lee, Doyup; Kim, Young-Suk
2018-01-01
The ability to determine the origin of soybeans has become an important issue since the inclusion of this information in the labeling of agricultural food products became mandatory in South Korea in 2017. This study was carried out to construct a prediction model for discriminating Chinese and Korean soybeans using Fourier-transform infrared (FT-IR) spectroscopy and multivariate statistical analysis. The optimal prediction models for discriminating soybean samples were obtained by selecting appropriate scaling methods, normalization methods, variable influence on projection (VIP) cutoff values, and wave-number regions. The factors for constructing the optimal partial-least-squares regression (PLSR) prediction model were second derivatives, vector normalization, unit variance scaling, and the 4000-400 cm(-1) region (excluding water vapor and carbon dioxide). The PLSR model for discriminating Chinese and Korean soybean samples had the best predictability when no VIP cutoff value was applied. For identifying Chinese soybean samples, the PLSR model with the lowest root-mean-square error of prediction was obtained using a VIP cutoff value of 1.5. The optimal PLSR prediction model for discriminating Korean soybean samples was also obtained using a VIP cutoff value of 1.5. This is the first study to combine FT-IR spectroscopy with normalization methods, VIP cutoff values, and selected wave-number regions for discriminating Chinese and Korean soybeans.
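A minimal sketch of PLSR with VIP-based variable filtering on synthetic "spectra"; the data, component count, and regression target are illustrative stand-ins for the FT-IR measurements, and the VIP formula is the standard one.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable influence on projection for a fitted PLSRegression model."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = X.shape[1]
    ssy = np.diag(t.T @ t) * (q ** 2).ravel()   # y-variance explained per component
    wnorm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 300))                  # 80 samples x 300 wavenumbers (synthetic)
y = X[:, 40] - 0.5 * X[:, 120] + rng.normal(0, 0.1, 80)  # synthetic origin score

pls = PLSRegression(n_components=3).fit(X, y)
vip = vip_scores(pls, X)
selected = vip > 1.5                            # VIP cutoff, as in the study
print(f"{selected.sum()} of {X.shape[1]} variables retained")
```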
On the importance of image formation optics in the design of infrared spectroscopic imaging systems.
Mayerich, David; van Dijk, Thomas; Walsh, Michael J; Schulmerich, Matthew V; Carney, P Scott; Bhargava, Rohit
2014-08-21
Infrared spectroscopic imaging provides micron-scale spatial resolution with molecular contrast. While recent work demonstrates that sample morphology affects the recorded spectrum, considerably less attention has been focused on the effects of the optics, including the condenser and objective. This analysis is extremely important, since it makes it possible to understand effects on recorded data and provides insight for reducing optical effects through rigorous microscope design. Here, we present a theoretical description and experimental results that demonstrate the effects of commonly employed Cassegrainian optics on recorded spectra. We first combine an explicit model of image formation and a method for quantifying and visualizing the deviations in recorded spectra as a function of microscope optics. We then verify these simulations with measurements obtained from spatially heterogeneous samples. The deviation of the computed spectrum from the ideal case is quantified via a map which we call a deviation map. The deviation map is obtained as a function of optical elements by systematic simulations. Examination of deviation maps demonstrates that the optimal optical configuration for minimal deviation is contrary to prevailing practice in which throughput is maximized for an instrument without a sample. This report should be helpful for understanding recorded spectra as a function of the optics, the analytical limits of recorded data determined by the optical design, and potential routes for optimization of imaging systems.
Quantifying selective alignment of ensemble nitrogen-vacancy centers in (111) diamond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahara, Kosuke; Ozawa, Hayato; Iwasaki, Takayuki
2015-11-09
Selective alignment of nitrogen-vacancy (NV) centers in diamond is an important technique for its applications, and quantification of the alignment ratio is necessary to design optimized diamond samples. However, this is not a straightforward problem for a dense ensemble of NV centers. We estimate the alignment ratio of ensemble NV centers along the [111] direction in (111) diamond by optically detected magnetic resonance measurements. Diamond films deposited by N2-doped chemical vapor deposition have NV center densities over 1 × 10^15 cm^-3 and alignment ratios over 75%. Although the spin coherence time (T2) is limited to a few μs by electron spins of nitrogen impurities, the combination of selective alignment and high density can be a possible way to optimize NV-containing diamond samples for sensing applications.
Combining configurational energies and forces for molecular force field optimization
Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.
2017-07-21
While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction, and decision support should be satisfied with efficient and rational field campaigns. We provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference, and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times) while accounting for parametric and model uncertainty in the geostatistical characterization and in forcing terms, as well as for measurement error. The appealing aspects of the framework lie in its goal-oriented character and its direct link to the confidence in a specified decision. We illustrate how these concepts can be applied in a human health risk problem where uncertainty from both hydrogeological and health parameters is accounted for.
Qiu, Bo; Luo, Hai
2009-05-01
Desorption electrospray ionization (DESI) mass spectrometry has been implemented on a commercial ion-trap mass spectrometer and used to optimize mass spectrometric conditions for DNA nucleobases: adenine, cytosine, thymine, and guanine. Experimental parameters including spray voltage, distance between mass spectrometer inlet and the sampled spot, and nebulizing gas inlet pressure were optimized. Cluster ions including some magic number clusters of nucleobases were observed for the first time using DESI mass spectrometry. The formation of the cluster species was found to vary with the nucleobases, acidification of the spray solvent, and the deposited sample amount. All the experimental results can be explained well using a liquid film model based on the two-step droplet pick-up mechanism. It is further suggested that solubility of the analytes in the spray solvent is an important factor to consider for their studies by using DESI. 2009 John Wiley & Sons, Ltd.
Catalá-Icardo, Mónica; Gómez-Benito, Carmen; Simó-Alfonso, Ernesto Francisco; Herrero-Martínez, José Manuel
2017-01-01
This paper describes a novel and sensitive method for the extraction, preconcentration, and determination of two important, widely used fungicides, azoxystrobin and chlorothalonil. The methodology is based on solid-phase extraction (SPE) using a polymeric material functionalized with gold nanoparticles (AuNPs) as sorbent, followed by high-performance liquid chromatography (HPLC) with diode array detection (DAD). Several experimental variables that affect the extraction efficiency, such as the eluent volume, sample flow rate, and salt addition, were optimized. Under the optimal conditions, the sorbent provided satisfactory enrichment efficiency for both fungicides, high selectivity, and excellent reusability (>120 re-uses). The proposed method allowed the detection of 0.05 μg L(-1) of the fungicides and gave satisfactory recoveries (75-95%) when applied to drinking and environmental water samples (river, well, tap, irrigation, spring, and sea waters).
Investigations of calcium spectral lines in laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Ching, Sim Yit; Tariq, Usman; Haider, Zuhaib; Tufail, Kashif; Sabri, Salwanie; Imran, Muhammad; Ali, Jalil
2017-03-01
Laser-induced breakdown spectroscopy (LIBS) is a direct and versatile analytical technique that performs elemental composition analysis based on the optical emission produced by laser-induced plasma, with little or no sample preparation. The performance of the LIBS technique relies on the choice of experimental conditions, which must be thoroughly explored and optimized for each application. The main parameters affecting LIBS performance are the laser energy, laser wavelength, pulse duration, gate delay, and the geometrical set-up of the focusing and collecting optics. In LIBS quantitative analysis, the gate delay and laser energy are very important parameters that have a pronounced impact on the accuracy of the elemental composition information. The determination of calcium in pelletized samples was investigated, serving to optimize the gate delay and laser energy by analyzing the collected emission intensities and the signal to background ratio (S/B) at the specified wavelengths.
Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules
van der Laan, Mark J.; Petersen, Maya L.
2008-01-01
Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et al. (2005), Petersen et al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et al. (2007) by developing locally efficient double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price: the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV virus.
NASA Astrophysics Data System (ADS)
Bilge, Gonca; Sezer, Banu; Boyaci, Ismail Hakki; Eseller, Kemal Efe; Berberoglu, Halil
2018-07-01
Liquid analysis using LIBS is complicated by difficulties in collecting the light and forming the plasma in liquid. To avoid these problems, approaches such as aerosol formation and converting the liquid into a solid state are used; nevertheless, LIBS performance on liquid samples remains a challenging issue. In this study, the performance of LIBS and parameter optimizations were evaluated for liquid- and solid-phase samples. Milk was chosen as the model material: milk powder served as the solid sample and milk as the liquid sample. Different experimental setups were constructed for each sampling technique, and optimizations were performed to determine suitable parameters such as delay time, laser energy, repetition rate and rotary table speed for the solid sampling technique, and carrier gas flow rate for the liquid sampling technique. The target element was Ca, a critically important element in milk for determining its nutritional value and for detecting Ca addition. Under optimal parameters, the limit of detection (LOD), limit of quantification (LOQ) and relative standard deviation (RSD) were 0.11%, 0.36% and 8.29%, respectively, for milk powder samples, and 0.24%, 0.81% and 10.93%, respectively, for milk samples. LIBS is thus applicable to both liquid and solid samples given suitable systems and parameters; however, liquid analysis requires more highly developed systems to achieve comparably accurate results.
USDA-ARS?s Scientific Manuscript database
Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity.
Fast detection of atrazine in corn using thermometric biosensors.
Qie, Zhiwei; Ning, Baoan; Liu, Ming; Bai, Jialei; Peng, Yuan; Song, Nan; Lv, Zhiqiang; Wang, Ying; Sun, Siming; Su, Xuan; Zhang, Yihong; Gao, Zhixian
2013-09-07
Fast detection is important when screening large numbers of samples. This study establishes a direct competitive ELISA method (dcTELISA) based on an enzyme thermistor for fast atrazine (ATZ) detection. ATZ competes with β-lactamase-labeled ATZ (ATZ-E) for the binding sites on an anti-ATZ monoclonal antibody (mAb). The mAb is covalently bound to controlled pore glass (CPG) in an immunoreactor to form immunocomplexes with ATZ and ATZ-E. Several parameters of biosensor performance were optimized, such as the ATZ-E concentration, the concentration and nature of the substrate, the flow rate, and the effect of temperature on the sensor response. After optimization, the assay time for a single sample was 12 min. The workflow and results were compared with those of high-performance liquid chromatography (HPLC). The detection results exhibited a recovery rate of 88% to 107% in ATZ-spiked fresh-cut corn stalks and silage samples. The results obtained via dcTELISA correlated well with those of HPLC, and the biosensor response was reproducible and stable even when used continuously for over 4 months. These properties suggest that the fast detection method, dcTELISA, may be used to detect pesticide residues in large numbers of samples.
Abdolhosseini, Sana; Ghiasvand, Alireza; Heidari, Nahid
2017-09-01
The surface of a stainless steel fiber was made porous, resistant, and cohesive using electrophoretic deposition, and coated with nanostructured polypyrrole using a modified in-situ electropolymerization method. The coated fiber was applied to the direct extraction of nicotine in biological samples through a headspace solid-phase microextraction (HS-SPME) method followed by GC-FID determination. The effects of the important experimental variables on the efficiency of the developed HS-SPME-GC-FID method, including the pH of the sample solution, extraction temperature and time, stirring rate, and ionic strength, were evaluated and optimized. Under the optimal experimental conditions, the calibration curve was linear over the range of 0.1-20 μg mL(-1) and the detection limit was 20 ng mL(-1). The relative standard deviation (RSD, n=6) was 7.6%. The results demonstrated the superiority of the proposed fiber compared with the most commonly used commercial types. The proposed HS-SPME-GC-FID method was successfully used for the analysis of nicotine in urine and human plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Microbial ecology measurement system
NASA Technical Reports Server (NTRS)
1972-01-01
The sensitivity and potential rapidity of the PIA test that was demonstrated during the feasibility study warranted continuing the effort to examine the possibility of adapting this test to an automated procedure that could be used during manned missions. The effort during this program optimized the test conditions for two important respiratory pathogens, influenza virus and Mycoplasma pneumoniae, developed a laboratory model automated detection system, and investigated a group antigen concept for virus detection. Preliminary tests on the handling of oropharyngeal clinical samples for PIA testing were performed using the adenovirus system. The results indicated that the PIA signal is reduced in positive samples and increased in negative samples. Treatment with cysteine appeared to reduce nonspecific agglutination in negative samples but did not maintain the signal in positive samples.
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
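The infill step of EGO described above centers on the expected improvement (EI) criterion, EI = (y_min - mu) Phi(z) + sigma phi(z) with z = (y_min - mu)/sigma. A minimal sketch follows, using scikit-learn's Gaussian process as an illustrative stand-in for the kriging model; the test function is assumed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, y_best):
    """EI for minimization, given predictive mean and std at candidates."""
    z = (y_best - mu) / np.maximum(sigma, 1e-12)
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (8, 1))                       # initial design samples
y = np.sin(6 * X[:, 0]) + 0.1 * rng.normal(size=8)  # expensive-function stand-in

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
cand = np.linspace(0, 1, 200)[:, None]
mu, sd = gp.predict(cand, return_std=True)
ei = expected_improvement(mu, sd, y.min())
x_next = cand[np.argmax(ei)]                        # next infill sample
# for parallel infill, one might take the top-k EI points subject to spacing
```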
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation addresses two aspects of parameter identification for estimating the discharges at multiple water level stations by multi-objective optimization: how to adjust the parameters so that the discharges are estimated accurately, and which optimization algorithms are suitable for the parameter identification. In previous work, one study minimized the weighted error of the discharges of multiple water level stations by single-objective optimization, while other studies minimized multiple error assessment functions of the discharge at a single water level station by multi-objective optimization. The present work simultaneously minimizes the errors of the discharges at multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2, with thirteen rainfall stations and three water level stations. Nine flood events, occurring from 2005 to 2012 with maximum discharges exceeding 1,000 m3/s, are investigated. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into 500 m x 500 m meshes, with two-layer tanks placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. The discharges were first calculated for parameter values sampled by a simplified version of Latin hypercube sampling, a uniform sampling algorithm. The observed discharge is bracketed by the calculated discharges, which suggests that it is possible to estimate the discharge accurately by adjusting the parameters. The discharge at a given water level station can indeed be estimated accurately by setting the parameter values optimized for that station; however, in some cases the discharge calculated with parameter values optimized for one station does not match the observed discharge at another station. It is important to estimate the discharges of all the water level stations with reasonable accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions under the condition that, at every station, the error normalized by that station's minimum error is below 3. The optimization performance of five implementations of multi-objective algorithms and a simplified version of Latin hypercube sampling is compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu
2016-12-21
A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is the calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately 10 times smaller than were available with our previous algorithm.
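As a toy illustration of ordinary-least-squares-based sensitivity ordering from a small sample (not the paper's calibrated algorithm), standardized regression coefficients can rank input importance as follows; the model and sample size are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 10                                   # small sample, 10 inputs
X = rng.uniform(-1, 1, size=(n, k))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.1, n)

Xs = (X - X.mean(0)) / X.std(0)                 # standardize inputs
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # OLS coefficients
order = np.argsort(-np.abs(beta))               # sensitivity ranking
print("input ranking:", order)
```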
Optimization control of LNG regasification plant using Model Predictive Control
NASA Astrophysics Data System (ADS)
Wahid, A.; Adicandra, F. F.
2018-03-01
Optimization of a liquefied natural gas (LNG) regasification plant is important for minimizing costs, especially operational costs. It is therefore important to choose an optimal LNG regasification plant design and to maintain optimal operating conditions through the implementation of model predictive control (MPC). Optimal tuning parameters for MPC, namely P (prediction horizon), M (control horizon) and T (sampling time), are achieved by fine-tuning. The optimality criterion for design is the minimum amount of energy used, and for control it is the integral of squared error (ISE). As a result, the optimal design is scheme 2, developed by Devold, with energy savings of 40%. To maintain optimal conditions, MPC requires P, M and T as follows: tank storage pressure: 90, 2, 1; product pressure: 95, 2, 1; vaporizer temperature: 65, 2, 2; and heater temperature: 35, 6, 5, with ISE values at set-point tracking of 0.99, 1792.78, 34.89 and 7.54, respectively, corresponding to control performance improvements of 4.6%, 63.5%, 3.1% and 58.2% over a PI controller. The energy saving that the MPC controllers can achieve under a 1 °C rise in sea water temperature is 0.02 MW.
Optimization of Protein Extraction and Two-Dimensional Electrophoresis Protocols for Oil Palm Leaf.
Daim, Leona Daniela Jeffery; Ooi, Tony Eng Keong; Yusof, Hirzun Mohd; Majid, Nazia Abdul; Karsani, Saiful Anuar Bin
2015-08-01
Oil palm (Elaeis guineensis) is an important economic crop cultivated for its nutritional palm oil. A significant amount of effort has been undertaken to understand oil palm growth and physiology at the molecular level, particularly in genomics and transcriptomics. Recently, proteomics studies have begun to garner interest, but this effort is impeded by technical challenges: plant sample preparation for proteomics analysis is complicated by the presence of polysaccharides, secondary metabolites, and other interfering compounds. Although protein extraction methods for plant tissues exist, none works universally on all sample types. This study therefore compares and optimizes different protein extraction protocols for use with two-dimensional gel electrophoresis of young and mature leaves from the oil palm. Four protein extraction methods were evaluated: phenol-guanidine isothiocyanate, trichloroacetic acid-acetone precipitation, sucrose, and trichloroacetic acid-acetone-phenol. Of these four protocols, the trichloroacetic acid-acetone-phenol method gave the highest resolution and most reproducible gels. The results from this study can be used in sample preparation of oil palm tissue for proteomics work.
Gong, Sheng-Xiang; Wang, Xia; Li, Lei; Wang, Ming-Lin; Zhao, Ru-Song
2015-11-01
In this paper, a novel and simple method for the sensitive determination of the endocrine disrupting compounds octylphenol (OP) and nonylphenol (NP) in environmental water samples has been developed using solid-phase microextraction (SPME) coupled with gas chromatography-mass spectrometry. Carboxylated carbon nano-spheres (CNSs-COOH) are used as a novel SPME coating via physical adhesion. The CNSs-COOH fiber possessed higher adsorption efficiency than a 100 μm polydimethylsiloxane (PDMS) fiber and was comparable to an 85 μm polyacrylate (PA) fiber for the two analytes. Important parameters, such as extraction time, pH, agitation speed, ionic strength, and desorption temperature and time, were investigated and optimized in detail. Under the optimal parameters, the developed method achieved low limits of detection of 0.13~0.14 ng·L(-1) and a wide linear range of 1~1000 ng·L(-1) for OP and NP. The novel method was validated with several real environmental water samples, and satisfactory results were obtained.
Optimal measurement counting time and statistics in gamma spectrometry analysis: The time balance
NASA Astrophysics Data System (ADS)
Joel, Guembou Shouop Cebastien; Penabei, Samafou; Maurice, Ndontchueng Moyo; Gregoire, Chene; Jilbert, Nguelem Mekontso Eric; Didier, Takoukam Serge; Werner, Volker; David, Strivay
2017-01-01
The optimal measurement counting time for gamma-ray spectrometry analysis using HPGe detectors was determined in our laboratory by comparing a twelve-hour measurement taken during the day with a twelve-hour measurement taken at night. For the same sample, the day spectrum does not fully match the night spectrum; the perturbation is attributed to sunlight. After several investigations it became clear that, to remove all effects of external radiation (from the earth, the sun, and the wider universe) on our system, the background must be measured for 24, 48, or 72 hours. Likewise, samples must be measured for 24, 48, or 72 hours so that day and night contributions are balanced and the measurement is purified. A background measured in winter should likewise not be used in summer. Depending on the energy of the radionuclide sought, it is clear that the most important steps of a gamma spectrometry measurement are the preparation of the sample and the calibration of the detector.
An 18S rRNA Workflow for Characterizing Protists in Sewage, with a Focus on Zoonotic Trichomonads.
Maritz, Julia M; Rogers, Krysta H; Rock, Tara M; Liu, Nicole; Joseph, Susan; Land, Kirkwood M; Carlton, Jane M
2017-11-01
Microbial eukaryotes (protists) are important components of terrestrial and aquatic environments, as well as animal and human microbiomes. Their relationships with metazoa range from mutualistic to parasitic and zoonotic (i.e., transmissible between humans and animals). Despite their ecological importance, our knowledge of protists in urban environments lags behind that of bacteria, largely due to a lack of experimentally validated high-throughput protocols that produce accurate estimates of protist diversity while minimizing non-protist DNA representation. We optimized protocols for detecting zoonotic protists in raw sewage samples, with a focus on trichomonad taxa. First, we investigated the utility of two commonly used variable regions of the 18S rRNA marker gene, V4 and V9, by amplifying and Sanger sequencing 23 different eukaryotic species, including 16 protist species such as Cryptosporidium parvum, Giardia intestinalis, Toxoplasma gondii, and species of trichomonad. Next, we optimized wet-lab methods for sample processing and Illumina sequencing of both regions from raw sewage collected from a private apartment building in New York City. Our results show that both regions are effective at identifying several zoonotic protists that may be present in sewage. A combination of small extractions (1 mL volumes) performed on the same day as sample collection, and the incorporation of a vertebrate blocking primer, is ideal to detect protist taxa of interest and combat the effects of metazoan DNA. We expect that the robust, standardized methods presented in our workflow will be applicable to investigations of protists in other environmental samples, and will help facilitate large-scale investigations of protistan diversity.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Camin, Federica; Pavone, Anita; Bontempo, Luana; Wehrens, Ron; Paolini, Mauro; Faberi, Angelo; Marianella, Rosa Maria; Capitani, Donatella; Vista, Silvia; Mannina, Luisa
2016-04-01
Isotope Ratio Mass Spectrometry (IRMS), (1)H Nuclear Magnetic Resonance ((1)H NMR), conventional chemical analysis and chemometric elaboration were used to assess quality and to define and confirm the geographical origin of 177 Italian PDO (Protected Denomination of Origin) olive oils and 86 samples imported from Tunisia. Italian olive oils were richer in squalene and unsaturated fatty acids, whereas Tunisian olive oils showed higher values of δ(18)O, δ(2)H, linoleic acid, saturated fatty acids, β-sitosterol, and sn-1,3-diglycerides. Furthermore, all the imported Tunisian samples were of poor quality, with K232 and/or acidity values above the limits established for extra virgin olive oils. By combining isotopic composition with (1)H NMR data in a multivariate statistical approach, a statistical model able to discriminate olive oils from Italy from those imported from Tunisia was obtained, with a differentiation ability of around 98%. Copyright © 2015 Elsevier Ltd. All rights reserved.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for several management tasks. Planning the spatial distribution and number of pressure observations is known as sampling design and has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
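As a concrete illustration of the modularity metric that drives this segmentation, the sketch below computes the standard weighted modularity of a candidate partition with networkx. It is a minimal stand-in, not the paper's WDN-oriented or sampling-oriented indices: the toy network, the edge weights (playing the role of the task-specific pipe weights), and the two-module partition are all assumptions.

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Toy water-distribution-like network: nodes are junctions, edges are pipes.
# The hypothetical edge weights stand in for the task-specific pipe weights.
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 1.0), (2, 3, 2.0), (3, 1, 1.5),   # module A
    (4, 5, 1.0), (5, 6, 2.5), (6, 4, 1.0),   # module B
    (3, 4, 0.2),                             # weak edge: a candidate conceptual cut
])

# Candidate segmentation into two modules; the cut edge (3, 4) is where a
# closed gate or flow meter would go when forming DMAs.
partition = [{1, 2, 3}, {4, 5, 6}]
Q = modularity(G, partition, weight="weight")
print(f"Weighted modularity of the candidate segmentation: {Q:.3f}")
```

In the paper's multiobjective setting, an index of this kind would be maximized while the number (and hence cost) of pressure meters is minimized.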
Optimism on quality of life in Portuguese chronic patients: moderator/mediator?
Vilhena, Estela; Pais-Ribeiro, José; Silva, Isabel; Pedro, Luísa; Meneses, Rute F; Cardoso, Helena; Silva, António Martins da; Mendonça, Denisa
2014-07-01
Optimism is an important variable that has consistently been shown to affect adjustment to quality of life in chronic diseases. This study aims to clarify whether dispositional optimism exerts a moderating or a mediating influence on the association between personality traits and quality of life in Portuguese chronic patients. Multiple regression models were used to test the moderation and mediation effects of dispositional optimism on quality of life. A sample of 729 patients was recruited in Portugal's main hospitals and completed self-reported questionnaires assessing socio-demographic and clinical variables, personality, dispositional optimism, quality of life (QoL) and subjective well-being (SWB). The results of the regression models showed that dispositional optimism did not moderate the relationships between personality traits and quality of life. After controlling for gender, age, education level and severity of disease perception, the effects of personality traits on QoL and SWB were mediated by dispositional optimism (partially and completely), except for the links between neuroticism/openness to experience and physical health. Dispositional optimism is more likely to play a mediating, rather than a moderating, role in the personality traits-quality of life pathway in Portuguese chronic patients, suggesting that "the expectation that good things will happen" contributes to a better quality of life and subjective well-being.
NASA Astrophysics Data System (ADS)
Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui
2017-04-01
The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a given area and for ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (43 sampling days in total) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs, and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and the standard deviation of relative difference (SDRD)) and on the smallest SDRD; K-means clustering based on soil properties and terrain indices (EFs), on repeated soil moisture measurements (Theta), on EFs plus one-time soil moisture data (EFsTheta), and on the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA); and global and stratified random sampling strategies. Results showed that the TS analysis based on the smallest ITS was better (RMSE = 0.023 m3 m-3) than that based on the smallest SDRD (RMSE = 0.034 m3 m-3). The K-means clustering based on EFsTheta (-PCA) was better (RMSE < 0.020 m3 m-3) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method: forty and 60 sampling sites are needed for stratified and global sampling, respectively, to make their performances comparable to the best K-means method (EFsTheta-PCA). Overall, TS required only one site, but its accuracy was limited. The best K-means method required fewer than 8 sites and yielded high accuracy, but extra soil and terrain information is necessary when using this method. The stratified sampling strategy can only be used if no prior knowledge about soil moisture variation is available. This information will help in selecting the optimal methods for estimating the area-mean soil moisture.
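To illustrate the K-means site-selection idea, the following minimal Python sketch clusters sites in a feature space and keeps the member nearest each centroid as a monitoring site. The feature matrix is synthetic; in the study it would hold the EFsTheta inputs (soil properties, terrain indices, and repeated moisture measurements), and k = 8 echoes the fewer-than-eight sites reported for the best K-means variant.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 77 sites x 10 features, standing in for the
# paper's "EFsTheta" inputs (terrain indices plus moisture measurements).
X = rng.normal(size=(77, 10))
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize before clustering

k = 8                                        # number of sampling sites to keep
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# In each cluster, pick the site closest to the centroid as its representative.
reps = []
for j in range(k):
    members = np.flatnonzero(km.labels_ == j)
    d = np.linalg.norm(X[members] - km.cluster_centers_[j], axis=1)
    reps.append(int(members[np.argmin(d)]))
print("Representative site indices:", sorted(reps))

# The area-mean moisture is then estimated by averaging over these k sites.
```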
Fully automatic characterization and data collection from crystals of biological macromolecules.
Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W
2015-08-01
Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter of eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and high efficiency.
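A minimal sketch of the simulated annealing step follows. The candidate coordinates and the coverage criterion are hypothetical stand-ins for the study's objective (which drew on the road network, historical samples, and elevation data); only the annealing mechanics (propose a swap, always accept improvements, accept deteriorations with a temperature-dependent probability) are generic.

```python
import math
import random

random.seed(42)

# Hypothetical candidate sampling locations along a road network.
candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]

def coverage_cost(selected):
    """Mean distance from every candidate to its nearest selected site,
    a stand-in for the study's spatial-configuration criterion."""
    return sum(min(math.dist(p, s) for s in selected)
               for p in candidates) / len(candidates)

# Simulated annealing over 13-site subsets: swap one site per move.
current = random.sample(candidates, 13)
cost = coverage_cost(current)
T = 1.0
for _ in range(5000):
    proposal = current[:]
    proposal[random.randrange(len(proposal))] = random.choice(candidates)
    new_cost = coverage_cost(proposal)
    if new_cost < cost or random.random() < math.exp((cost - new_cost) / T):
        current, cost = proposal, new_cost
    T *= 0.999                               # geometric cooling schedule

print(f"Final mean nearest-site distance: {cost:.3f}")
```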
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starling, K.E.; Mallinson, R.G.; Li, M.H.
The objective of this research is to examine the relationship between the calorimetric properties of coal fluids and their molecular functional group composition. Coal fluid samples which have had their calorimetric properties measured are characterized using proton NMR, IR, and elemental analysis. These characterizations are then used in a chemical structural model to determine the composition of the coal fluid in terms of the important molecular functional groups. These functional groups are particularly important in determining the intramolecular based properties of a fluid, such as ideal gas heat capacities. Correlational frameworks for ideal gas heat capacities are then examined within an existing equation of state methodology to determine an optimal correlation. The optimal correlation for obtaining the characterization/chemical structure information and the sensitivity of the correlation to the characterization and structural model is examined. 8 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starling, K.E.; Mallinson, R.G.; Li, M.H.
The objective of this research is to examine the relationship between the calorimetric properties of coal liquids and their molecular functional group composition. Coal liquid samples which have had their calorimetric properties measured are characterized using proton NMR, IR and elemental analysis. These characterizations are then used in a chemical structural model to determine the composition of the coal liquid in terms of the important molecular functional groups. These functional groups are particularly important in determining the intramolecular based properties of a fluid, such as ideal gas heat capacities. Correlational frameworks for heat capacities will then be examined within an existing equation of state methodology to determine an optimal correlation. Also, the optimal recipe for obtaining the characterization/chemical structure information and the sensitivity of the correlation to the characterization and structural model will be examined and determined. 7 refs.
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of the apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate the PK profiles of both drugs when co-administered. Then, using the simulated data, population PK models were developed with NONMEM, and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM, computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times in the UDs for SX and MDZ, respectively (nine different sampling times in total), whereas there were only five sampling times in the MD. Whatever the design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX), and the expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in the CL/F estimation. Since the MD required only five sampling times compared with nine for the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A companion paper compares the designs with real data. This global approach, combining PBPK simulations, population PK modelling and multiresponse optimal design, allowed the design, without any in vivo data, of a clinical trial using sparse sampling and capable of estimating the CL/F of the CYP3A4 substrate and potential inhibitor when co-administered.
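The core computation in a D-optimal design of this kind is maximizing the determinant of the Fisher information matrix over candidate sampling times. The sketch below is a much-simplified stand-in for the PBPK/NONMEM/PopDes machinery: a one-compartment oral-absorption model with additive error, finite-difference sensitivities, and an exhaustive search over a small time grid; all parameter values are hypothetical.

```python
import itertools
import numpy as np

# One-compartment oral-absorption model (illustrative stand-in for the
# population PK models of the study); theta = (ka, ke, V), dose D.
def conc(t, ka, ke, V, D=100.0):
    return D * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fisher_det(times, theta=(1.2, 0.15, 30.0), h=1e-5):
    """det of the (additive-error) Fisher information for a design `times`,
    with sensitivities from central finite differences."""
    times = np.asarray(times, dtype=float)
    S = np.empty((times.size, len(theta)))
    for j in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[j] += h
        dn[j] -= h
        S[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h)
    return np.linalg.det(S.T @ S)

# Exhaustive search for the D-optimal 5-point design on a candidate grid.
grid = [0.25, 0.5, 1, 2, 4, 6, 8, 12, 24]
best = max(itertools.combinations(grid, 5), key=fisher_det)
print("D-optimal sampling times (h):", best)
```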
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
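For readers unfamiliar with the analysis step that SEOD wraps, here is a minimal stochastic EnKF update in Python. The state dimension, observation operator, and noise levels are toy assumptions; the paper's design question, which observations to collect as scored by SD, DFS, or RE, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, H, R):
    """Stochastic EnKF analysis step. `ensemble` is (n_state, n_members),
    `obs` the observation vector, H a linear observation operator,
    R the observation-error covariance."""
    n_ens = ensemble.shape[1]
    Hx = H @ ensemble
    # One perturbed copy of the observations per ensemble member.
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=n_ens).T
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = Hx - Hx.mean(axis=1, keepdims=True)
    P_hh = HA @ HA.T / (n_ens - 1) + R       # innovation covariance
    P_xh = A @ HA.T / (n_ens - 1)            # state-observation cross covariance
    K = P_xh @ np.linalg.inv(P_hh)           # Kalman gain
    return ensemble + K @ (obs_pert - Hx)

# Toy example: 5 parameters, 2 observations, 100 ensemble members.
ens = rng.normal(size=(5, 100))
H = rng.normal(size=(2, 5))
R = 0.1 * np.eye(2)
ens_a = enkf_update(ens, np.array([0.5, -0.2]), H, R)
print("Posterior ensemble spread:", ens_a.std(axis=1).round(3))
```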
Parent driver characteristics associated with sub-optimal restraint of child passengers.
Winston, Flaura K; Chen, Irene G; Smith, Rebecca; Elliott, Michael R
2006-12-01
To identify parent driver demographic and socioeconomic characteristics associated with the use of sub-optimal restraints for child passengers under nine years. Cross-sectional study using in-depth, validated telephone interviews with parent drivers in a probability sample of 3,818 vehicle crashes involving 5,146 children. Sub-optimal restraint was defined as use of forward-facing child safety seats for infants under one or weighing under 20 lbs, and any seat-belt use for children under 9. Sub-optimal restraint was more common among children under one and between four and eight years than among children aged one to three years (18%, 65%, and 5%, respectively). For children under nine, independent risk factors for sub-optimal restraint were: non-Hispanic black parent drivers (with non-Hispanic white parents as reference, adjusted relative risk, adjusted RR = 1.24, 95% CI: 1.09-1.41); less educated parents (with college graduate or above as reference: high school, adjusted RR = 1.27, 95% CI: 1.12-1.44; less than high school graduate, adjusted RR = 1.36, 95% CI: 1.13-1.63); and lower family income (with $50,000 or more as reference: <$20,000, adjusted RR = 1.23, 95% CI: 1.07-1.40). Multivariate analysis revealed the following independent risk factors for sub-optimal restraint among four-to-eight-year-olds: older parent age, limited education, black race, and income below $20,000. Parents with low educational levels or of non-Hispanic black background may require additional anticipatory guidance regarding child passenger safety. The importance of poverty in predicting sub-optimal restraint underscores the importance of child restraint and booster seat disbursement and education programs, potentially through Medicaid.
ERIC Educational Resources Information Center
Chalmers, R. Philip; Counsell, Alyssa; Flora, David B.
2016-01-01
Differential test functioning, or DTF, occurs when one or more items in a test demonstrate differential item functioning (DIF) and the aggregate of these effects are witnessed at the test level. In many applications, DTF can be more important than DIF when the overall effects of DIF at the test level can be quantified. However, optimal statistical…
Urke, Helga Bjørnøy; Contreras, Mariela; Matanda, Dennis Juma
2018-01-01
Optimal early childhood development (ECD) is currently jeopardized for more than 250 million children under five in low- and middle-income countries. The Sustainable Development Goals have called for a renewed emphasis on children's wellbeing, encompassing a holistic approach that ensures nurturing care to facilitate optimal child development. In vulnerable contexts, the extent of a family's available resources can influence a child's potential of reaching its optimal development. Few studies have examined these relationships in low- and middle-income countries using nationally representative samples. The present paper explored the relationships between maternal and paternal psychosocial stimulation of the child, as well as maternal and household resources, and ECD among 2729 children 36–59 months old in Honduras. Data from the Demographic and Health Surveys conducted in 2011–2012 were used. Adjusted logistic regression analyses showed that maternal psychosocial stimulation was positively and significantly associated with ECD in the full, rural, and lowest wealth quintile samples. These findings underscore the importance of maternal engagement in facilitating ECD but also highlight the role of context when designing tailored interventions to improve ECD. PMID:29735895
Smokowski, Paul R; Evans, Caroline B R; Cotter, Katie L; Webber, Kristina C
2014-03-01
Mental health functioning in American Indian youth is an understudied topic. Given the increased rates of depression and anxiety in this population, further research is needed. Using multiple group structural equation modeling, the current study illuminates the effect of ethnic identity on anxiety symptoms, depressive symptoms, and externalizing behavior in a group of Lumbee adolescents and a group of Caucasian, African American, and Latino/Hispanic adolescents. This study examined two possible pathways (i.e., future optimism and self-esteem) through which ethnic identity is associated with adolescent mental health. The sample (N = 4,714) is 28.53% American Indian (Lumbee) and 51.38% female. The study findings indicate that self-esteem significantly mediated the relationships between ethnic identity and anxiety symptoms, depressive symptoms, and externalizing behavior for all racial/ethnic groups (i.e., the total sample). Future optimism significantly mediated the relationship between ethnic identity and externalizing behavior for all racial/ethnic groups and was a significant mediator between ethnic identity and depressive symptoms for American Indian youth only. Fostering ethnic identity in all youth serves to enhance mental health functioning, but is especially important for American Indian youth due to the collective nature of their culture.
Sharif, K M; Rahman, M M; Azmir, J; Khatib, A; Sabina, E; Shamsudin, S H; Zaidul, I S M
2015-12-01
Multivariate analysis of thin-layer chromatography (TLC) images was used to model and predict the antioxidant activity of Pereskia bleo leaves and to identify the compounds contributing to this activity. TLC was developed in a mobile phase optimized with the 'PRISMA' optimization method, and the image was then converted to wavelet signals and imported for multivariate analysis. An orthogonal partial least squares (OPLS) model was developed with the wavelet-converted TLC image and the 2,2-diphenyl-1-picrylhydrazyl free radical scavenging activity of 24 different preparations of P. bleo as the x- and y-variables, respectively. The quality of the constructed OPLS model (1 + 1 + 0), with one predictive and one orthogonal component, was evaluated by internal and external validity tests. The validated model was then used to identify the contributing spot on the TLC plate, which was analyzed by GC-MS after trimethylsilyl derivatization. Glycerol and amine compounds were found to be the main contributors to the antioxidant activity of the sample. An alternative method to predict the antioxidant activity of a new sample of P. bleo leaves has been developed. Copyright © 2015 John Wiley & Sons, Ltd.
Izco, J M; Tormo, M; Harris, A; Tong, P S; Jimenez-Flores, R
2003-01-01
Quantification of phosphate and citrate compounds is very important because their distribution between the soluble and colloidal phases of milk, and their interactions with milk proteins, influence the stability and some functional properties of dairy products. The aim of this work was to optimize and validate a capillary electrophoresis method for the rapid determination of these compounds in milk. Various parameters affecting the analysis were optimized, including the type, composition, and pH of the electrolyte, and the sample extraction. Ethanol, acetonitrile, sulfuric acid, and water at 50 degrees C or at room temperature were tested as sample buffers (SB). Water at room temperature yielded the best overall results and was chosen for further validation. The extraction time was checked and could be shortened to less than 1 min. Sample preparation was also simplified to pipetting 12 μl of milk into 1 ml of water containing 20 ppm of tartaric acid as an internal standard. The linearity of the method was excellent (R2 > 0.999), with CV values of the response factors < 3%. The detection limits for phosphate and citrate were 5.1 and 2.4 nM, respectively. The accuracy of the method was calculated for each compound (103.2 and 100.3%). In addition, the citrate and phosphate content of several commercial milk samples was analyzed by this method, and the results deviated by less than 5% from the values obtained when analyzing the samples by official methods. To study the versatility of the technique, other dairy products such as cream cheese, yogurt, and Cheddar cheese were analyzed, and the accuracy was similar to that for milk in all products tested. The procedure is rapid and offers very fast and simple sample preparation. Once the sample has arrived at the laboratory, less than 5 min (including handling, preparation, running, integration, and quantification) are necessary to determine the concentrations of citric acid and inorganic phosphate. Because of its speed and accuracy, this method is promising as an analytical quantitative testing technique.
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test of the cost-effectiveness of an intervention.
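The flavor of such budget-constrained optima can be seen in the classic univariate result (e.g., Raudenbush, 1997), where the variance-minimizing number of persons per cluster depends only on the cluster-to-person cost ratio and the ICC. The sketch below implements that simpler case, not the paper's bivariate cost-effectiveness or maximin formulas; all cost figures are hypothetical.

```python
import math

def optimal_cluster_size(cost_cluster, cost_subject, icc):
    """Classic univariate rule: persons per cluster minimizing the variance
    of the treatment effect for a fixed budget (not the paper's bivariate
    cost-effectiveness result, which generalizes it)."""
    return math.sqrt((cost_cluster / cost_subject) * (1 - icc) / icc)

def affordable_clusters(budget, cost_cluster, cost_subject, n_per_cluster):
    return budget / (cost_cluster + cost_subject * n_per_cluster)

icc = 0.05
n_star = optimal_cluster_size(cost_cluster=500, cost_subject=25, icc=icc)
k = affordable_clusters(budget=50_000, cost_cluster=500,
                        cost_subject=25, n_per_cluster=round(n_star))
print(f"~{n_star:.0f} persons per cluster, ~{k:.0f} clusters within budget")
```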
Zimmerman, Rick S; Kirschbaum, Allison L
2018-02-01
HIV treatment optimism and the way news of biomedical advances in HIV is presented to the most at-risk communities interact in ways that affect risk behavior and the incidence of HIV. The goal of the current study was to understand the relationships among HIV treatment optimism, knowledge of HIV biomedical advances, and current and expected increases in risk behavior in response to hypothetical news stories of further advances. Most of an online-recruited sample of MSM were quite knowledgeable about current biomedical advances. After reading three hypothetical news stories, 15-24% of those not living with HIV and 26-52% of those living with HIV reported that their condom use would decrease if the story they read were true. The results suggest the importance of more cautious reporting on HIV biomedical advances, and of targeting individuals with greater treatment optimism and those living with HIV via the organizations where they are most likely to receive their information about HIV.
Cao, Qi; Leung, K M
2014-09-22
Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
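A minimal reconstruction of the DE-SVC idea, assuming the usual reading that "the main parameters" of an RBF support vector classifier are C and gamma: SciPy's differential evolution searches the two hyperparameters on a log10 scale against a cross-validated accuracy objective. The data set is a synthetic stand-in for the molecular descriptor and fingerprint matrices.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the descriptor/fingerprint matrix and labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def neg_cv_accuracy(params):
    """DE minimizes, so return the negative cross-validated accuracy."""
    log_C, log_gamma = params
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=5).mean()

result = differential_evolution(
    neg_cv_accuracy,
    bounds=[(-2, 3), (-4, 1)],               # log10(C), log10(gamma)
    seed=0, maxiter=10, tol=1e-3)
print(f"Best C=10^{result.x[0]:.2f}, gamma=10^{result.x[1]:.2f}, "
      f"CV accuracy={-result.fun:.3f}")
```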
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solutions an algorithm obtains for practical problems; the lack of such evaluation greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and of the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance, dividing the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, using standard statistical techniques, the evaluation result is obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images sixfold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
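The non-uniform sampling idea can be sketched as drawing LED positions from a density that decays with spatial frequency. In the sketch below, the grid size, falloff profile, and scale constant are assumptions; only the counts (68 drawn positions versus the 137 images of the uniform-grid setup) echo the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15                                    # hypothetical n x n LED array

# Radially decaying sampling density: dense near the DC term, where most
# biological samples concentrate their signal energy, sparse at the edges.
ky, kx = np.mgrid[-(n // 2):n // 2 + 1, -(n // 2):n // 2 + 1]
density = 1.0 / (1.0 + (np.hypot(kx, ky) / 3.0) ** 2)   # assumed falloff
density /= density.sum()

# Draw 68 distinct LED positions out of the n*n grid.
flat_idx = rng.choice(n * n, size=68, replace=False, p=density.ravel())
mask = np.zeros(n * n, dtype=bool)
mask[flat_idx] = True
mask = mask.reshape(n, n)
print(f"Sampled {mask.sum()} of {n * n} LED positions")
```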
Fernández-Silva, I; Sanmartín, P; Silva, B; Moldes, A; Prieto, B
2011-01-01
Biological colonization of rock surfaces constitutes an important problem for maintenance of buildings and monuments. In this work, we aim to establish an efficient extraction protocol for chlorophyll-a specific for rock materials, as this is one of the most commonly used biomarkers for quantifying phototrophic biomass. For this purpose, rock samples were cut into blocks, and three different mechanical treatments were tested, prior to extraction in dimethyl sulfoxide (DMSO). To evaluate the influence of the experimental factors (1) extractant-to-sample ratio, (2) temperature, and (3) time of incubation, on chlorophyll-a recovery (response variable), incomplete factorial designs of experiments were followed. Temperature of incubation was the most relevant variable for chlorophyll-a extraction. The experimental data obtained were analyzed following a response surface methodology, which allowed the development of empirical models describing the interrelationship between the considered response and experimental variables. The optimal extraction conditions for chlorophyll-a were estimated, and the expected yields were calculated. Based on these results, we propose a method involving application of ultrasound directly to intact sample, followed by incubation in 0.43 ml DMSO/cm(2) sample at 63°C for 40 min. Confirmation experiments were performed at the predicted optimal conditions, allowing chlorophyll-a recovery of 84.4 ± 11.6% (90% was expected), which implies a substantial improvement with respect to the expected recovery using previous methods (68%). This method will enable detection of small amounts of photosynthetic microorganisms and quantification of the extent of biocolonization of stone surfaces.
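To make the response-surface step concrete, here is a minimal quadratic RSM fit in Python. The twelve runs and recovery values are invented for illustration (the study's actual design and yields differ); the workflow (build a full quadratic design matrix, solve least squares, scan the fitted surface for its optimum) is the generic RSM recipe.

```python
import numpy as np

# Hypothetical runs: extractant ratio x1 (mL/cm^2), temperature x2 (deg C),
# incubation time x3 (min), with chlorophyll-a recovery (%) as the response.
X = np.array([[0.2, 40, 20], [0.6, 40, 60], [0.2, 70, 60], [0.6, 70, 20],
              [0.4, 55, 40], [0.4, 55, 40], [0.2, 55, 40], [0.6, 55, 40],
              [0.4, 40, 40], [0.4, 70, 40], [0.4, 55, 20], [0.4, 55, 60]])
y = np.array([55, 62, 78, 70, 83, 82, 75, 79, 68, 84, 74, 80.0])

def design_matrix(X):
    """Full quadratic model: intercept, linear, interaction, square terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Locate the optimum by scanning the fitted surface on a fine grid.
g = np.stack(np.meshgrid(np.linspace(0.2, 0.6, 21),
                         np.linspace(40, 70, 31),
                         np.linspace(20, 60, 21)), axis=-1).reshape(-1, 3)
pred = design_matrix(g) @ beta
print("Predicted optimum:", g[pred.argmax()], f"-> {pred.max():.1f}% recovery")
```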
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and on appropriate solution methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computation. To address this issue, we establish a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of preconditioning matrix and of its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, in which the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that an unequal directional sampling interval weakens the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, can reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
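A compact illustration of a preconditioned BiCGSTAB solve with SciPy follows. The operator is a shifted 2D Laplacian standing in for the average-derivative optimal frequency-domain system, and an incomplete-LU factorization replaces the paper's multigrid preconditioner, since a faithful multigrid setup would exceed a short sketch.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# Shifted 2D Laplacian: a crude Helmholtz-type stand-in for the paper's
# average-derivative optimal frequency-domain operator.
n = 64
I = sp.identity(n)
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I) + 0.3 * sp.identity(n * n)).tocsc()

b = np.zeros(n * n)
b[n * n // 2 + n // 2] = 1.0             # point source near the middle

# ILU preconditioner as a simple substitute for multigrid; the iterative
# solver itself is the same BiCGSTAB as in the paper.
ilu = spilu(A)
M = LinearOperator(A.shape, ilu.solve)

x, info = bicgstab(A, b, M=M)
print("Converged" if info == 0 else f"BiCGSTAB info={info}")
```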
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the high resolution and the wide field of view required to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias sampling toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e., in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, and more generally can yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling for image reconstruction quality given prior knowledge of the image geometry in weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images when compared with prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
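The conventional prescription the authors start from, biasing the sampling density toward the k-space energy of the binarized geometry, can be sketched as follows. The channel geometry, the mixing weight, and the 25% sub-sampling rate are assumptions; the paper's refinement, tuning the distribution via the marginal change of acquired signal with sub-sampling rate, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Binarized prior geometry: a narrow horizontal channel in a 128 x 128 image.
geom = np.zeros((128, 128))
geom[60:68, :] = 1.0

# k-space energy of the prior geometry.
energy = np.abs(np.fft.fftshift(np.fft.fft2(geom))) ** 2

# Weighted random mask at 25% sub-sampling: energy-biased, with a uniform
# floor so incoherent high-frequency samples are still drawn.
p = energy / energy.sum()
p = 0.7 * p + 0.3 / p.size                   # hypothetical mixing weight
keep = rng.choice(p.size, size=p.size // 4, replace=False, p=p.ravel())
mask = np.zeros(p.size, dtype=bool)
mask[keep] = True
mask = mask.reshape(geom.shape)
print(f"Sub-sampling rate: {mask.mean():.2f}")
```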
Space Weather Activities of IONOLAB Group: TEC Mapping
NASA Astrophysics Data System (ADS)
Arikan, F.; Yilmaz, A.; Arikan, O.; Sayin, I.; Gurun, M.; Akdogan, K. E.; Yildirim, S. A.
2009-04-01
Being a key player in space weather, ionospheric variability affects the performance of both communication and navigation systems. To improve the performance of these systems, the ionosphere has to be monitored. Total Electron Content (TEC), the line integral of the electron density along a ray path, is an important parameter for investigating ionospheric variability. A cost-effective way of obtaining TEC is to use dual-frequency GPS receivers. Since these measurements are sparse in space, accurate and robust interpolation techniques are needed to interpolate (or map) the TEC distribution over a given region. However, TEC data derived from GPS measurements contain measurement noise as well as model and computational errors. Thus, it is necessary to analyze the interpolation performance of the techniques on synthetic data sets that can represent various ionospheric states; in this way, interpolation performance can be compared over many parameters that are controlled to represent the desired states. In this study, Multiquadrics, Inverse Distance Weighting (IDW), Cubic Splines, Ordinary and Universal Kriging, Random Field Priors (RFP), Multi-Layer Perceptron Neural Networks (MLP-NN), and Radial Basis Function Neural Networks (RBF-NN) are employed as spatial interpolation algorithms. These mapping techniques are first applied to synthetic TEC surfaces for parameter and coefficient optimization and for the determination of error bounds. Their interpolation performance is compared on synthetic TEC surfaces over the sampling pattern, the number of samples, the variability of the surface, and the trend type of the TEC surface. Examining the performance of the interpolation methods shows that Kriging, RFP, and NN each have important advantages and possible disadvantages depending on the given constraints. It is also observed that the determining parameter in the error performance is the trend in the ionosphere. Optimization of the algorithms in terms of their performance parameters (such as the choice of semivariogram function for the Kriging algorithms and the numbers of hidden layers and neurons for the MLP-NN) depends mostly on the behavior of the ionosphere at the given time instant over the desired region. The sampling pattern and the number of samples are the other important parameters that may contribute to higher reconstruction errors: for all of the algorithms listed above, hexagonal regular sampling of the ionosphere provides the lowest reconstruction error, and the performance degrades significantly as the samples in the region become sparse and clustered. The optimized models and coefficients are applied to regional GPS-TEC mapping using IONOLAB-TEC data (www.ionolab.org). Kriging combined with Kalman filtering and dynamic modeling of NN are also implemented as first trials of TEC and space weather prediction.
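Of the interpolators compared, IDW is the simplest to state compactly; the sketch below maps synthetic TEC values from scattered stations onto a regular grid. The region bounds, station count, and synthetic TEC surface are hypothetical and are not IONOLAB data.

```python
import numpy as np

def idw(stations, values, grid_points, power=2.0):
    """Inverse Distance Weighting: estimate TEC at `grid_points` from sparse
    receiver `stations` (lon/lat pairs) with measured TEC `values`."""
    d = np.linalg.norm(grid_points[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                 # avoid division by zero at stations
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(7)
stations = rng.uniform([26, 36], [45, 42], size=(30, 2))   # hypothetical region
tec = 20 + 5 * np.sin(stations[:, 0] / 5)                  # synthetic TEC values

lon, lat = np.meshgrid(np.linspace(26, 45, 50), np.linspace(36, 42, 30))
grid = np.column_stack([lon.ravel(), lat.ravel()])
tec_map = idw(stations, tec, grid).reshape(lat.shape)
print("Interpolated TEC range:",
      tec_map.min().round(1), "-", tec_map.max().round(1))
```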
Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples
NASA Astrophysics Data System (ADS)
Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut
1993-03-01
The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems into the LIBS analysis. Some are related to the particular distribution of contaminants in the grained samples; others are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic-fiber content of the sample. An attempt has been made to optimize the experimental set-up with respect to the various parameters involved. The understanding of these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been obtained. The detection limits are shown to be generally below the concentrations restricted by recent regulations.
Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro
2017-03-01
High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high throughput, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1), a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain optimal quantitative polymerase chain reaction (qPCR)-HRM conditions for screening of APOA1 variants. Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by inspecting the melting curve. Five sets of primers covering the translated region of the APOA1 exons were designed, with expected PCR product sizes of 100-400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C and 40 PCR cycles; amplicon 4A at 62 °C and 45 PCR cycles; and amplicon 4C at 63 °C and 50 PCR cycles. In addition to suitable procedures for DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that an appropriate annealing temperature and number of PCR cycles are important factors in optimizing the HRM technique for variant screening in APOA1.
Sampling solution traces for the problem of sorting permutations by signed reversals
2012-01-01
Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results show that, for testable-sized permutations, the algorithms DFALT and SWA produce distributions which approximate the reversal length distributions observed with a complete enumeration of the set of traces. PMID:22704580
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
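Since particle swarm optimization is the optimizer highlighted here, a minimal swarm in Python follows. The inertia and acceleration constants are common textbook defaults, and the sphere objective is a placeholder: in the paper's setting the objective would be a local or conservative Bayesian D-optimality criterion for the dose-time-response model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: minimizes `objective` over a box."""
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Placeholder objective; a D-optimal design run would plug in the negative
# log-determinant of the model's Fisher information instead.
sphere = lambda z: float(((z - 1.0) ** 2).sum())
best_x, best_f = pso(sphere, bounds=[(-5, 5)] * 4)
print(best_x.round(3), float(best_f))
```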
Zisook, Sidney; Tal, Ilanit; Weingart, Kimberly; Hicks, Paul; Davis, Lori L; Chen, Peijun; Yoon, Jean; Johnson, Gary R; Vertrees, Julia E; Rao, Sanjai; Pilkinton, Patricia D; Wilcox, James A; Sapra, Mamta; Iranmanesh, Ali; Huang, Grant D; Mohamed, Somaia
2016-12-01
Finding effective and lasting treatments for patients with Major Depressive Disorder (MDD) who fail to respond optimally to initial standard treatment is a critical public health imperative. Understanding the nature and characteristics of patients prior to initiating "next-step" treatment is an important component of identifying which specific treatments are best suited for individual patients. We describe the clinical features and demographic characteristics of a sample of Veterans who enrolled in a "next-step" clinical trial after failing to achieve an optimal outcome from at least one well-delivered antidepressant trial. 1522 Veteran outpatients with nonpsychotic MDD completed assessments prior to being randomized to study treatment. Data are summarized and presented in terms of demographic, social, historical and clinical features and compared with a similar, non-Veteran sample. Participants were largely male and white, with about half unmarried and half unemployed. They had moderately severe depression, with about one-third reporting recent suicidal ideation. More than half had chronic and/or recurrent depression. General medical and psychiatric comorbidities were highly prevalent, particularly PTSD. Many had histories of childhood adversity and bereavement. Participants were impaired in multiple domains of their lives and had negative self-worth. These results may not be generalizable to females, and some characteristics may be specific to Veterans of US military service. There were insufficient data on age of clinical onset and depression subtypes, and three novel measures were not psychometrically validated. Characterizing VAST-D participants provides important information to help clinicians understand features that may optimize "next-step" MDD treatments. Published by Elsevier B.V.
Radio frequency coil technology for small-animal MRI.
Doty, F David; Entzminger, George; Kulkarni, Jatin; Pamarthy, Kranti; Staab, John P
2007-05-01
A review of the theory, technology, and use of radio frequency (RF) coils for small-animal MRI is presented. It includes a brief overview of MR signal-to-noise (S/N) analysis and discussions of the various coils commonly used in small-animal MR: surface coils, linear volume coils, birdcages, and their derivatives. The scope is limited to mid-range coils, i.e. coils where the product (fd) of the frequency f and the coil diameter d is in the range 2-30 MHz-m. Common applications include mouse brain and body coils from 125 to 750 MHz, rat body coils up to 500 MHz, and small surface coils at all fields. In this regime, all the sources of loss (coil, capacitor, sample, shield, and transmission lines) are important. All such losses may be accurately captured in some modern full-wave 3D electromagnetics software, and new simulation results are presented for a selection of surface coils using Microwave Studio 2006 by Computer Simulation Technology, showing the dramatic importance of the "lift-off effect". Standard linear circuit simulators have been shown to be useful in optimization of complex coil tuning and matching circuits. There appears to be considerable potential for trading S/N for speed using phased arrays, especially for a larger field of view. Circuit simulators are shown to be useful for optimal mismatching of ultra-low-noise preamps based on the enhancement-mode pseudomorphic high-electron-mobility transistor for optimal coil decoupling in phased arrays. Cryogenically cooled RF coils are shown to offer considerable opportunity for future gains in S/N in smaller samples.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2013-01-01
Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities and that have prompted the development of various extraction techniques and strategies for their better utilisation. To develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially, a Plackett-Burman design matrix was applied to identify the most significant extraction variables among microwave power, irradiation time, particle size, solvent-to-sample ratio, solvent strength and pre-leaching time. Among the six variables tested, microwave power, irradiation time and solvent-to-sample ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a quadratic polynomial equation generated from a Box-Behnken design to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were a microwave power of 65.67% of 700 W, an extraction time of 4.27 min and a solvent-to-sample ratio of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM-predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to fit the experimental data well. MAE proved to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption than conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
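As a rough illustration of the response-surface step described above (not the study's actual design or data), the following Python sketch fits a quadratic polynomial to three coded factors, standing in for microwave power, irradiation time and solvent-to-sample ratio, and solves for the stationary point; the synthetic responses are generated from an assumed surface purely for demonstration.

```python
import numpy as np
from itertools import combinations

# Synthetic illustration only: three coded factors, responses drawn from an
# assumed quadratic surface plus noise (not measured data).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (15, 3))          # 15 design points in coded units
def true_surface(x):                      # assumed "ground truth" for the demo
    return 18 - 2*(x[0]-0.3)**2 - 1.5*(x[1]-0.1)**2 - (x[2]-0.2)**2
y = np.array([true_surface(x) for x in X]) + rng.normal(0, 0.05, len(X))

def quad_design_matrix(X):
    """Columns: 1, linear, interaction, and square terms of each factor."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i] ** 2 for i in range(3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)

# Stationary point of y = b0 + b'x + x'Bx solves 2Bx = -b.
b = beta[1:4]
B = np.zeros((3, 3))
for kk, (i, j) in enumerate(combinations(range(3), 2)):
    B[i, j] = B[j, i] = beta[4 + kk] / 2
B[np.diag_indices(3)] = beta[7:10]
x_opt = np.linalg.solve(2 * B, -b)
print("stationary point (coded units):", x_opt)
```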
NASA Astrophysics Data System (ADS)
Piazzi, L.; Bonaviri, C.; Castelli, A.; Ceccherelli, G.; Costa, G.; Curini-Galletti, M.; Langeneck, J.; Manconi, R.; Montefalcone, M.; Pipitone, C.; Rosso, A.; Pinna, S.
2018-07-01
In the Mediterranean Sea, Cystoseira species are the most important canopy-forming algae on shallow rocky bottoms, hosting highly biodiverse sessile and mobile communities. A large-scale study was carried out to investigate the structure of the Cystoseira-dominated assemblages at different spatial scales and to test the hypotheses that the alpha and beta diversity of the assemblages, and the abundance and structure of the epiphytic macroalgae, epilithic macroalgae, sessile macroinvertebrates and mobile macroinvertebrates associated with Cystoseira beds, changed among scales. A hierarchical sampling design in a total of five sites across the Mediterranean Sea (Croatia, Montenegro, Sardinia, Tuscany and Balearic Islands) was used. A total of 597 taxa associated with Cystoseira beds were identified, with a mean number per sample ranging between 141.1 ± 6.6 (Tuscany) and 173.9 ± 8.5 (Sardinia). High variability at the small (among samples) and large (among sites) scales was generally highlighted, but the studied assemblages showed different patterns of spatial variability. The relative importance of the different scales of spatial variability should be considered to optimize sampling designs and propose monitoring plans for this habitat.
Optimizing physical energy functions for protein folding.
Fujitsuka, Yoshimi; Takada, Shoji; Luthey-Schulten, Zaida A; Wolynes, Peter G
2004-01-01
We optimize a physical energy function for proteins with the use of the available structural database and perform three benchmark tests of the performance: (1) recognition of native structures in the background of predefined decoy sets of Levitt, (2) de novo structure prediction using fragment assembly sampling, and (3) molecular dynamics simulations. The energy parameter optimization is based on energy landscape theory and uses a Monte Carlo search to find a set of parameters that maximizes the ratio δE_s/ΔE for all proteins in a training set simultaneously. Here, δE_s is the stability gap between the native state and the average of the denatured states, and ΔE is the energy fluctuation among the denatured states. Some of the optimized energy parameters are found to show significant correlation with experimentally observed quantities: (1) In the recognition test, the optimized function assigns the lowest energy to either the native or a near-native structure among many decoy structures for all the proteins studied. (2) Structure prediction with fragment assembly sampling gives structure models with root mean square deviation less than 6 Å in one of the top five cluster centers for five of six proteins studied. (3) Structure prediction using molecular dynamics simulation gives poorer performance, implying the importance of a more precise description of local structures. The physical energy function, inferred solely from a structural database, uses neither sequence information from the target's family nor the outcome of secondary structure prediction, yet produces the correct native fold for many small proteins. Copyright 2003 Wiley-Liss, Inc.
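The parameter search described above can be sketched in a few lines of Python on a toy problem: energies are linear in the parameters, one synthetic native structure competes with a set of synthetic decoys, and a Metropolis-style Monte Carlo walk maximizes the stability-gap ratio δE_s/ΔE. All data and settings are illustrative stand-ins, not the paper's training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each structure is summarized by a feature vector phi; the energy
# is linear in the parameters w, E = w . phi. One native + many decoys.
n_feat = 10
phi_native = rng.normal(0, 1, n_feat)
phi_decoys = rng.normal(0.5, 1, (200, n_feat))   # synthetic decoy set

def stability_ratio(w):
    """delta_E_s / Delta_E: stability gap over decoy energy fluctuation."""
    e_nat = w @ phi_native
    e_dec = phi_decoys @ w
    return (e_dec.mean() - e_nat) / (e_dec.std() + 1e-12)

# Metropolis-style Monte Carlo search for parameters maximizing the ratio.
w = rng.normal(0, 1, n_feat)
best, best_r = w.copy(), stability_ratio(w)
T = 0.1
for step in range(5000):
    trial = w + rng.normal(0, 0.05, n_feat)
    trial /= np.linalg.norm(trial)        # fix the scale (ratio is scale-free)
    dr = stability_ratio(trial) - stability_ratio(w)
    if dr > 0 or rng.random() < np.exp(dr / T):
        w = trial
        if stability_ratio(w) > best_r:
            best, best_r = w.copy(), stability_ratio(w)
print("best delta_Es/Delta_E found:", round(best_r, 3))
```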
Bignardi, Chiara; Cavazza, Antonella; Laganà, Carmen; Salvadeo, Paola; Corradini, Claudio
2018-01-01
The interest towards "substances of emerging concern" in objects intended to come into contact with food has recently been growing. Such substances can be found in traces in simulants and in food products put in contact with plastic materials. In this context, it is important to set up analytical systems characterized by high sensitivity and to improve detection parameters to enhance signals. This work was aimed at optimizing a method based on UHPLC coupled to high-resolution mass spectrometry to quantify the most common plastic additives, and able to detect the presence of polymer degradation products and coloring agents migrating from reusable plastic containers. The optimization of mass spectrometric parameter settings for quantitative analysis of additives was achieved by a chemometric approach, using full factorial and d-optimal experimental designs, allowing evaluation of possible interactions between the investigated parameters. Results showed that the optimized method had improved sensitivity with respect to existing methods and was successfully applied to the analysis of a complex model food system, chocolate, put in contact with 14 polycarbonate tableware samples. A new procedure for sample pre-treatment was developed and validated, showing high reliability. Results revealed, for the first time, the presence of several molecules migrating into chocolate, in particular plastic additives such as Cyasorb UV5411, Tinuvin 234 and Uvitex OB, and oligomers, whose amounts were found to be correlated with the age and degree of damage of the containers. Copyright © 2017 John Wiley & Sons, Ltd.
Sawoszczuk, Tomasz; Syguła-Cholewińska, Justyna; del Hoyo-Meléndez, Julio M
2015-08-28
The main goal of this work was to optimize an SPME sampling method for measuring microbial volatile organic compounds (MVOCs) emitted by active molds that may deteriorate historical objects. A series of artificially aged model materials resembling those found in historical objects was prepared and evaluated after exposure to four different types of fungi. The investigated pairs consisted of: Alternaria alternata on silk, Aspergillus niger on parchment, Chaetomium globosum on paper and wool, and Cladosporium herbarum on paper. First, the most efficient SPME fiber was selected from the six different types of fibers commercially available, since it was important to find a fiber that absorbs the largest number and the highest amounts of MVOCs. The results led to the selection of the DVB/CAR/PDMS fiber as the most effective SPME fiber for this kind of analysis. The next task was to optimize the time of MVOC extraction on the fiber; a time between 12 and 24 h proved adequate for absorbing a sufficient amount of MVOCs. In the last step, the temperature of MVOC desorption in the GC injection port was optimized; desorption at 250°C yielded chromatograms with the highest abundances of compounds. To the best of our knowledge, this work constitutes the first attempt to optimize the SPME method for sampling MVOCs emitted by molds growing on historical objects. Copyright © 2015 Elsevier B.V. All rights reserved.
2011-02-01
...and 3) assessing whether new wells should be added and where (i.e., network adequacy). • Predict allows import and comparison of new sampling...data against previously estimated trends and maps. Two options include trend flagging and plume flagging to identify potentially anomalous new values
Development of Medical Technology for Contingency Response to Marrow Toxic Agents
2014-10-30
mismatches may differ in their impact on transplant outcome, therefore, it is important to identify and quantify the influence of specific HLA ...evaluate HLA disparity and impact on HSC transplantation by adding selected pairs to the Donor/Recipient Pair project utilizing sample selection...to assay the impact of DNA-based HLA matching on unrelated donor transplant outcome, develop strategies for optimal HLA matching, evaluate the
Heim, Brett C; Ivy, Jamie A; Latch, Emily K
2012-01-01
The addax (Addax nasomaculatus) is a critically endangered antelope that is currently maintained in zoos through regional conservation breeding programs. As for many captive species, incomplete pedigree data currently impede the ability of addax breeding programs to confidently manage the genetics of captive populations and to select appropriate animals for reintroduction. Molecular markers are often used to improve pedigree resolution, thereby improving the long-term effectiveness of genetic management. When developing a suite of molecular markers, it is important to consider the source of DNA, as the utility of markers may vary across DNA sources. In this study, we optimized a suite of microsatellite markers for genotyping captive addax blood samples collected on FTA cards. We amplified 66 microsatellite loci previously described in other artiodactyls. Sixteen markers amplified a single product in addax, but only 5 of these were polymorphic in 37 addax sampled from a captive herd at Fossil Rim Wildlife Center in the US. The suite of microsatellite markers developed in this study provides a new tool for the genetic management of captive addax, and demonstrates that FTA cards can be a useful means of sample storage, provided appropriate loci are used in downstream analyses. © 2011 Wiley Periodicals, Inc.
Determining the optimal forensic DNA analysis procedure following investigation of sample quality.
Hedell, Ronny; Hedman, Johannes; Mostad, Petter
2018-07-01
Crime scene traces of various types are routinely sent to forensic laboratories for analysis, generally with the aim of addressing questions about the source of the trace. The laboratory may choose to analyse the samples in different ways depending on the type and quality of the sample, the importance of the case, and the cost and performance of the available analysis methods. Theoretically well-founded guidelines for the choice of analysis method are, however, lacking in most situations. In this paper, it is shown how such guidelines can be created using Bayesian decision theory. The theory is applied to forensic DNA analysis, showing how the information from the initial qPCR analysis can be utilized. The alternatives for analysis are assumed to be a standard short tandem repeat (STR) DNA assay, the standard assay together with a complementary assay, or cancelling the analysis following quantification. The decision is based on information about the DNA amount and level of DNA degradation of the forensic sample, as well as case circumstances and the cost of analysis. Semi-continuous electropherogram models are used for simulation of DNA profiles and for computation of likelihood ratios. It is shown how tables and graphs, prepared beforehand, can be used to quickly find the optimal decision in forensic casework.
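The decision-theoretic core of such guidelines can be sketched compactly: score each action by its expected utility given the qPCR observations and pick the maximizer. The success-probability curve, costs and utility values in this Python sketch are hypothetical placeholders, not the paper's calibrated electropherogram models.

```python
import numpy as np

# Hypothetical qPCR inputs: DNA amount (pg) and a degradation index. The
# curves and costs below are illustrative stand-ins, not values from the paper.
def p_informative(amount_pg, degradation, extra_assay=False):
    """Probability the analysis yields an informative DNA profile."""
    base = 1 / (1 + np.exp(-(np.log10(amount_pg + 1) - 1.5 - degradation)))
    return min(1.0, base * (1.25 if extra_assay else 1.0))

def expected_utility(action, amount_pg, degradation,
                     value_of_profile=100.0, cost_std=10.0, cost_extra=8.0):
    if action == "cancel":
        return 0.0
    extra = action == "standard+complementary"
    p = p_informative(amount_pg, degradation, extra_assay=extra)
    return p * value_of_profile - cost_std - (cost_extra if extra else 0.0)

def optimal_action(amount_pg, degradation):
    actions = ["cancel", "standard", "standard+complementary"]
    return max(actions, key=lambda a: expected_utility(a, amount_pg, degradation))

for amount, deg in [(5, 1.5), (50, 0.5), (500, 0.1)]:
    print(amount, "pg, degradation", deg, "->", optimal_action(amount, deg))
```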
Soejima, Mikiko; Egashira, Kouichi; Kawano, Hiroyuki; Kawaguchi, Atsushi; Sagawa, Kimitaka; Koda, Yoshiro
2011-01-01
Anhaptoglobinemic patients run the risk of severe anaphylactic transfusion reactions because they produce serum haptoglobin antibodies. Being homozygous for the haptoglobin gene deletion allele (HPdel) is the only known cause of congenital anhaptoglobinemia, and detection of HPdel before transfusion is important to prevent anaphylactic shock. In this study, we developed a loop-mediated isothermal amplification (LAMP)-based screening for HPdel. Optimal primer sets and temperatures for LAMP were selected for HPdel and for the 5′ region of the HP gene, using genomic DNA as a template. Then, the effects of diluent and boiling on LAMP amplification were examined using whole blood as a template. Blood samples diluted 1:100 with 50 mmol/L NaOH without boiling performed as well as those diluted 1:2 with water followed by boiling. The results from 100 blood samples were fully concordant with those obtained by real-time PCR methods. Detection of the HPdel allele by LAMP using alkaline-denatured blood samples is rapid, simple, accurate, and cost-effective, and is readily applicable in various clinical settings because this method requires only basic instruments. In addition, the simple preparation of blood samples using NaOH saves time and effort in various genetic tests. PMID:21497293
Taghvimi, Arezou; Hamishehkar, Hamed
2017-01-15
This paper develops a highly selective, specific and efficient method for the simultaneous determination of ephedrine and methamphetamine using new carbon-coated magnetic nanoparticles (C/MNPs) as a magnetic solid phase extraction (MSPE) adsorbent in a biological urine medium. The synthesized magnetic nanoadsorbent was characterized by Fourier transform infrared (FT-IR) spectroscopy, powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM). Nine important parameters influencing extraction efficiency (adsorbent amount, sample volume, pH, type and amount of the organic extraction solvent, extraction and desorption times, agitation rate, and ionic strength of the extraction medium) were studied and optimized. Under optimized extraction conditions, good linearity was observed over concentration ranges of 100-2000 ng/mL for ephedrine and 100-2500 ng/mL for methamphetamine. Analysis of positive urine samples was carried out with the proposed method, with recoveries of 98.71% and 97.87% for ephedrine and methamphetamine, respectively. The results indicate that carbon-coated magnetic nanoparticles can be applied in clinical and forensic laboratories for the simultaneous determination of abused drugs in urine. Copyright © 2016 Elsevier B.V. All rights reserved.
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm that alternates between minimization with respect to the aperture shapes and the beam intensities. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling of the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
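The intensity-optimization stage can be illustrated on a stripped-down fluence-map problem: a projected gradient method on box-constrained beam intensities, with the gradient estimated on a random subsample of voxels in the spirit of the voxel importance sampling mentioned above. The problem sizes, the fixed step size (in place of the paper's non-monotonic line search), and the purely synthetic dose matrix are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fluence-map problem: A maps beam intensities x to voxel doses, d is the
# prescribed dose. Sizes and entries are synthetic; the real VMAT model also
# couples aperture shapes and machine constraints, which are omitted here.
n_vox, n_beam = 5000, 80
A = rng.random((n_vox, n_beam))
x_true = rng.uniform(0, 1, n_beam)
d = A @ x_true

def grad_subsampled(x, frac=0.1):
    """Gradient of 0.5*||Ax - d||^2 estimated on a random voxel subsample,
    mimicking incremental random sampling of the voxels."""
    idx = rng.choice(n_vox, int(frac * n_vox), replace=False)
    As, ds = A[idx], d[idx]
    return As.T @ (As @ x - ds) * (n_vox / len(idx))

# Projected gradient with a fixed step; the box constraint on the beam
# intensity is enforced by clipping after each step.
x, step = np.zeros(n_beam), 1e-5
for _ in range(1000):
    x = np.clip(x - step * grad_subsampled(x), 0.0, 1.0)
print("relative dose error:", np.linalg.norm(A @ x - d) / np.linalg.norm(d))
```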
Ma, Jian; Yang, Bo; Byrne, Robert H
2012-06-15
Determination of chromate at low concentration levels in drinking water is an important analytical objective for both human health and environmental science. Here we report the use of solid phase extraction (SPE) in combination with a custom-made portable light-emitting diode (LED) spectrophotometer to achieve detection of chromate in the field at nanomolar levels. The measurement chemistry is based on a highly selective reaction between 1,5-diphenylcarbazide (DPC) and chromate under acidic conditions. The Cr-DPC complex formed in the reaction can be extracted on a commercial C18 SPE cartridge. Concentrated Cr-DPC is subsequently eluted with methanol and detected by spectrophotometry. Optimization of analytical conditions involved investigation of reagent compositions and concentrations, eluent type, flow rate (sample loading), sample volume, and stability of the SPE cartridge. Under optimized conditions, detection limits are on the order of 3 nM. Only 50 mL of sample is required for an analysis, and total analysis time is around 10 min. The targeted analytical range of 0-500 nM can be easily extended by changing the sample volume. Compared to previous SPE-based spectrophotometric methods, this analytical procedure offers the benefits of improved sensitivity, reduced sample consumption, shorter analysis time, greater operational convenience, and lower cost. Copyright © 2012 Elsevier B.V. All rights reserved.
Alemtuzumab: validation of a sensitive and simple enzyme-linked immunosorbent assay.
Jilani, Iman; Keating, Michael; Giles, Francis J; O'Brien, Susan; Kantarjian, Hagop M; Albitar, Maher
2004-12-01
Alemtuzumab (MabCampath) is a humanized rat monoclonal antibody that targets the CD52 antigen. It has been approved for the treatment of patients with resistant chronic lymphocytic leukaemia (CLL). Measuring plasma/serum levels of alemtuzumab is important for optimizing the dosing and scheduling of therapy; however, current assays in serum or plasma, based on the capture of alemtuzumab using CD52, are complicated and difficult to adapt for high throughput testing. We developed a simple sandwich enzyme-linked immunosorbent assay (ELISA) to measure alemtuzumab that takes advantage of the remaining rat sequence in alemtuzumab. Using specific anti-rat immunoglobulin (Ig) antibodies (absorbed against human Ig), alemtuzumab levels were measured in the serum and plasma of patients treated with alemtuzumab. Levels were similar between plasma and serum samples, in fresh samples and samples stored at 4 degrees C for 24 h, but were significantly lower in samples stored at room temperature for 24 h. The assay was successfully used to determine serum alemtuzumab pre- and post-treatment. This assay is simple and adaptable for high throughput testing, with a limit of detection of 0.05 microg/ml and a coefficient of variation of +/-12.5%. No false positivity was observed in >200 samples tested. This validated assay should help optimize the dosing and scheduling of alemtuzumab therapy. The underlying principles are also applicable to the measurement of other humanized antibodies using an appropriate anti-Ig.
Forcisi, Sara; Moritz, Franco; Kanawati, Basem; Tziotis, Dimitrios; Lehmann, Rainer; Schmitt-Kopplin, Philippe
2013-05-31
The present review gives an introduction to the concept of metabolomics and provides an overview of the analytical tools applied in non-targeted metabolomics, with a focus on liquid chromatography (LC). LC is a powerful analytical tool in the study of complex sample matrices. Its further development and configuration as Ultra-High Pressure Liquid Chromatography (UHPLC) provides the largest known liquid chromatographic resolution and peak capacity. Accordingly, UHPLC plays an important role in the separation and subsequent metabolite identification of complex molecular mixtures such as bio-fluids. The most sensitive detectors for these purposes are mass spectrometers. Almost any mass analyzer can be optimized to identify and quantify small pre-defined sets of targets; however, the number of analytes in metabolomics is far greater, and optimized protocols for quantification of large sets of targets may be rendered inapplicable. Results of small target set analyses on different sample matrices are easily comparable with each other. In non-targeted metabolomics, there is almost no analytical method that is applicable to all matrices, owing to limitations of mass analyzers and chromatographic tools. The specifications of the most important interfaces and mass analyzers are discussed. We additionally provide an exemplary application to demonstrate the level of complexity that remains intractable to date. The potential of coupling a high-field Fourier Transform Ion Cyclotron Resonance Mass Spectrometer (ICR-FT/MS), the mass analyzer with the largest known mass resolving power, to UHPLC is illustrated with an example of one pre-treated human plasma sample. This experimental example illustrates one way of overcoming the need for faster scanning rates in coupling with UHPLC. The experiment enabled the extraction of thousands of features (analytical signals). A small subset of this compositional space could be mapped into a mass difference network whose topology shows specificity toward putative metabolite classes and retention time. Copyright © 2013 Elsevier B.V. All rights reserved.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extrema of the current metamodel and the minima of a density function. Repeating this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined on typical numerical examples. PMID:25133206
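A minimal one-dimensional Python sketch of this kind of sequential scheme appears below: a Gaussian RBF metamodel is refit after each cycle, and two points are added per cycle, one at the current metamodel's minimum and one in the sparsest region of the design (a simple stand-in for the density-function criterion). The test function, RBF width and cycle count are illustrative choices, not the paper's formulation.

```python
import numpy as np

def rbf_fit(X, y, eps=2.0):
    """Gaussian RBF interpolation weights (small ridge for conditioning)."""
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def rbf_eval(Xs, X, w, eps=2.0):
    return np.exp(-eps * (Xs[:, None] - X[None, :]) ** 2) @ w

f = lambda x: np.sin(3 * x) + 0.5 * x        # expensive simulation stand-in
grid = np.linspace(0, 3, 400)
X = np.array([0.0, 1.5, 3.0])                # initial design
y = f(X)

for cycle in range(6):
    w = rbf_fit(X, y)
    pred = rbf_eval(grid, X, w)
    x_min = grid[np.argmin(pred)]            # extremum of current metamodel
    dists = np.min(np.abs(grid[:, None] - X[None, :]), axis=1)
    x_fill = grid[np.argmax(dists)]          # sparsest region (density stand-in)
    for xn in (x_min, x_fill):
        if np.min(np.abs(X - xn)) > 0.01:    # avoid near-duplicate points
            X, y = np.append(X, xn), np.append(y, f(xn))

w = rbf_fit(X, y)
print("metamodel minimum near:", grid[np.argmin(rbf_eval(grid, X, w))])
```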
An optimized method for measuring fatty acids and cholesterol in stable isotope-labeled cells
Argus, Joseph P.; Yu, Amy K.; Wang, Eric S.; Williams, Kevin J.; Bensinger, Steven J.
2017-01-01
Stable isotope labeling has become an important methodology for determining lipid metabolic parameters of normal and neoplastic cells. Conventional methods for fatty acid and cholesterol analysis have one or more issues that limit their utility for in vitro stable isotope-labeling studies. To address this, we developed a method optimized for measuring both fatty acids and cholesterol from small numbers of stable isotope-labeled cultured cells. We demonstrate quantitative derivatization and extraction of fatty acids from a wide range of lipid classes using this approach. Importantly, cholesterol is also recovered, albeit at a modestly lower yield, affording the opportunity to quantitate both cholesterol and fatty acids from the same sample. Although we find that background contamination can interfere with quantitation of certain fatty acids in low amounts of starting material, our data indicate that this optimized method can be used to accurately measure mass isotopomer distributions for cholesterol and many fatty acids isolated from small numbers of cultured cells. Application of this method will facilitate acquisition of lipid parameters required for quantifying flux and provide a better understanding of how lipid metabolism influences cellular function. PMID:27974366
Additive manufacturing of reflective optics: evaluating finishing methods
NASA Astrophysics Data System (ADS)
Leuteritz, G.; Lachmayer, R.
2018-02-01
Individually shaped light distributions are becoming more and more important in lighting technology, and thus the importance of additively manufactured reflectors is increasing significantly. The vast field of applications, ranging from automotive lighting to medical imaging, underscores this trend. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Therefore, post-process treatments of reflectors are necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Starting from already optimized aluminum reflectors manufactured on an SLM machine, the parts are machined in different ways after the SLM process. Selected finishing methods such as laser polishing, sputtering and sand blasting are applied, and their effects are quantified and compared. The post-process procedures are investigated with respect to their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method a demonstrator is created and compared to a fully milled sample and to the other demonstrators. Ultimately, guidelines are developed to determine the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions will be validated with the developed demonstrators.
Design optimization of superconducting coils based on asymmetrical characteristics of REBCO tapes
NASA Astrophysics Data System (ADS)
Hong, Zhiyong; Li, Wenrong; Chen, Yanjun; Gömöry, Fedor; Frolek, Lubomír; Zhang, Min; Sheng, Jie
2018-07-01
The angular dependence Ic(B,θ) of a superconducting tape is a crucial parameter for calculating the influence of magnetic field in the design of superconducting applications. This paper focuses on the asymmetrical characteristics found in REBCO tapes and on further applications based on this phenomenon. It starts with angular dependence measurements of different HTS tapes; asymmetrical characteristics are found in some of the tested samples. On the basis of this property, the optimization of superconducting coils in a superconducting motor, a transformer and an insert magnet is discussed by simulation. Simplified experiments representing the structure of the insert magnet were carried out to validate the numerical studies. The conclusions show that the asymmetrical property of superconducting tape is quite important in the design of superconducting applications, and that an optimized winding technique based on this property can be used to improve the performance of superconducting devices.
NASA Astrophysics Data System (ADS)
Chen, Jing-Bo
2014-06-01
By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that, within a relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments confirm the theoretical analysis.
Olekšáková, Tereza; Žurovcová, Martina; Klimešová, Vanda; Barták, Miroslav; Šuláková, Hana
2018-04-01
Several methods of DNA extraction, coupled with 'DNA barcoding' species identification, were compared using specimens from early developmental stages of forensically important flies of the families Calliphoridae and Sarcophagidae. DNA was extracted from three immature stages - eggs, first instar larvae, and empty pupal cases (puparia) - using four different extraction methods, namely one simple 'homemade' extraction buffer protocol and three commercial kits. The extraction conditions, including the amount of proteinase K and the incubation times, were optimized. The simple extraction buffer method was successful for half of the egg and first instar larva samples. The DNA Lego Kit and DEP-25 DNA Extraction Kit were useful for DNA extraction from first instar larva samples, and the DNA Lego Kit was also successful for eggs. The QIAamp DNA mini kit was the most effective: extraction was successful for all sample types - eggs, larvae, and puparia.
Sample distribution in peak mode isotachophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il
We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments, and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.
Kucherenko, I S; Soldatkin, O O; Lagarde, F; Jaffrezic-Renault, N; Dzyadevych, S V; Soldatkin, A P
2015-11-01
Creatine kinase (CK; adenosine-5'-triphosphate-creatine phosphotransferase) is an important enzyme of muscle cells; the presence of a large amount of the enzyme in blood serum is a biomarker of muscular injuries such as acute myocardial infarction. This work describes a bi-enzyme (glucose oxidase- and hexokinase-based) biosensor for rapid and convenient determination of CK activity, which measures the rate of ATP production by this enzyme and simultaneously determines the glucose concentration in the sample. Platinum disk electrodes were used as amperometric transducers. Glucose oxidase and hexokinase were co-immobilized via cross-linking with BSA by glutaraldehyde and served as the biorecognition element of the biosensor. Biosensor operation at different concentrations of the CK substrates (ADP and creatine phosphate) was investigated; the optimal concentrations were 1 mM ADP and 10 mM creatine phosphate. The within-day reproducibility of the biosensor responses to glucose, ATP and CK was tested (the relative standard deviation of 15 responses was 2% for glucose, 6% for ATP, and 7-18% for CK, depending on CK concentration). The total time of CK analysis was 10 min. Measurements of creatine kinase in blood serum samples were carried out at a 20-fold sample dilution, which was found optimal for CK determination. The biosensor could distinguish healthy from ill people and evaluate the level of CK increase. Thus, the biosensor can be used as a test system for CK analysis in blood serum or serve as a component of multibiosensors for the determination of important blood substances. Determination of the activity of other kinases with the developed biosensor is also possible for research purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
Ng, Nyuk Ting; Sanagi, Mohd Marsin; Wan Ibrahim, Wan Nazihah; Wan Ibrahim, Wan Aini
2017-05-01
Agarose-chitosan-immobilized octadecylsilyl-silica (C18) film micro-solid phase extraction (μSPE) was developed and applied for the determination of phenanthrene (PHE) and pyrene (PYR) in chrysanthemum tea samples using high performance liquid chromatography-ultraviolet detection (HPLC-UV). The film of blended agarose and chitosan allows good dispersion of C18, prevents the leaching of C18 during application and enhances the film's mechanical stability. Important μSPE parameters were optimized, including amount of sorbent loading, extraction time, desorption solvent and desorption time. The matrix-matched calibration curves showed good linearity (r⩾0.994) over a concentration range of 1-500 ppb. Under the optimized conditions, the proposed method showed good limits of detection (0.549-0.673 ppb), good analyte recoveries (100.8-105.99%) and good reproducibility (RSDs⩽13.53%, n=3), with preconcentration factors of 4 and 72 for PHE and PYR, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
A boosted optimal linear learner for retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Poletti, E.; Grisan, E.
2014-03-01
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. In contrast with available methods, we propose a data-driven approach in which the system learns a set of optimal discriminative convolution kernels (linear learners). The set is built progressively based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the changes in vessel appearance at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to classify each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images from the DRIVE dataset, where the segmentation yields an accuracy of 0.94.
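The boosting loop itself is compact enough to sketch. The Python snippet below runs discrete AdaBoost with weighted least-squares linear classifiers as weak learners on synthetic two-class data; the multiscale convolution-kernel features and rotating filter bank of the paper are replaced by plain feature vectors for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for vessel/background pixel features.
n = 1000
X = np.vstack([rng.normal(-0.5, 1, (n // 2, 5)), rng.normal(0.5, 1, (n // 2, 5))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

def fit_linear(X, y, w):
    """Weighted least-squares linear learner; returns coefficients + bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias column
    lhs = Xb.T @ (Xb * w[:, None]) + 1e-8 * np.eye(Xb.shape[1])
    return np.linalg.solve(lhs, Xb.T @ (w * y))

def predict_linear(beta, X):
    return np.sign(np.hstack([X, np.ones((len(X), 1))]) @ beta)

w = np.ones(n) / n
ensemble = []
for t in range(20):                               # AdaBoost rounds
    beta = fit_linear(X, y, w)
    h = predict_linear(beta, X)
    err = w[h != y].sum()
    if err >= 0.5:
        break
    alpha = 0.5 * np.log((1 - err) / (err + 1e-12))
    ensemble.append((alpha, beta))
    w *= np.exp(-alpha * y * h)                   # upweight misclassified samples
    w /= w.sum()

score = sum(a * predict_linear(b, X) for a, b in ensemble)
print("training accuracy:", np.mean(np.sign(score) == y))
```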
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry (PIV) has matured, it has developed into a robust and flexible measurement technique used by expert and non-expert users alike. While historical estimates of PIV accuracy typically relied heavily on rules of thumb and on the analysis of idealized synthetic images, increased emphasis has recently been placed on quantifying real-world PIV measurement uncertainty, and multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often complicate the collection of optimal data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work uses the results of PIV uncertainty quantification techniques to develop a framework in which estimated PIV confidence intervals yield reliable convergence criteria for the sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures that leverage estimated PIV confidence intervals for efficient sampling toward converged statistics are provided.
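One simple way to turn per-vector uncertainty estimates into a convergence criterion is to fold the measurement uncertainty into the standard error of the mean and solve for the sample count that meets a target confidence-interval width. The Python sketch below does exactly that; the tolerance, confidence level, and synthetic data are illustrative assumptions, not the guidelines derived in the paper.

```python
import numpy as np

def n_required(samples, sigma_meas, rel_tol=0.01, z=1.96):
    """Estimate how many statistically independent samples are needed so the
    confidence interval on the mean is within rel_tol of the mean, combining
    flow variability with the per-vector measurement uncertainty."""
    var_total = np.var(samples, ddof=1) + sigma_meas ** 2
    target = rel_tol * abs(np.mean(samples))
    return int(np.ceil(var_total * (z / target) ** 2))

rng = np.random.default_rng(2)
u = rng.normal(10.0, 1.5, 200)   # synthetic velocity samples (m/s)
sigma_uq = 0.3                   # assumed per-sample PIV-UQ uncertainty (m/s)
print("samples needed for +/-1% mean convergence:", n_required(u, sigma_uq))
```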
Parkinson, Don-Roger; Churchill, Tonia J; Rolls, Wyn
2008-11-01
Methyl benzoate, as a biomarker for mold growth, was used as a specific target compound to indicate outgassed MVOC products from mold. Both real and surrogate samples were analyzed from a variety of matrices, including carpet, ceiling tiles, dried paint surfaces, wallboard and wallboard paper. Sampling parameters, including desorption conditions, extraction time, incubation temperature, pH, salt effects and spinning rate, were optimized. The results suggest that extraction and detection of methyl benzoate amongst other MVOCs can be accomplished cleanly by SPME-GC/MS methods. With detection limits (LOD = 1.5 ppb) and linearity (0.999) over a range of 2 ppb to 100 ppm, this work demonstrates that such a green technique can be contemplated for quick assessment, or as part of an ongoing assessment strategy, to detect mold growth in common indoor buildings and materials for both qualitative and quantitative determinations. Importantly, no matrix effects were observed under the optimized extraction conditions.
The importance of hyaluronic acid in vocal fold biomechanics.
Chan, R W; Gray, S D; Titze, I R
2001-06-01
This study examined the influence of hyaluronic acid (HA) on the biomechanical properties of the human vocal fold cover (the superficial layer of the lamina propria). Vocal fold tissues were freshly excised from 5 adult male cadavers and were treated with bovine testicular hyaluronidase to selectively remove HA from the lamina propria extracellular matrix (ECM). Linear viscoelastic shear properties (elastic shear modulus and dynamic viscosity) of the tissue samples before and after enzymatic treatment were quantified as a function of frequency (0.01 to 15 Hz) by a parallel-plate rotational rheometer at 37 degrees C. On removing HA from the vocal fold ECM, the elastic shear modulus (G' ) or stiffness of the vocal fold cover decreased by an average of around 35%, while the dynamic viscosity (eta') increased by 70% at higher frequencies (>1 Hz). The results suggested that HA plays an important role in determining the biomechanical properties of the vocal fold cover. As a highly hydrated glycosaminoglycan in the vocal fold ECM, it likely contributes to the maintenance of an optimal tissue viscosity that may facilitate phonation, and an optimal tissue stiffness that may be important for vocal fundamental frequency control. HA has been proposed as a potential bioimplant for the surgical repair of vocal fold ECM defects (eg, vocal fold scarring and sulcus vocalis). Our results suggested that such clinical use may be potentially optimal for voice production from a biomechanical perspective.
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for the identification of accidental sample mix-up, a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world population, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distance generated by the optimized set of 30 SNPs is larger than 18, and the duality frequency is lower than 1 in 10 thousand. This sample discrimination strategy proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for whole exome sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. A sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
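The discrimination numbers quoted above are easy to sanity-check by simulation. The Python sketch below draws Hardy-Weinberg genotypes for a 30-SNP panel of high-minor-allele-frequency markers and estimates the mean pairwise Hamming distance and the frequency of identical profiles; the allele-frequency range and population size are illustrative, not the paper's optimized panel.

```python
import numpy as np

rng = np.random.default_rng(0)

n_snps, n_ind = 30, 2000
maf = rng.uniform(0.3, 0.5, n_snps)          # informative SNPs have high MAF
# Hardy-Weinberg genotype sampling: 0, 1, or 2 copies of the minor allele.
geno = rng.binomial(2, maf, size=(n_ind, n_snps))

# Mean pairwise Hamming distance over a random subset of pairs.
pairs = rng.choice(n_ind, (20000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
ham = np.count_nonzero(geno[pairs[:, 0]] != geno[pairs[:, 1]], axis=1)
print("mean Hamming distance:", ham.mean())
print("identical-profile frequency:", np.mean(ham == 0))
```

With minor allele frequencies in the 0.3-0.5 range, the per-SNP probability that two random genotypes differ is roughly 0.6, so 30 SNPs give a mean Hamming distance near 18, consistent with the reported value.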
NMR methods for metabolomics of mammalian cell culture bioreactors.
Aranibar, Nelly; Reily, Michael D
2014-01-01
Metabolomics has become an important tool for measuring pools of small molecules in mammalian cell cultures expressing therapeutic proteins. NMR spectroscopy has played an important role, largely because it requires minimal sample preparation, does not require chromatographic separation, and is quantitative. The concentrations of large numbers of small molecules in the extracellular media or within the cells themselves can be measured directly on the culture supernatant and on the supernatant of the lysed cells, respectively, and correlated with endpoints such as titer, cell viability, or glycosylation patterns. The observed changes can be used to generate hypotheses by which these parameters can be optimized. This chapter focuses on the sample preparation, data acquisition, and analysis to get the most out of NMR metabolomics data from CHO cell cultures but could easily be extended to other in vitro culture systems.
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role in both the quality and the power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or a high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices, which result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× compared with data-driven real-valued embedding.
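To make the Boolean-matrix idea concrete, the Python sketch below acquires a sparse signal with a random 0/1 matrix and recovers it with orthogonal matching pursuit; one extra all-ones measurement captures the signal sum so the recovery can run on the mean-centered matrix, which is much better conditioned. The random (rather than data-driven) matrix and the OMP recovery are simplifying assumptions relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 256, 80, 8                  # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

# Boolean {0,1} sampling matrix: the analog front end then needs only adders,
# no multipliers, which is the energy argument for Boolean embeddings.
Phi = (rng.random((m, n)) < 0.5).astype(float)
y = Phi @ x
s = np.ones(n) @ x                    # one extra measurement of sum(x)

def omp(A, y, k):
    """Orthogonal matching pursuit for k-sparse recovery."""
    r, support, coef = y.copy(), [], np.array([])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Mean-center the Boolean matrix using the extra sum measurement; the centered
# matrix behaves like a +/-1/2 Bernoulli ensemble, which greedy recovery likes.
x_hat = omp(Phi - 0.5, y - 0.5 * s, k)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```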
NASA Astrophysics Data System (ADS)
Yanchun, Wan; Qiucen, Chen
2017-11-01
Purchasing is an important part of B2C export e-commerce and plays an important role in risk and cost control in supply management. From the perspective of risk control, this paper constructs a CVaR model for portfolio purchasing. A high-selling mobile power product from a typical B2C export e-commerce retailer is selected as the study sample, and the purchasing strategy for this type of mobile power equipment is optimized. The research provides a reference for similar enterprises making portfolio purchasing decisions.
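Conditional value-at-risk is straightforward to estimate from demand scenarios: it is the average of the worst (1-alpha) share of losses. The newsvendor-style Python sketch below compares a CVaR-optimal order quantity against the mean-optimal one; the prices, the lognormal demand model, and the single-product setting are hypothetical stand-ins for the paper's portfolio model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Choose an order quantity for one product under demand uncertainty,
# minimizing CVaR of the loss instead of the mean loss.
c, p = 6.0, 10.0                        # unit purchase cost and selling price
demand = rng.lognormal(mean=5.0, sigma=0.4, size=10000)

def cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of losses."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def loss(q, d):
    return c * q - p * np.minimum(q, d)  # negative values are profits

grid = np.linspace(50, 400, 351)
cvars = [cvar(loss(q, demand)) for q in grid]
means = [loss(q, demand).mean() for q in grid]
print("CVaR-optimal order:", grid[int(np.argmin(cvars))])
print("mean-optimal order:", grid[int(np.argmin(means))])
```

As expected, the risk-averse CVaR criterion orders less than the mean criterion, trading average profit for protection against low-demand scenarios.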
Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall
NASA Astrophysics Data System (ADS)
Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate
2016-11-01
The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.
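Virtual sampling of a simulated field, as used above, can be reproduced in a few lines: generate a spatially autocorrelated throughfall field, place plots of different extents at random, and compare the spread of the plot-mean estimates around the domain mean. The field parameters and collector counts in this Python sketch are arbitrary illustrations, not the study's measured fields.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic throughfall field (arbitrary units) with spatial autocorrelation
# produced by smoothing white noise; the correlation length is set by sigma.
field = gaussian_filter(rng.normal(size=(1000, 1000)), sigma=25)
field = 100 + 20 * field / field.std()        # rescale to a plausible mean/CV
true_mean = field.mean()

def sample_plot(extent, n_collectors=25, reps=500):
    """Place a square plot of side `extent` at random, sample collectors
    uniformly inside it, and return the spread of the plot-mean estimates."""
    errs = []
    for _ in range(reps):
        r0 = rng.integers(0, 1000 - extent, 2)
        rows = rng.integers(r0[0], r0[0] + extent, n_collectors)
        cols = rng.integers(r0[1], r0[1] + extent, n_collectors)
        errs.append(field[rows, cols].mean() - true_mean)
    return np.std(errs)

for extent in (50, 200, 800):
    print(f"extent {extent}: std of mean estimate = {sample_plot(extent):.2f}")
```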
Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong
2015-01-01
The quality of potatoes is directly related to their edible and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral imaging, semi-transmission hyperspectral imaging yields clearer images that contain information on the internal quality of agricultural products. In this study, 224 potato samples (149 normal and 75 hollow) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1 040 nm) of the potato samples; the average spectra of the regions of interest were then extracted for spectral analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition accuracy on the test set was only 87.5%. In order to simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projections algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition accuracy of 94.64% on the test set was obtained using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and grid search, were used to optimize the SVM model parameters: the penalty parameter c and the kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was found to give the optimal model parameters (c = 10.6591, g = 0.3497), with which a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
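The model-selection step (wavelength subset, then SVM hyperparameters) can be mimicked with standard tooling. In the Python sketch below, a plain grid search stands in for AFSA, and synthetic 8-feature data stand in for the CARS/SPA-selected bands; none of the data or settings come from the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 8 selected wavelength features; class balance
# roughly mirrors the 149 normal / 75 hollow split.
X, y = make_classification(n_samples=224, n_features=8, n_informative=6,
                           weights=[0.67, 0.33], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# Grid search over the penalty parameter C and kernel parameter gamma,
# a simple replacement for the fish-swarm optimizer used in the paper.
param_grid = {"C": np.logspace(-1, 3, 9), "gamma": np.logspace(-3, 1, 9)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_tr, y_tr)
print("best (C, gamma):", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```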
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils, and TDR is one of the standard methods for determining the water content of soil samples. In this study, we present an approach for estimating the water retention parameters of a sample that is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at daily intervals under assumed hydrostatic conditions, and the cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function with the apparent dielectric permittivity is fitted to a TDR wave propagation forward model, and the unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated from the unsaturated parametric function, and the weights of water inside the sample at the first and final boundary heads of the multi-step drainage are fitted with the corresponding weights calculated from the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
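If the unsaturated parametric function is taken to be the van Genuchten retention curve (an assumption; the abstract does not commit to a specific form), a single-objective version of the fit reduces to nonlinear least squares. The Python sketch below fits synthetic water-content observations at a few hydrostatic head steps; the paper's Bayesian scheme and multi-objective weighting are simplified away.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content as a function of capillary head h (cm), assuming the
    van Genuchten form for the unsaturated parametric function."""
    m = 1 - 1 / n
    se = (1 + (alpha * np.abs(h)) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * se

# Synthetic observations: water contents at each boundary-head step
# (values are illustrative, not measured data).
rng = np.random.default_rng(3)
heads = np.array([5, 10, 20, 40, 60, 80, 100, 150])       # cm
theta_obs = van_genuchten(heads, 0.05, 0.38, 0.03, 3.0)
theta_obs += rng.normal(0, 0.005, heads.size)

popt, pcov = curve_fit(van_genuchten, heads, theta_obs,
                       p0=[0.05, 0.4, 0.05, 2.0],
                       bounds=([0, 0.2, 1e-3, 1.1], [0.2, 0.6, 1.0, 8.0]))
print("theta_r, theta_s, alpha, n =", np.round(popt, 3))
```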
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
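The estimation-error phenomenon described above is easy to reproduce: estimate a covariance matrix from a short window, compute minimum-variance weights, and evaluate them against the true covariance. In the Python sketch below (with an arbitrary synthetic covariance), short windows produce "optimized" weights whose true variance can exceed that of the equal-allocation heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative experiment: 8 "assets" (e.g. producers) with a known covariance;
# the optimizer only sees a short estimation window of observations.
n_assets = 8
A = rng.normal(0, 1, (n_assets, n_assets))
true_cov = A @ A.T / n_assets + 0.5 * np.eye(n_assets)

def min_var_weights(cov):
    """Minimum-variance weights: w proportional to inv(Sigma) @ 1."""
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

L = np.linalg.cholesky(true_cov)
w_eq = np.ones(n_assets) / n_assets
print("equal-weight true variance:", round(w_eq @ true_cov @ w_eq, 3))
for window in (12, 60, 5000):
    sample = (L @ rng.normal(size=(n_assets, window))).T
    w_opt = min_var_weights(np.cov(sample.T))
    print(f"window {window}: optimized true variance "
          f"{w_opt @ true_cov @ w_opt:.3f}")
```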
Evaluation of PCR methods for detection of Brucella strains from culture and tissues.
Çiftci, Alper; İça, Tuba; Savaşan, Serap; Sareyyüpoğlu, Barış; Akan, Mehmet; Diker, Kadir Serdar
2017-04-01
The genus Brucella causes significant economic losses due to infertility, abortion, stillbirth or weak calves, and neonatal mortality in livestock, and brucellosis is still a zoonosis of worldwide public health importance. This study aimed to optimize and evaluate PCR assays used for the diagnosis of Brucella infections. To this end, several primers and PCR protocols were tested and compared on Brucella cultures and on biological material inoculated with Brucella. In the PCR assays, genus- or species-specific oligonucleotide primers derived from 16S rRNA sequences (F4/R2, Ba148/928, IS711, BruP6-P7) and OMPs (JPF/JPR, 31ter/sd) of Brucella were used. All primers except BruP6-P7 detected DNA from reference Brucella strains and field isolates. In spiked blood, milk, and semen samples, F4/R2 primer-based PCR assays detected minimal numbers of Brucella; in spiked serum and fetal stomach contents, Ba148/928 primer-based PCR assays did so. Field samples collected from sheep and cattle were examined by bacteriological methods and by the optimized PCR assays. Overall, the sensitivity of the PCR assays was superior to that of conventional bacteriological isolation: Brucella DNA was detected in 35.1, 1.1, 24.8, 5.0, and 8.0% of aborted fetus, blood, milk, semen, and serum samples, respectively. In conclusion, PCR assays under optimized conditions were found valuable for the sensitive and specific detection of Brucella infections in animals.
Su, Zi Dan; Shi, Cheng Yin; Huang, Jie; Shen, Gui Ming; Li, Jin; Wang, Sheng Qiang; Fan, Chao
2015-09-26
Red-spotted grouper nervous necrosis virus (RGNNV) is an important pathogen that causes disease in many species of fish in marine aquaculture. Larvae and juveniles are most easily infected by RGNNV, and the cumulative mortality after infection can reach 100%, so this virus poses a serious threat to the aquaculture of grouper fry. This study aimed to establish a simple, accurate and highly sensitive method for rapid on-site detection of RGNNV. Primers specifically targeting RGNNV were designed and a cross-priming isothermal amplification (CPA) system was established; the CPA product was visualized with a lateral flow dipstick (LFD). Three important parameters of the CPA system, the amplification temperature, the concentration of dNTPs and the concentration of Mg(2+), were optimized, and the sensitivity and specificity of the method were tested and compared with those of conventional RT-PCR and real-time quantitative RT-PCR (RT-qPCR). The optimized conditions were an amplification temperature of 69 °C, a dNTP concentration of 1.2 mmol/L and an Mg(2+) concentration of 5 mmol/L. The lowest limit of detection (LLOD) of the method for RGNNV was 10¹ copies/μL of RNA sample, 10 times lower than that of conventional RT-PCR and comparable to that of RT-qPCR. The method was specific for RGNNV in combination with SJNNV, showed no cross-reactions with the 8 viral and bacterial strains tested, and was successfully applied to detect RGNNV in fish samples. In summary, this study established a CPA-LFD method for the detection of RGNNV that is simple and rapid, with high sensitivity and good specificity, and that can be widely applied for rapid on-site detection of this virus.
NASA Astrophysics Data System (ADS)
Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.
2016-12-01
Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.
NASA Astrophysics Data System (ADS)
Brown, G. J.; Haugan, H. J.; Mahalingam, K.; Grazulis, L.; Elhamri, S.
2015-01-01
The objective of this work is to establish molecular beam epitaxy (MBE) growth processes that can produce high quality InAs/GaInSb superlattice (SL) materials specifically tailored for very long wavelength infrared (VLWIR) detection. To accomplish this goal, several series of MBE growth optimization studies, using a SL structure of 47.0 Å InAs/21.5 Å Ga0.75In0.25Sb, were performed to refine the MBE growth process and optimize growth parameters. Experimental results demonstrated that our "slow" MBE growth process can consistently produce an energy gap near 50 meV. This is an important factor in narrow band gap SLs. However, there are other growth factors that also impact the electrical and optical properties of the SL materials. The SL layers are particularly sensitive to the anion incorporation condition formed during the surface reconstruction process. Since antisite defects are potentially responsible for the inherent residual carrier concentrations and short carrier lifetimes, the optimization of anion incorporation conditions, by manipulating anion fluxes, anion species, and deposition temperature, was systematically studied. Optimization results are reported in the context of comparative studies on the influence of the growth temperature on the crystal structural quality and surface roughness performed under a designed set of deposition conditions. The optimized SL samples produced an overall strong photoresponse signal with a relatively sharp band edge that is essential for developing VLWIR detectors. A quantitative analysis of the lattice strain, performed at the atomic scale by aberration corrected transmission electron microscopy, provided valuable information about the strain distribution at the GaInSb-on-InAs interface and in the InAs layers, which was important for optimizing the anion conditions.
de Macedo, Cristiana Santos; Anderson, David M; Schey, Kevin L
2017-11-01
MALDI (matrix assisted laser desorption ionization) Imaging Mass Spectrometry (IMS) allows molecular analysis of biological materials making possible the identification and localization of molecules in tissues, and has been applied to address many questions on skin pathophysiology, as well as on studies about drug absorption and metabolism. Sample preparation for MALDI IMS is the most important part of the workflow, comprising specimen collection and preservation, tissue embedding, cryosectioning, washing, and matrix application. These steps must be carefully optimized for specific analytes of interest (lipids, proteins, drugs, etc.), representing a challenge for skin analysis. In this review, critical parameters for MALDI IMS sample preparation of skin samples will be described. In addition, specific applications of MALDI IMS of skin samples will be presented including wound healing, neoplasia, and infection. Copyright © 2017 Elsevier B.V. All rights reserved.
Tan, Zhijing; Yin, Haidi; Nie, Song; Lin, Zhenxin; Zhu, Jianhui; Ruffin, Mack T; Anderson, Michelle A; Simeone, Diane M; Lubman, David M
2015-04-03
Glycosylation has significant effects on protein function and cell metastasis, which are important in cancer progression. It is of great interest to identify site-specific glycosylation in search of potential cancer biomarkers. However, the abundance of glycopeptides is low compared to that of nonglycopeptides after trypsin digestion of serum samples, and the mass spectrometric signals of glycopeptides are often masked by coeluting nonglycopeptides due to low ionization efficiency. Selective enrichment of glycopeptides from complex serum samples is essential for mass spectrometry (MS)-based analysis. Herein, a strategy has been optimized using LCA enrichment to improve the identification of core-fucosylation (CF) sites in serum of pancreatic cancer patients. The optimized strategy was then applied to analyze CF glycopeptide sites in 13 sets of serum samples from pancreatic cancer, chronic pancreatitis, healthy controls, and a standard reference. In total, 630 core-fucosylation sites were identified from 322 CF proteins in pancreatic cancer patient serum using an Orbitrap Elite mass spectrometer. Further data analysis revealed that 8 CF peptides exhibited a significant difference between pancreatic cancer and other controls, which may be potential diagnostic biomarkers for pancreatic cancer.
Jalbani, N; Soylak, M
2014-04-01
In the present study, a microextraction technique combining Fe3O4 nanoparticles with surfactant-mediated solid phase extraction (SM-SPE) was developed for the preconcentration/separation of Cd(II) and Pb(II) in water and soil samples. The analytes were determined by flame atomic absorption spectrometry (FAAS). The effective variables, namely the amount of adsorbent (NPs), the pH, the concentration of the non-ionic surfactant (TX-114), and the centrifugation time (min), were screened by a Plackett-Burman design (PBD). The important variables were further optimized by a central composite design (CCD). Under the optimized conditions, the detection limits (LODs) of Cd(II) and Pb(II) were 0.15 and 0.74 µg/L, respectively. The validity of the proposed procedure was checked by the analysis of the certified reference materials TMDA 53.3 fortified water and GBW07425 soil. The method was successfully applied to the determination of Cd(II) and Pb(II) in water and soil samples. Copyright © 2014 Elsevier Inc. All rights reserved.
Chromatographic analysis of tryptophan metabolites
Sadok, Ilona; Gamian, Andrzej
2017-01-01
The kynurenine pathway generates multiple tryptophan metabolites, collectively called kynurenines, and leads to formation of the enzyme cofactor nicotinamide adenine dinucleotide. The first step in this pathway is tryptophan degradation, initiated by the rate-limiting enzymes indoleamine 2,3-dioxygenase or tryptophan 2,3-dioxygenase, depending on the tissue. Balanced kynurenine metabolism, a subject of multiple studies in recent decades, plays an important role in several physiological and pathological conditions such as infections, autoimmunity, neurological disorders, cancer, cataracts, as well as pregnancy. Understanding the regulation of tryptophan depletion provides novel diagnostic and treatment opportunities; however, it requires reliable methods for quantification of kynurenines in biological samples with complex composition (body fluids, tissues, or cells). Trace concentrations, interference from sample components, and the instability of some tryptophan metabolites must all be addressed by the analytical methods. Novel separation approaches and optimized extraction protocols help to overcome difficulties in analyzing kynurenines within complex tissue material. Recent developments in chromatography coupled with mass spectrometry provide new opportunities for quantification of tryptophan and its degradation products in various biological samples. In this review, we present current accomplishments in the chromatographic methodologies proposed for detection of tryptophan metabolites and provide a guide for choosing the optimal approach. PMID:28590049
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
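For readers who want to experiment with these criteria, the sketch below computes D-, E-, and SE-optimality values from a Fisher information matrix for a toy exponential-growth model. The model, parameter values, and the exact normalization used for the SE criterion are illustrative assumptions, not taken from Banks et al.

```python
import numpy as np

def design_criteria(F, theta):
    """Evaluate D-, E-, and SE-optimality criteria for a Fisher
    information matrix F at nominal parameters theta.
    Smaller is better for each criterion as written here."""
    cov = np.linalg.inv(F)                    # asymptotic covariance of estimates
    d_crit = np.linalg.det(cov)               # D-optimal: minimize generalized variance
    e_crit = np.max(np.linalg.eigvalsh(cov))  # E-optimal: minimize worst-direction variance
    se_crit = np.sum(np.diag(cov) / theta**2) # SE-optimal: minimize summed squared
                                              # relative standard errors (hedged form)
    return d_crit, e_crit, se_crit

def fisher_information(times, theta, sens, sigma=1.0):
    """Build F = (1/sigma^2) * sum_t S(t) S(t)^T from the parameter
    sensitivities S(t) = d y(t)/d theta of the model output."""
    F = np.zeros((len(theta), len(theta)))
    for t in times:
        s = sens(t, theta).reshape(-1, 1)
        F += (s @ s.T) / sigma**2
    return F

# Toy example: exponential growth y = theta0 * exp(theta1 * t)
theta = np.array([5.0, 0.3])
sens = lambda t, th: np.array([np.exp(th[1] * t), th[0] * t * np.exp(th[1] * t)])
for times in (np.linspace(0, 5, 6), np.linspace(0, 10, 6)):
    F = fisher_information(times, theta, sens)
    print(times[-1], design_criteria(F, theta))
```

Comparing the printed values for the two candidate time grids mirrors how any of the three criteria can rank sampling distributions.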
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael C.
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., two to four) used in optimal seasons were not present. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models, a chance-constrained ground-water management model and an integer-programming sampling network design model, to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information, i.e., the projected reduction in management costs, with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
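The four-step loop lends itself to a compact sketch. In the snippet below, the chance-constrained management model and the network design model are replaced by hypothetical stand-in functions (management_cost, projected_uncertainty); only the structure of the data-worth calculation follows the text.

```python
def management_cost(uncertainty):
    # Stand-in for the chance-constrained management model: higher
    # uncertainty forces more conservative (costlier) pumping.
    return 100.0 * (1.0 + uncertainty)

def projected_uncertainty(budget, base=1.0):
    # Stand-in for the network-design model: uncertainty expected to
    # remain after spending `budget` on sampling (diminishing returns).
    return base / (1.0 + 0.05 * budget)

def evaluate_data_worth(budgets, base_uncertainty=1.0):
    baseline = management_cost(base_uncertainty)             # step 1
    alternatives = []
    for b in budgets:
        u_post = projected_uncertainty(b, base_uncertainty)  # step 2
        cost_post = management_cost(u_post)                  # step 3
        net_benefit = (baseline - cost_post) - b             # step 4
        alternatives.append((b, net_benefit))
    return max(alternatives, key=lambda a: a[1])  # best budget, net benefit

print(evaluate_data_worth([0, 10, 20, 40, 80]))
```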
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2017-01-01
Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. We compared a strategy reflecting current screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant to screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant to screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and the optimal self-sampling strategy depend on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 American Association for Cancer Research.
Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
BILISOLY, ROGER L.; MCKENNA, SEAN A.
2003-01-01
Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
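A minimal sketch of the MPEV stopping rule follows, assuming a simple-kriging variance with a Gaussian covariance model; the covariance parameters, reference grid, and transect layout are illustrative, and the eigenvalue-based design criteria of the report are not reproduced here.

```python
import numpy as np

def sk_variance(grid, samples, sill=1.0, rng_len=20.0):
    """Simple-kriging prediction error variance at each grid node,
    with an assumed Gaussian covariance model."""
    cov = lambda h: sill * np.exp(-(h / rng_len) ** 2)
    d_ss = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    K = cov(d_ss) + 1e-9 * np.eye(len(samples))     # jitter for stability
    d_gs = np.linalg.norm(grid[:, None] - samples[None, :], axis=-1)
    k = cov(d_gs)
    # var(x0) = sill - k^T K^{-1} k, node by node
    return sill - np.einsum('ij,ij->i', k @ np.linalg.inv(K), k)

def stop_sampling(grid, rounds, tol=0.10):
    """Stopping rule: quit when a new round of transect points cuts the
    mean prediction error variance (MPEV) by less than `tol` (fractional)."""
    pts = rounds[0]
    mpev = sk_variance(grid, pts).mean()
    for extra in rounds[1:]:
        pts = np.vstack([pts, extra])
        new = sk_variance(grid, pts).mean()
        if (mpev - new) / mpev < tol:
            return pts, new
        mpev = new
    return pts, mpev

# Three straight parallel transects treated as sequential point samples
grid = np.stack(np.meshgrid(np.arange(0, 100, 5.0),
                            np.arange(0, 100, 5.0)), -1).reshape(-1, 2)
rounds = [np.column_stack([np.full(20, x), np.linspace(0, 100, 20)])
          for x in (25.0, 50.0, 75.0)]
print(stop_sampling(grid, rounds)[1])
```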
Jiang, Jun; Feng, Liang; Li, Jie; Sun, E; Ding, Shu-Min; Jia, Xiao-Bin
2014-04-10
Suet oil (SO) has been used commonly for food and medicine preparation. The determination of its elemental composition has become an important challenge for human safety and health owing to its possible content of heavy metals or other elements. In this study, ultrawave single reaction chamber microwave digestion (Ultrawave) and inductively coupled plasma-mass spectrometry (ICP-MS) analysis were performed to determine 14 elements (Pb, As, Hg, Cd, Fe, Cu, Mn, Ti, Ni, V, Sr, Na, K, and Ca) in SO samples. Furthermore, the multielemental content of 18 SO samples representing three different sources in China (Qinghai, Anhui, and Jiangsu) was evaluated and compared. The optimal ultrawave digestion conditions, namely the optimal time (35 min), temperature (210 °C), and pressure (90 bar), were screened by a Box-Behnken design (BBD). The 18 samples were successfully classified into three groups by principal component analysis (PCA) according to the contents of the 14 elements. The results showed that all SO samples were rich in elements, but with significant differences corresponding to the different origins. The outliers and the majority of SO samples could be discriminated by PCA according to the multielemental content profile. The results highlighted that the element distribution was associated with the origin of the SO samples. The proposed ultrawave digestion system was efficient and convenient, which can be mainly attributed to its high pressure and high sample throughput in the digestion procedure. The established method could be useful for the quality control and standardization of elements in SO samples and products.
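The PCA grouping step can be illustrated in a few lines. The sketch below autoscales a simulated 18 x 14 element matrix (three clusters standing in for the three origins; the values are not the study's data) and projects it onto the first two principal components via SVD.

```python
import numpy as np

rng = np.random.default_rng(7)
centers = rng.normal(size=(3, 14)) * 2          # three "origins" (simulated)
X = np.vstack([c + rng.normal(scale=0.3, size=(6, 14)) for c in centers])

Xc = (X - X.mean(0)) / X.std(0)                 # autoscale each element
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                       # PC1/PC2 scores per sample
explained = s[:2] ** 2 / (s ** 2).sum()
print(explained)                                # variance captured by PC1, PC2
print(scores.round(2))                          # samples cluster by origin
```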
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. The proposed technique is found to give the most accurate results, and its computational time is short. This simple method is thus feasible for addressing practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
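The interpolation recipe, a log-log transform followed by a monotonicity-preserving spline, can be sketched as below. SciPy does not ship a Steffen spline, so PCHIP is used here as a stand-in with the same no-overshoot property; the sampled ELF values are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def fit_elf(energy, elf):
    """Interpolate an energy-loss function after log-log scaling.
    PCHIP is a monotonicity-preserving spline used here in place of
    the Steffen spline; both avoid spurious oscillations."""
    lx, ly = np.log(energy), np.log(elf)
    spline = PchipInterpolator(lx, ly)
    # Return a callable in the original (linear) coordinates.
    return lambda e: np.exp(spline(np.log(e)))

# Heterogeneous, non-uniformly spaced sampled data (illustrative values)
e = np.array([0.5, 1.0, 3.0, 10.0, 40.0, 200.0, 1000.0])
f = np.array([0.02, 0.15, 0.90, 0.40, 0.08, 0.01, 0.002])
elf = fit_elf(e, f)
print(elf(np.array([2.0, 25.0, 500.0])))
```

The log-log transform compresses the wide dynamic range in both energy and ELF magnitude, which is what makes a local monotone spline behave well on such data.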
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
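A stripped-down sketch of one importance-sampled step in the weighted-SSA style that ABSIS builds on is shown below; the adaptive look-ahead machinery that sets the biases in ABSIS is not reproduced, and the birth-death rates and bias factors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_ssa_step(x, propensities, stoich, gamma, w):
    """One importance-sampled SSA step: reactions are picked from biased
    ("predilection") propensities gamma_j * a_j and the likelihood-ratio
    weight w is corrected.  (ABSIS additionally tunes gamma_j adaptively
    from short look-ahead paths; that machinery is omitted here.)"""
    a = propensities(x)
    a0 = a.sum()
    b = gamma * a                          # biased propensities
    j = rng.choice(len(a), p=b / b.sum())  # biased reaction selection
    tau = rng.exponential(1.0 / a0)        # time advance uses the true rates
    w *= (a[j] / a0) / (b[j] / b.sum())    # importance weight correction
    return x + stoich[j], tau, w

# Toy birth-death process: 0 -> X (rate k1), X -> 0 (rate k2 * x)
k1, k2 = 1.0, 0.1
prop = lambda x: np.array([k1, k2 * x[0]])
stoich = np.array([[1], [-1]])
gamma = np.array([2.0, 0.5])               # bias births up, deaths down
x, t, w = np.array([5]), 0.0, 1.0
for _ in range(100):
    x, tau, w = weighted_ssa_step(x, prop, stoich, gamma, w)
    t += tau
print(x, t, w)
```

Averaging the indicator of a rare event multiplied by the final weight w over many such trajectories gives an unbiased probability estimate, which is the quantity whose variance ABSIS reduces by choosing the biases adaptively.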
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a user's manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108, and CDC 6600 computers.
Bautista, Gabriela; Mátyás, Bence; Carpio, Isabel; Vilches, Richard; Pazmino, Karina
2017-01-01
The number of studies investigating the effect of bio-fertilizers is increasing because of their importance in sustainable agriculture and environmental quality. In our experiments, we measured the effect of different fertilizers on soil respiration. In the present study, we were looking for the cause of unexpected changes in CO2 values while examining Chernozem soil samples. We concluded that CO2-oxidizing microbes or methanotrophs may be present in the soil that periodically consume CO2. This is unusual for a sample taken from the upper layer of well-ventilated Chernozem soil with optimal moisture content.
Optimization of Immobilization of Nanodiamonds on Graphene
NASA Astrophysics Data System (ADS)
Pille, A.; Lange, S.; Utt, K.; Eltermann, M.
2015-04-01
We report using a simple dip-coating method to cover the surface of graphene with nanodiamonds for future optical detection of defects on graphene. The most important part of the immobilization process is the pre-functionalization of both the nanodiamond and graphene surfaces to obtain the selectiveness of the method. This work focuses on an example of using electrostatic attraction to confine nanodiamonds to graphene. Raman spectroscopy, microluminescence imaging, and scanning electron microscopy were applied to characterize the obtained samples.
Optimal Design for Informative Protocols in Xenograft Tumor Growth Inhibition Experiments in Mice.
Lestini, Giulia; Mentré, France; Magni, Paolo
2016-09-01
Tumor growth inhibition (TGI) models are increasingly used during preclinical drug development in oncology for the in vivo evaluation of antitumor effect. Tumor sizes are measured in xenografted mice, often only during and shortly after treatment, thus preventing correct identification of some TGI model parameters. Our aims were (i) to evaluate the importance of including measurements during tumor regrowth and (ii) to investigate the proportions of mice included in each arm. For these purposes, optimal design theory based on the Fisher information matrix implemented in PFIM4.0 was applied. Published xenograft experiments, involving different drugs, schedules, and cell lines, were used to help optimize experimental settings and parameters using the Simeoni TGI model. For each experiment, a two-arm design, i.e., control versus treatment, was optimized with or without the constraint of not sampling during tumor regrowth, i.e., "short" and "long" studies, respectively. In long studies, measurements could be taken up to 6 g of tumor weight, whereas in short studies the experiment was stopped 3 days after the end of treatment. Predicted relative standard errors were smaller in long studies than in corresponding short studies. Some optimal measurement times were located in the regrowth phase, highlighting the importance of continuing the experiment after the end of treatment. In the four-arm designs, the results showed that the proportions of control and treated mice can differ. To conclude, making measurements during tumor regrowth should become a general rule for informative preclinical studies in oncology, especially when a delayed drug effect is suspected.
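The design comparison can be mimicked with a deliberately simplified model. The sketch below uses a stylized log-linear tumor model (not the Simeoni model) to show how predicted relative standard errors shrink when sampling times extend into the regrowth phase; all numbers are illustrative.

```python
import numpy as np

def rel_se(times, T=14.0, theta=(1.6, 0.12, 0.25), sigma=0.1):
    """Predicted relative standard errors from the FIM of a stylized
    log-linear tumor model: log w(t) = theta0 + theta1*t - theta2*min(t, T),
    with treatment ending at time T.  A stand-in used only to illustrate
    the short- vs long-study design comparison."""
    times = np.asarray(times)
    S = np.column_stack([np.ones_like(times), times, -np.minimum(times, T)])
    F = S.T @ S / sigma**2                   # Fisher information matrix
    se = np.sqrt(np.diag(np.linalg.inv(F)))  # asymptotic standard errors
    return se / np.abs(theta)

short = [0, 3, 7, 10, 14, 17.0]   # stops 3 days after end of treatment
long_ = [0, 3, 7, 14, 21, 35.0]   # includes the regrowth phase
print(rel_se(short))  # larger RSEs: growth and drug effect nearly confounded
print(rel_se(long_))  # regrowth samples separate growth rate from drug effect
```

The intuition matches the abstract: while treatment is ongoing, the growth and drug-effect sensitivities are almost collinear, so only post-treatment (regrowth) samples disentangle them.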
Zaari, Ryan R; Brown, Alex
2011-07-28
The importance of the ro-vibrational state energies on the ability to produce high fidelity binary shaped laser pulses for quantum logic gates is investigated. The single frequency 2-qubit ACNOT(1) and double frequency 2-qubit NOT(2) quantum gates are used as test cases to examine this behaviour. A range of diatomics is sampled. The laser pulses are optimized using a genetic algorithm for binary (two amplitude and two phase parameter) variation on a discretized frequency spectrum. The resulting trends in the fidelities were attributed to the intrinsic molecular properties and not the choice of method: a discretized frequency spectrum with genetic algorithm optimization. This is verified by using other common laser pulse optimization methods (including iterative optimal control theory), which result in the same qualitative trends in fidelity. The results differ from other studies that used vibrational state energies only. Moreover, appropriate choice of diatomic (relative ro-vibrational state arrangement) is critical for producing high fidelity optimized quantum logic gates. It is also suggested that global phase alignment imposes a significant restriction on obtaining high fidelity regions within the parameter search space. Overall, this indicates a complexity in the ability to provide appropriate binary laser pulse control of diatomics for molecular quantum computing. © 2011 American Institute of Physics
Fang, Wenjie; Zhang, Yanting; Mei, Jiaojiao; Chai, Xiaohui; Fan, Xiuzhen
2018-06-01
To address the problem of career abandonment among nursing undergraduates, it is important to understand their motivation to choose nursing as a career and its associated personal and situational factors. This study examined the relationships between optimism, educational environment, career adaptability, and career motivation in nursing undergraduates using the career construction model of adaptation. The study adopted a cross-sectional design. A convenience sample of 1060 nursing undergraduates from three universities completed questionnaires measuring optimism, educational environment, career adaptability, and career motivation. Confirmatory factor analyses, descriptive analyses, comparison analyses, correlation analyses, and mediation analyses were performed accordingly. Nursing undergraduates' career motivation was positively correlated with their career adaptability (r = 0.41, P < 0.01), the educational environment (r = 0.60, P < 0.01), and optimism (r = 0.26, P < 0.01). In addition, the effects of optimism and educational environment on career motivation were partially mediated by career adaptability. In nursing undergraduates, the educational environment had a relatively strong positive association with career motivation, while optimism had a weak one. Career adaptability played a mediating role in these relationships. Targeted interventions may improve nursing undergraduates' career motivation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Valadez-Bustos, Ma Guadalupe; Aguado-Santacruz, Gerardo Armando; Tiessen-Favier, Axel; Robledo-Paz, Alejandrina; Muñoz-Orozco, Abel; Rascón-Cruz, Quintin; Santacruz-Varela, Amalio
2016-04-01
Glycine betaine is a quaternary ammonium compound that accumulates in a large variety of species in response to different types of stress. Glycine betaine counteracts adverse effects caused by abiotic factors, preventing the denaturation and inactivation of proteins. Thus, its determination is important, particularly for scientists focused on relating structural, biochemical, physiological, and/or molecular responses to plant water status. In the current work, we optimized the periodide technique for the determination of glycine betaine levels. This modification permitted large numbers of samples taken from a chlorophyllic cell line of the grass Bouteloua gracilis to be analyzed. Growth kinetics were assessed using the chlorophyllic suspension to determine glycine betaine levels in control (no stress) cells and cells osmotically stressed with 14 or 21% polyethylene glycol 8000. After glycine betaine extraction, different wavelengths and reading times were evaluated in a spectrophotometer to determine the optimal quantification conditions for this osmolyte. Optimal results were obtained when readings were taken at a wavelength of 290 nm, 48 h after dissolving the glycine betaine crystals in dichloroethane. We expect this modification to provide a simple, rapid, reliable, and cheap method for glycine betaine determination in plant samples and cell suspension cultures. Copyright © 2016 Elsevier Inc. All rights reserved.
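Quantification against a standard curve is the final step of such an assay. The sketch below fits a linear calibration of absorbance at 290 nm versus concentration and inverts it for unknowns; the standard concentrations and absorbances are invented for illustration.

```python
import numpy as np

# Illustrative calibration standards for the periodide assay: absorbance
# at 290 nm read 48 h after dissolving the crystals in dichloroethane.
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # µg/mL (assumed)
std_abs  = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # A290 (assumed)

slope, intercept = np.polyfit(std_conc, std_abs, 1)    # linear (Beer-Lambert) fit

def gb_concentration(a290):
    """Invert the calibration line to estimate glycine betaine concentration."""
    return (a290 - intercept) / slope

print(gb_concentration(np.array([0.15, 0.33])))
```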
Structural degradation of Thar lignite using MW1 fungal isolate: optimization studies
Haider, Rizwan; Ghauri, Muhammad A.; Jones, Elizabeth J.; Orem, William H.; SanFilipo, John R.
2015-01-01
Biological degradation of low-rank coals, particularly degradation mediated by fungi, can play an important role in helping us to utilize neglected lignite resources for both fuel and non-fuel applications. Fungal degradation of low-rank coals has already been investigated for the extraction of soil-conditioning agents and of substrates that could be subjected to subsequent processing for the generation of alternative fuel options such as methane. However, to achieve an efficient degradation process, the fungal isolates must originate from an appropriate coal environment and the degradation process must be optimized. With this in mind, a representative sample from the Thar coalfield (the largest lignite resource of Pakistan) was treated with a fungal strain, MW1, which was previously isolated from a drilled core coal sample. The treatment liberated organic fractions from the structural matrix of the coal. Fungal degradation was optimized and showed significant release of organics at a 0.1% glucose concentration and a 1% coal loading ratio after an incubation time of 7 days. Analytical investigations revealed the release of complex organic moieties, pertaining to polyaromatic hydrocarbons, and also helped in predicting the structural units present within the coal. Such isolates, with enhanced degradation capabilities, can help in exploiting coal's potential as a chemical feedstock.
Tabani, Hadi; Fakhari, Ali Reza; Shahsavani, Abolfath; Gharari Alibabaou, Hossein
2014-05-01
In this study, electromembrane extraction (EME) combined with cyclodextrin (CD)-modified capillary electrophoresis (CE) was applied for the extraction, separation, and quantification of propranolol (PRO) enantiomers from biological samples. The PRO enantiomers were extracted from aqueous donor solutions, through a supported liquid membrane (SLM) consisting of 2-nitrophenyl octyl ether (NPOE) impregnated on the wall of the hollow fiber, into a 20-μL acidic aqueous acceptor solution inside the lumen of the hollow fiber. Important parameters affecting EME efficiency, such as the extraction voltage, extraction time, and the pH of the donor and acceptor solutions, were optimized using a Box-Behnken design (BBD). Under these optimized conditions, the acceptor solution was analyzed using an optimized CD-modified CE method. Several types of CD were evaluated, and the best results were obtained using a fused-silica capillary with ammonium acetate (80 mM, pH 2.5) containing 8 mM hydroxypropyl-β-CD as a chiral selector, an applied voltage of 18 kV, and a temperature of 20°C. The relative recoveries were in the range of 78-95%. Finally, the performance of the method was evaluated for the extraction and determination of PRO enantiomers in real biological samples. © 2014 Wiley Periodicals, Inc.
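A coded Box-Behnken design such as the one used here is easy to generate programmatically. The sketch below builds the generic design matrix (each factor pair at ±1 with the remaining factors at 0, plus centre runs) for four factors, matching the four EME parameters in count; the construction is the generic textbook one, not the authors' specific run order.

```python
from itertools import combinations, product
import numpy as np

def box_behnken(k, center_pts=3):
    """Generate a coded Box-Behnken design for k factors: all +/-1
    combinations on each factor pair with the rest held at 0, plus
    `center_pts` centre runs."""
    rows = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            rows.append(row)
    rows += [[0] * k] * center_pts
    return np.array(rows)

# Four factors, e.g. voltage, time, donor pH, acceptor pH (coded levels)
design = box_behnken(4)
print(design.shape)   # (27, 4): 24 edge runs + 3 centre runs
```

Each coded level (-1, 0, +1) is then mapped to the physical low/mid/high setting of the corresponding factor before running the experiments and fitting the response surface.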
A review on existing OSSEs and their implications on European marine observation requirements
NASA Astrophysics Data System (ADS)
She, Jun
2017-04-01
Marine observations are essential for understanding marine processes and improving forecast quality, but they are also expensive. It has always been an important issue to optimize the sampling schemes of marine observational networks so that the value of marine observations can be maximized and the cost lowered. The Observing System Simulation Experiment (OSSE) is an efficient tool for assessing the impacts of proposed future sampling schemes on reconstructing and forecasting ocean and ecosystem conditions. In this study, existing OSSE research results from EU projects (such as JERICO, OPEC, SANGOMA, E-AIMS and AtlantOS), institutional studies, and review papers are collected and analyzed according to region (Arctic, Baltic, N. Atlantic, Mediterranean Sea and Black Sea) and instrument/variable. The preliminary results show significant gaps in OSSE coverage across regions and instruments. Among the existing OSSEs, Argo (Bio-Argo and Deep Sea Argo), gliders, and ferrybox are the most often investigated instruments. Although many of the OSSEs are dedicated to very specific monitoring strategies and are not sufficiently comprehensive for making solid recommendations for optimizing the existing networks, the detailed findings for future marine observation requirements from the OSSEs will be summarized in the presentation. Recommendations for systematic OSSEs for optimizing European marine observation networks are also given.
Estimating leaf nitrogen accumulation in maize based on canopy hyperspectrum data
NASA Astrophysics Data System (ADS)
Gu, Xiaohe; Wang, Lizhi; Song, Xiaoyu; Xu, Xingang
2016-10-01
Leaf nitrogen accumulation (LNA) has an important influence on the formation of crop yield and grain protein. Quantitative, real-time monitoring of canopy LNA helps assess crop nutrition status, diagnose group growth, and manage fertilization precisely. This study aimed to develop a universal method for monitoring LNA of maize from hyperspectral data, which could provide mechanistic support for mapping maize LNA at the county scale. The correlations between LNA and hyperspectral reflectance and its mathematical transformations were analyzed. The feature bands and transformations were then screened to develop the optimal LNA estimation model based on multiple linear regression. In-situ samples were used to evaluate the accuracy of the model. Results showed that the model based on the first-derivative logarithmic transformation (lgP') of reflectance reached the highest correlation coefficient (0.889) with the lowest RMSE (0.646 g·m⁻²) and was therefore taken as the optimal model for estimating LNA in maize. The determination coefficient (R2) for the testing samples was 0.831, with an RMSE of 1.901 g·m⁻². This indicates that the first-derivative logarithmic transformation of the hyperspectral reflectance responds well to maize LNA. Based on this transformation, the optimal estimation model of LNA achieved good accuracy with high stability.
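The modeling chain, a derivative of the log reflectance, correlation-based band screening, then multiple linear regression, can be sketched as follows; the spectra and LNA values are simulated placeholders, so the printed numbers carry no meaning beyond illustrating the workflow.

```python
import numpy as np

rng = np.random.default_rng(5)
wl = np.arange(400, 1000, 5.0)             # wavelengths, nm (assumed range)
R = 0.2 + 0.5 * rng.random((60, wl.size))  # 60 simulated canopy spectra
lna = rng.uniform(2, 12, 60)               # simulated LNA, g per m^2

lgP = np.log10(R)
dlgP = np.gradient(lgP, wl, axis=1)        # first derivative of lg P (lgP')

# Screen feature bands by correlation with LNA, keep the 4 strongest
r = np.array([np.corrcoef(dlgP[:, i], lna)[0, 1] for i in range(wl.size)])
bands = np.argsort(np.abs(r))[-4:]

# Multiple linear regression on the selected bands
A = np.column_stack([dlgP[:, bands], np.ones(len(lna))])
coef, *_ = np.linalg.lstsq(A, lna, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - lna) ** 2))
print(wl[bands], rmse)
```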
Zhang, Xinyu; Zhao, Liang; Wang, Yexin; Xu, Yunping; Zhou, Liping
2013-07-01
Preparative capillary GC (PCGC) is a powerful tool for the separation and purification of compounds from any complex matrix and can be used for compound-specific radiocarbon analysis. However, the effect of PCGC parameters on the trapping efficiency is not well understood. Here, we present a comprehensive study on the optimization of these parameters based on 11 reference compounds with different physicochemical properties. Under the optimum conditions, the trapping efficiencies of the 11 compounds (including high-boiling-point n-hentriacontane and methyl lignocerate) are about 80% (60-89%). The isolation of target compounds from standard solutions, plant, and soil samples demonstrates that our optimized method is applicable to different classes of compounds, including n-alkanes, fatty acid esters, long-chain fatty alcohol esters, polycyclic aromatic hydrocarbons (PAHs), and steranes. By injecting 25 μL in large-volume injection mode, over 100 μg of high-purity (>90%) target compounds are harvested within 24 h. The recoveries of two real samples are about 70% (59.9-83.8%) and about 83% (77.2-88.5%), respectively. Compared to previous studies, our study significantly improves the recovery of PCGC, which is important for its wide application in biogeochemistry, environmental sciences, and archaeology. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Urke, Helga Bjørnøy; Contreras, Mariela; Matanda, Dennis Juma
2018-05-07
Optimal early childhood development (ECD) is currently jeopardized for more than 250 million children under five in low- and middle-income countries. The Sustainable Development Goals have called for a renewed emphasis on children's wellbeing, encompassing a holistic approach that ensures nurturing care to facilitate optimal child development. In vulnerable contexts, the extent of a family's available resources can influence a child's potential of reaching its optimal development. Few studies have examined these relationships in low- and middle-income countries using nationally representative samples. The present paper explored the relationships between maternal and paternal psychosocial stimulation of the child, as well as maternal and household resources, and ECD among 2729 children 36–59 months old in Honduras. Data from the Demographic and Health Surveys conducted in 2011–2012 were used. Adjusted logistic regression analyses showed that maternal psychosocial stimulation was positively and significantly associated with ECD in the full, rural, and lowest wealth quintile samples. These findings underscore the importance of maternal engagement in facilitating ECD, and they also highlight the role of context when designing tailored interventions to improve ECD.
Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo
2018-03-30
Pollution and eutrophication are increasing in Lake Yahuarcocha, and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographic information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, namely cluster analysis (CA) and discriminant analysis (DA), was employed to classify the sites and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of the concentration variations of the most important parameters.
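The cluster-analysis step that supports sensor reduction can be sketched with SciPy's hierarchical clustering. The 7 x 14 matrix of site profiles below is simulated, and reducing to four groups simply mirrors the four retained sensor locations; none of this is the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
profiles = rng.normal(size=(7, 14))                  # 7 sites x 14 parameters
z = (profiles - profiles.mean(0)) / profiles.std(0)  # standardize each parameter

tree = linkage(z, method='ward')                     # agglomerative clustering
labels = fcluster(tree, t=4, criterion='maxclust')   # cut the tree into 4 groups
for cluster in np.unique(labels):
    sites = np.flatnonzero(labels == cluster) + 1
    print(f"cluster {cluster}: sites {sites} -> keep one sensor")
```

Sites falling into the same cluster carry largely redundant information, so one representative sensor per cluster preserves most of the spatial signal.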
Optimal sample formulations for DNP SENS: The importance of radical-surface interactions
Perras, Frederic A.; Wang, Lin-Lin; Manzano, J. Sebastian; ...
2017-11-15
The efficacy of dynamic nuclear polarization (DNP) surface-enhanced NMR spectroscopy (SENS) is reviewed for alumina, silica, and ordered mesoporous carbon (OMC) materials, with vastly different surface areas, as a function of the biradical concentration. Importantly, our studies show that the use of a “one-size-fits-all” biradical concentration should be avoided when performing DNP SENS experiments; instead, an optimal concentration should be selected as appropriate for the type of material studied as well as its surface area. In general, materials with greater surface areas require higher radical concentrations for the best possible DNP performance. This result is explained with the use of a thermodynamic model wherein radical-surface interactions are expected to lead to an increase in the local concentration of the polarizing agent at the surface. We also show, using plane-wave density functional theory calculations, that weak radical-surface interactions are the cause of the poor performance of DNP SENS for carbonaceous materials.
Schneider, H D
1979-03-01
The investigation studies the influence of the age composition of secondary groups on their members' satisfaction by interviewing a representative sample of the members of Swiss Protestant choirs. Satisfaction with fellow members increases the lower the mean age of the choir is. Independently, the older the respondents, the more they want young members in their organization. Furthermore, there are indications of an optimal age distance between members. Overall, however, the influence of the age composition on the operationalizations of satisfaction is low. From the optimal age distance, the hypothesis is deduced that tendencies toward age-homogeneous social relations will be more frequent in societies with large behavioral differences between their cohorts.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used. The first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation. The second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function. The third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable, using the variogram model of soil water content estimated in a previous trial. The procedures, or combinations of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of the interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
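A sketch of the MMSD/MWMSD criterion and a single Metropolis move of the simulated-annealing search is given below; the grid, scheme size, jitter scale, and temperature are illustrative assumptions, not MSANOS settings.

```python
import numpy as np

def mmsd(candidates, design, weights=None):
    """Mean of shortest distances: for each evaluation node, the distance
    to its nearest sampling point; optionally weighted (e.g., by the
    digital gradient of the ECa map) as in the MWMSD variant.
    Smaller values mean better spatial coverage."""
    d = np.linalg.norm(candidates[:, None] - design[None, :], axis=-1)
    shortest = d.min(axis=1)
    if weights is None:
        return shortest.mean()
    return np.average(shortest, weights=weights)

# One accept/reject move of a simulated-annealing search (sketch)
rng = np.random.default_rng(1)
grid = rng.uniform(0, 100, size=(500, 2))        # evaluation nodes
design = rng.uniform(0, 100, size=(20, 2))       # current sampling scheme
current = mmsd(grid, design)

proposal = design.copy()
proposal[rng.integers(20)] += rng.normal(0, 5, size=2)  # jitter one point
new = mmsd(grid, proposal)
temp = 1.0
if new < current or rng.random() < np.exp((current - new) / temp):
    design, current = proposal, new              # Metropolis acceptance
print(current)
```

Repeating such moves while lowering `temp` according to a cooling law is the essence of spatial simulated annealing; swapping in the weighted call gives the MWMSD variant.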
Using machine learning to examine medication adherence thresholds and risk of hospitalization.
Lo-Ciganic, Wei-Hsuan; Donohue, Julie M; Thorpe, Joshua M; Perera, Subashan; Thorpe, Carolyn T; Marcum, Zachary A; Gellad, Walid F
2015-08-01
Quality improvement efforts are frequently tied to patients achieving ≥80% medication adherence. However, there is little empirical evidence that this threshold optimally predicts important health outcomes. We applied machine learning to examine how adherence to oral hypoglycemic medications is associated with avoidance of hospitalizations, and to identify adherence thresholds for optimal discrimination of hospitalization risk. This was a retrospective cohort study of 33,130 non-dual-eligible Medicaid enrollees with type 2 diabetes. We randomly selected 90% of the cohort (training sample) to develop the prediction algorithm and used the remainder (testing sample) for validation. We applied random survival forests to identify predictors of hospitalization and fit survival trees to empirically derive adherence thresholds that best discriminate hospitalization risk, using the proportion of days covered (PDC). The outcomes were time to first all-cause and diabetes-related hospitalization. The training and testing samples had similar characteristics (mean age, 48 y; 67% female; mean PDC=0.65). We identified 8 important predictors of all-cause hospitalizations (in rank order): prior hospitalizations/emergency department visits, number of prescriptions, diabetes complications, insulin use, PDC, number of prescribers, Elixhauser index, and eligibility category. The adherence thresholds most discriminating for risk of all-cause hospitalization varied from 46% to 94% according to patient health and medication complexity. PDC was not predictive of hospitalizations in the healthiest or most complex patient subgroups. Adherence thresholds most discriminating of hospitalization risk were not uniformly 80%. Machine-learning approaches may be valuable to identify appropriate patient-specific adherence thresholds for measuring quality of care and targeting nonadherent patients for intervention.
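As a rough, self-contained illustration of deriving a discriminating adherence threshold (not the authors' random-survival-forest pipeline), one can scan candidate PDC cut-points and keep the one that best separates observed hospitalization rates. All data below are simulated and the risk model is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
pdc = rng.beta(2, 1, n)                        # simulated proportion of days covered
# simulated outcome: hospitalization risk falls with adherence (hypothetical model)
hosp = rng.random(n) < (0.4 - 0.25 * pdc)

def split_score(threshold):
    lo, hi = hosp[pdc < threshold], hosp[pdc >= threshold]
    if len(lo) < 50 or len(hi) < 50:           # guard against tiny groups
        return -np.inf
    return abs(lo.mean() - hi.mean())          # crude discrimination measure

grid = np.linspace(0.05, 0.95, 91)
best = max(grid, key=split_score)
print(f"most discriminating PDC threshold: {best:.2f}")
```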
Parameter identification and optimization of slide guide joint of CNC machine tools
NASA Astrophysics Data System (ADS)
Zhou, S.; Sun, B. B.
2017-11-01
The joint surface has an important influence on the performance of CNC machine tools. In order to identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established and an optimum design method is used based on finite element simulation and modal testing. The mode that has the most influence on the dynamics of the slide joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin Hypercube Sampling and Monte Carlo simulation. The results show that the vertical stiffness of the slide joint surface formed by the bed and the slide plate has the most pronounced influence on the structure. This stiffness is therefore taken as the optimization variable, and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values from before and after optimization into the FEM of the machine tool shows that the dynamic performance of the machine tool is improved.
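A minimal sketch of the sampling step, assuming a hypothetical surrogate for the modal frequency as a function of three joint stiffnesses (the paper's finite element model is not reproduced here): Latin Hypercube samples are drawn over the stiffness ranges and a simple correlation-based sensitivity measure is computed.

```python
import numpy as np
from scipy.stats import qmc

def modal_freq(k):
    # hypothetical surrogate: frequency rises with each joint stiffness, dominated by k[:, 0]
    return 120.0 + 40.0 * np.log10(k[:, 0]) + 8.0 * np.log10(k[:, 1]) + 3.0 * np.log10(k[:, 2])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=500)
# stiffness bounds in N/m, chosen hypothetically
k = qmc.scale(unit, [1e8, 1e8, 1e8], [1e10, 1e10, 1e10])
f = modal_freq(k)
for j, name in enumerate(["vertical", "lateral", "longitudinal"]):
    r = np.corrcoef(k[:, j], f)[0, 1]
    print(f"{name} stiffness sensitivity (correlation): {r:+.2f}")
```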
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2015-09-01
Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensities and color-space variables using false-color mapping. The system then increases or decreases the reference intensity following the map data to perform the optimization with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, made it possible to change the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Moreover, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.
Strengths in older adults: differential effect of savoring, gratitude and optimism on well-being.
Salces-Cubero, Isabel María; Ramírez-Fernández, Encarnación; Ortega-Martínez, Ana Raquel
2018-05-21
Objective: The present study aimed to compare the efficacy of three separate strengths-based training interventions - Gratitude, Savoring, and Optimism - in older adults. The sample comprised 124 non-institutionalized older adults (74 women and 50 men) who regularly attend day centers in the provinces of Jaén and Córdoba, southern Spain. Their ages ranged between 60 and 89 years. The measures used were Anxiety, Depression, Life Satisfaction, Positive and Negative Affect, Subjective Happiness, and Resilience. Training in Gratitude and Savoring increased scores in Life Satisfaction, Positive Affect, Subjective Happiness and Resilience, and reduced Negative Affect, whereas training in Optimism failed to produce a significant change in these variables. The Savoring and Optimism interventions decreased scores in Depression but, contrary to hypothesis, this was not the case for Gratitude. These results represent an important step in understanding what types of strengths work best when it comes to enhancing well-being in older adults and consequently helping them tackle the challenges of everyday life and recover as quickly as possible from the adverse situations and events that may arise.
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
NASA Astrophysics Data System (ADS)
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, characterized by low cost, high timeliness, and moderate-to-low spatial resolution, were first used to carry out mixed-pixel spectral decomposition over the North China Plain (NCP) study region, extracting from the initially selected indicators a useful regionalized indicator parameter (RIP), namely the fraction (percentage) of winter wheat planting area in each pixel, which serves as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, building on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed; this provides a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, so as to satisfy the actual needs of sampling surveys.
NASA Astrophysics Data System (ADS)
Ander, Louise; Lark, Murray; Smedley, Pauline; Watts, Michael; Hamilton, Elliott; Fletcher, Tony; Crabbe, Helen; Close, Rebecca; Studden, Mike; Leonardi, Giovanni
2015-04-01
A random sampling design is optimal for assessing outcomes such as the mean of a given variable across an area. However, this optimal sampling design may be compromised to an unknown extent by unavoidable real-world factors; this work examines the extent to which the study design can still be considered random, and the influence this may have on the choice of appropriate statistical data analysis. We take as our example a study that relied on voluntary participation for the sampling of private water tap chemical composition in England, UK. This study was designed and implemented as a categorical, randomised study. The local geological classes were grouped into 10 types, which were considered to be most important in their likely effects on groundwater chemistry (the source of all the tap waters sampled). Locations of the users of private water supplies were made available to the study group by the Local Authority in the area. These were then assigned, based on location, to geological groups 1 to 10 and randomised within each group. However, the permission to collect samples then required active, voluntary participation by householders and thus, unlike many environmental studies, could not always follow the initial sample design. Impediments to participation ranged from 'willing but not available' during the designated sampling period to a lack of response to requests to sample (assumed to be wholly unwilling or unable to participate). Additionally, a small number of unplanned samples were collected via new participants making themselves known to the sampling teams during the sampling period. Here we examine the impact this has on the 'random' nature of the resulting data distribution, by comparison with the non-participating known supplies. We consider the implications this has for the choice of statistical analysis methods to predict values and uncertainty at un-sampled locations.
Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.
Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P
2009-07-29
Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, clustering approaches to delineate molecular operational taxonomic units have been applied for this purpose, using arbitrary choices of distance threshold values and clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.
Monte Carlo simulation of a photodisintegration of 3H experiment in Geant4
NASA Astrophysics Data System (ADS)
Gray, Isaiah
2013-10-01
An upcoming experiment involving photodisintegration of 3H at the High Intensity Gamma-Ray Source facility at Duke University has been simulated in the software package Geant4. CAD models of silicon detectors and wire chambers were imported from Autodesk Inventor using the program FastRad and the Geant4 GDML importer. Sensitive detectors were associated with the appropriate logical volumes in the exported GDML file so that changes in detector geometry will be easily manifested in the simulation. Probability distribution functions for the energy and direction of outgoing protons were generated using numerical tables from previous theory, and energies and directions were sampled from these distributions using a rejection sampling algorithm. The simulation will be a useful tool to optimize detector geometry, estimate background rates, and test data analysis algorithms. This work was supported by the Triangle Universities Nuclear Laboratory REU program at Duke University.
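The rejection step described above can be sketched as follows, with a hypothetical tabulated energy distribution standing in for the theoretical tables used in the actual simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical table: outgoing-proton energy (MeV) vs. unnormalized probability density
e_tab = np.linspace(0.0, 10.0, 101)
p_tab = np.exp(-((e_tab - 4.0) ** 2) / 2.0)   # stand-in for the theory tables

def sample_energy(n):
    p_max = p_tab.max()                        # constant envelope over the table range
    out = []
    while len(out) < n:
        e = rng.uniform(e_tab[0], e_tab[-1])   # propose uniformly over the energy range
        if rng.uniform(0, p_max) < np.interp(e, e_tab, p_tab):
            out.append(e)                      # accept; otherwise reject and retry
    return np.array(out)

energies = sample_energy(10000)
print(f"mean sampled energy: {energies.mean():.2f} MeV")
```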
Annealed Importance Sampling for Neural Mass Models
Penny, Will; Sengupta, Biswa
2016-01-01
Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
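For readers unfamiliar with AIS, a minimal sketch of the algorithm on a toy one-dimensional target (with simple random-walk Metropolis transitions rather than the paper's Langevin proposals) illustrates the annealed weighting; all distributions and schedule settings here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
log_prior  = lambda x: -0.5 * x**2                   # N(0,1) starting distribution
log_target = lambda x: -0.5 * ((x - 3.0) / 0.5)**2   # unnormalized target

betas = np.linspace(0.0, 1.0, 50)                    # geometric annealing schedule
n_chains = 2000
x = rng.standard_normal(n_chains)
log_w = np.zeros(n_chains)
for b0, b1 in zip(betas[:-1], betas[1:]):
    # importance-weight increment between successive intermediate distributions
    log_w += (b1 - b0) * (log_target(x) - log_prior(x))
    # one Metropolis step leaving the intermediate distribution at b1 invariant
    log_p = lambda y: (1 - b1) * log_prior(y) + b1 * log_target(y)
    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_p(prop) - log_p(x)
    x = np.where(accept, prop, x)

w = np.exp(log_w - log_w.max())
print(f"weighted posterior mean: {np.sum(w * x) / np.sum(w):.2f}")  # should approach 3.0
```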
Tools for phospho- and glycoproteomics of plasma membranes.
Wiśniewski, Jacek R
2011-07-01
Analysis of plasma membrane proteins and their posttranslational modifications is considered important for the identification of disease markers and targets for drug treatment. Because of their insolubility in water, the study of plasma membrane proteins by mass spectrometry was difficult for a long time. Recent technological developments in sample preparation, together with important improvements in mass spectrometric analysis, have facilitated the analysis of these proteins and their posttranslational modifications. Large-scale proteomic analyses now allow identification of thousands of membrane proteins from minute amounts of sample. Optimized protocols for affinity enrichment of phosphorylated and glycosylated peptides have set new dimensions in the depth of characterization of these posttranslational modifications of plasma membrane proteins. Here, I summarize recent advances in proteomic technology for the characterization of cell surface proteins and their modifications. The focus is on approaches allowing large-scale mapping rather than analytical methods suitable for studying individual proteins or non-complex mixtures.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
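The core idea, replacing the full dose computation with a random subset of voxels at every descent step, can be sketched as follows; a random matrix stands in for a real dose-influence calculation, and all sizes and step settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 20000, 400
D = rng.random((n_voxels, n_beamlets)) * 0.01   # stand-in dose-influence matrix
d_presc = np.ones(n_voxels)                     # prescribed dose per voxel (hypothetical)
x = np.zeros(n_beamlets)                        # beamlet intensities

step, frac = 0.5, 0.05                          # sample 5% of voxels per iteration
for it in range(300):
    idx = rng.choice(n_voxels, size=int(frac * n_voxels), replace=False)
    resid = D[idx] @ x - d_presc[idx]           # dose error on the sampled voxels only
    grad = 2.0 * D[idx].T @ resid / len(idx)    # stochastic estimate of the full gradient
    x = np.maximum(x - step * grad, 0.0)        # intensities must stay non-negative

full_obj = np.mean((D @ x - d_presc) ** 2)      # evaluate once on the full volume
print(f"final full-volume objective: {full_obj:.4f}")
```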
Svanevik, Cecilie Smith; Roiha, Irja Sunde; Levsen, Arne; Lunestad, Bjørn Tore
2015-10-01
Microbes play an important role in the degradation of fish products, thus better knowledge of the microbiological conditions throughout the fish production chain may help to optimise product quality and resource utilisation. This paper presents the results of a ten-year spot sampling programme (2005-2014) of the commercially most important pelagic fish species harvested in Norway. Fish, surface, and storage water samples were collected from fishing vessels and processing factories. In total, 1,181 samples were assessed with respect to microbiological quality, hygiene and food safety. We introduce a quality and safety assessment scheme for fresh pelagic fish recommending limits for heterotrophic plate counts (HPC), thermotolerant coliforms, enterococci and Listeria monocytogenes. According to the scheme, sub-optimal conditions with respect to quality were found in 25 of 41 samplings, whereas samples were not in compliance concerning hygiene and food safety in 21 and 9 samplings, respectively. The present study has revealed that the quality of pelagic fish can be optimised by improving the hygiene conditions at some critical points in an early phase of the production chain. Thus, the proposed assessment scheme may provide a useful tool for the industry to optimise quality and maintain consumer safety of pelagic fishery products. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of a distributed actuation system, with specific application to morphing air vehicles, is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, an in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members, while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the addressed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept, with design variables that control the system configuration. In other words, the state of each element in the model is a design variable, to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of the distributed actuators, which are represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations themselves. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
Convergent evolution of vascular optimization in kelp (Laminariales).
Drobnitch, Sarah Tepler; Jensen, Kaare H; Prentice, Paige; Pittermann, Jarmila
2015-10-07
Terrestrial plants and mammals, although separated by a great evolutionary distance, have each arrived at a highly conserved body plan in which universal allometric scaling relationships govern the anatomy of vascular networks and key functional metabolic traits. The universality of allometric scaling suggests that these phyla have each evolved an 'optimal' transport strategy that has been overwhelmingly adopted by extant species. To truly evaluate the dominance and universality of vascular optimization, however, it is critical to examine other, lesser-known, vascularized phyla. The brown algae (Phaeophyceae) are one such group--as distantly related to plants as mammals, they have convergently evolved a plant-like body plan and a specialized phloem-like transport network. To evaluate possible scaling and optimization in the kelp vascular system, we developed a model of optimized transport anatomy and tested it with measurements of the giant kelp, Macrocystis pyrifera, which is among the largest and most successful of macroalgae. We also evaluated three classical allometric relationships pertaining to plant vascular tissues with a diverse sampling of kelp species. Macrocystis pyrifera displays strong scaling relationships between all tested vascular parameters and agrees with our model; other species within the Laminariales display weak or inconsistent vascular allometries. The lack of universal scaling in the kelps and the presence of optimized transport anatomy in M. pyrifera raises important questions about the evolution of optimization and the possible competitive advantage conferred by optimized vascular systems to multicellular phyla. © 2015 The Author(s).
Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.
2014-01-01
Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last nine years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification due to the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass-spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet. PMID:24658804
Optimization of the radioimmunoassays for measuring fentanyl and alfentanil in human serum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuettler, J.; White, P.F.
Measurement of serum fentanyl and alfentanil concentrations by radioimmunoassay (RIA) may result in significant errors and high variability when the technique described in the available fentanyl and alfentanil RIA kits is used. The authors found a 29-94% overestimation of measured fentanyl and alfentanil serum levels when 3H-fentanyl or 3H-alfentanil was added last to the mixture of antiserum and sample. This finding is related to a reduction in binding sites for the labeled compounds after preincubation of sample and antiserum. If this sequence is used, it becomes necessary to extend the incubation period up to 6 h for fentanyl and up to 10 h for alfentanil in order to achieve equilibration between unlabeled and labeled drug with respect to antiserum binding. However, when antiserum is added last to the mixture of sample and labeled drug, measurement accuracy and precision for fentanyl and alfentanil serum concentrations are enhanced markedly. In addition, it is important to perform the calibration curves and sample measurements using the same medium (i.e., serum alone or a serum/buffer dilution). In summary, to optimize the RIA for fentanyl and alfentanil, the authors recommend the following: 1) adding the antiserum last to the mixture of sample and labeled drug; 2) performing calibration curves using the patient's blank serum when possible; 3) carefully examining and standardizing each step of the RIA procedure to reduce variability; and, finally, 4) comparing results with those of other established RIA laboratories.
NASA Astrophysics Data System (ADS)
Aifat, N. R.; Yaakop, S.; Md-Zain, B. M.
2016-11-01
The IUCN Red List of Threatened Species has categorized Malaysian primates as ranging from data deficient to critically endangered. Ancient DNA analyses therefore hold great potential for understanding the phylogeny, phylogeography and population history of extinct and extant species. Museum samples are one alternative that provides important sources of biological material for a large proportion of ancient DNA studies. In this study, a total of six museum skin samples from the species Presbytis hosei (4 samples) and Presbytis frontata (2 samples), aged between 43 and 124 years, were extracted to obtain DNA. Extraction was done using the QIAGEN QIAamp DNA Investigator Kit, and the ability of this kit to extract DNA from museum skin samples was tested by amplification of a partial Cyt b sequence using species-specific primers. Two primer pairs were designed, one each for P. hosei and P. frontata. These primer pairs proved efficient in amplifying a 200 bp fragment from the targeted species under the optimized PCR conditions. The performance of the sequences was tested by determining genetic distances within the genus Presbytis in Malaysia. From the analyses, P. hosei is closely related to P. chrysomelas and P. frontata, with values of 0.095 and 0.106, respectively. Cyt b gave clear data for determining relationships among Bornean species. Thus, under the optimized conditions, museum specimens can be used for molecular systematic studies of the Malaysian primates.
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
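A toy version of the idea, drawing candidate sampling times from a density proportional to their information content and reading off a window, might look like this for a one-parameter exponential elimination model (the actual method targets full nonlinear mixed effects designs; the rate constant and tuning values below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.2   # hypothetical elimination rate constant (1/h)

def log_info(t):
    # Fisher information for k in C(t) = C0*exp(-k*t) scales as (t*exp(-k*t))**2
    return 2.0 * (np.log(np.maximum(t, 1e-12)) - k * t)

t, chain = 5.0, []
for _ in range(20000):                      # Metropolis random walk over sampling time
    prop = t + rng.normal(0, 1.0)
    if prop > 0 and np.log(rng.random()) < log_info(prop) - log_info(t):
        t = prop
    chain.append(t)

lo, hi = np.percentile(chain[2000:], [5, 95])   # discard burn-in, take central interval
print(f"optimal time ~ {1/k:.1f} h; 90% sampling window: [{lo:.1f}, {hi:.1f}] h")
```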
Asiabi, Hamid; Yamini, Yadollah; Seidi, Shahram; Esrafili, Ali; Rezaei, Fatemeh
2015-06-05
In this work, a novel and efficient on-line in-tube solid phase microextraction method followed by high performance liquid chromatography was developed for preconcentration and determination of trace amounts of parabens. A nanostructured polyaniline-polypyrrole composite was electrochemically deposited on the inner surface of a stainless steel tube and used as the extraction phase. Several important factors that influence the extraction efficiency, including type of solid-phase coating, extraction and desorption times, flow rates of the sample solution and eluent, pH, and ionic strength of the sample solution were investigated and optimized. Under the optimal conditions, the limits of detection were in the range of 0.02-0.04 μg L⁻¹. This method showed good linearity for parabens in the range of 0.07-50 μg L⁻¹, with coefficients of determination better than 0.998. The intra- and inter-assay precisions (RSD%, n=3) were in the range of 5.9-7.0% and 4.4-5.7% at three concentration levels of 2, 10, and 20 μg L⁻¹, respectively. The extraction recovery values for the spiked samples were in the acceptable range of 80.3-90.2%. The validated method was successfully applied for analysis of methyl-, ethyl-, and propyl parabens in some water, milk, and juice samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Diaby, M; Kinani, S; Genty, C; Bouchonnet, S; Sablier, M; Le Negrate, A; El Fassi, M
2009-12-01
This article establishes an alternative method for the characterization of the volatile organic matter (VOM) contained in deposits from the piston first ring grooves of diesel engines, using a ChromatoProbe direct sample introduction (DSI) device coupled to gas chromatography/mass spectrometry (GC/MS). The addition of an organic solvent during thermal desorption leads to efficient extraction and good chromatographic separation of the extracted products. The method was optimized by investigating the effects of several solvents, the volume added to the solid sample, and the temperature programming of the ChromatoProbe DSI device. The best results for thermal desorption were found using toluene as the extraction solvent and heating the programmable temperature injector from room temperature to 300 °C with a temperature step of 105 °C. Using the optimized thermal desorption conditions, several components were positively identified in the volatile fraction of the deposits: aromatics, antioxidants, and antioxidant degradation products. Moreover, this work highlighted the presence of diesel fuel in the VOM of the piston deposits and provided new evidence that diesel fuel plays no role in the deposit formation process. Most importantly, it opens the possibility of quickly performing the analysis of deposits with small amounts of sample while maintaining a good separation of the volatiles.
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2016-01-01
Background Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. Methods We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women non-compliant with screening within a 5-year or 10-year period under two scenarios: A) self-sampling respondents had moderate under-screening histories, or B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The ‘most cost-effective’ strategy was identified as the strategy just below $100,000 per QALY gained. Results Mailing self-sampling device kits to all women non-compliant with screening within a 5-year or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, ‘10-yearly self-sampling’ is preferred ($95,500 per QALY gained) if ‘5-yearly self-sampling’ could only attract moderate under-screeners; however, ‘5-yearly self-sampling’ is preferred if this strategy could additionally attract severe under-screeners. Conclusions Targeted self-sampling of non-compliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. Impact The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. PMID:27624639
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
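The abstract is truncated at this point. For the simplest two-group case with equal outcome variances, the textbook result is that the variance-minimizing allocation ratio equals the square root of the inverted cost ratio; the sketch below assumes exactly that simplified setting, with hypothetical budget and cost figures, and is not the article's multilevel derivation.

```python
import math

def optimal_allocation(budget, cost_treat, cost_ctrl):
    # classic result for equal variances: n_t / n_c = sqrt(cost_ctrl / cost_treat)
    ratio = math.sqrt(cost_ctrl / cost_treat)
    n_ctrl = budget / (cost_ctrl + cost_treat * ratio)   # spend the whole budget
    return ratio * n_ctrl, n_ctrl

n_t, n_c = optimal_allocation(budget=10000, cost_treat=25.0, cost_ctrl=4.0)
print(f"treatment n ~ {n_t:.0f}, control n ~ {n_c:.0f}")  # cheaper arm gets more units
```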
Fully automatic characterization and data collection from crystals of biological macromolecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander
A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.
2012 Workplace and Gender Relations Survey of Active Duty Members: Nonresponse Bias Analysis Report
2014-01-01
…Control and Prevention), or command climate surveys (e.g., DEOCS). Table 1. Comparison of Trends in WGRA and SOFS-A Response Rates… DMDC draws optimized samples to reduce survey burden on members as well as to produce high levels of precision for important domain estimates… statistical significance at α = .05. Because paygrade is a significant predictor of survey response, we next examined the odds ratio of each paygrade level.
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir
2016-07-15
Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins and ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions for the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
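Fitting the second-order polynomial at the heart of a Box-Behnken analysis is straightforward; the sketch below fits such a model to simulated data and locates the optimum on a grid. The coefficients and data are hypothetical, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# coded design variables: temperature, pressure, time (scaled to [-1, 1])
X = rng.uniform(-1, 1, size=(60, 3))

def design_matrix(X):
    # full second-order model: intercept, linear, interaction, and quadratic terms
    t, p, h = X.T
    return np.column_stack([np.ones(len(X)), t, p, h, t*p, t*h, p*h, t**2, p**2, h**2])

# simulated response with a known optimum near (0, 0, -1) plus noise
y = 5 - 2*X[:, 0]**2 - X[:, 1]**2 - 0.5*X[:, 2] + rng.normal(0, 0.1, len(X))
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best = grid[np.argmax(design_matrix(grid) @ beta)]
print(f"optimal coded settings (temp, press, time): {np.round(best, 2)}")
```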
A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models
NASA Astrophysics Data System (ADS)
Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele
2017-11-01
Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools turns out to be a notoriously difficult task, as they fail to systematically sample the fluctuations around them. We propose instead that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results on constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
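Stripped of the path-integral machinery, the HMC kernel at the center of this approach can be illustrated on a generic unnormalized density. The toy sketch below is ours, with a double-well density standing in for the discretized Burgers action; the step size, trajectory length, and potential are all hypothetical.

```python
import numpy as np

def hmc(log_prob, grad_log_prob, x0, n_samples=5000, eps=0.05, n_leap=25, seed=1):
    rng = np.random.default_rng(seed)
    x, out = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # resample momenta
        xn, pn = x.copy(), p.copy()
        pn += 0.5 * eps * grad_log_prob(xn)       # leapfrog integration
        for _ in range(n_leap - 1):
            xn += eps * pn
            pn += eps * grad_log_prob(xn)
        xn += eps * pn
        pn += 0.5 * eps * grad_log_prob(xn)
        h_old = -log_prob(x) + 0.5 * p @ p        # Hamiltonian before and after
        h_new = -log_prob(xn) + 0.5 * pn @ pn
        if np.log(rng.random()) < h_old - h_new:  # Metropolis correction
            x = xn
        out.append(x.copy())
    return np.array(out)

# toy double-well density, standing in for the Burgers path-integral action
log_prob = lambda x: -np.sum((x**2 - 1.0)**2) / 0.2
grad_log_prob = lambda x: -4.0 * x * (x**2 - 1.0) / 0.2
chain = hmc(log_prob, grad_log_prob, x0=[1.0])
print(f"fraction of samples in the rare left well: {(chain[:, 0] < 0).mean():.3f}")
```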
Li, Zhou; Xiao, Chong; Fan, Shaojuan; Deng, Yu; Zhang, Wenshuai; Ye, Bangjiao; Xie, Yi
2015-05-27
Vacancy is a very important class of phonon scattering center to reduce thermal conductivity for the development of high efficient thermoelectric materials. However, conventional monovacancy may also act as an electron or hole acceptor, thereby modifying the electrical transport properties and even worsening the thermoelectric performance. This issue urges us to create new types of vacancies that scatter phonons effectively while not deteriorating the electrical transport. Herein, taking BiCuSeO as an example, we first reported the successful synergistic optimization of electrical and thermal parameters through Bi/Cu dual vacancies. As expected, as compared to its pristine and monovacancy samples, these dual vacancies further increase the phonon scattering, which results in an ultra low thermal conductivity of 0.37 W m⁻¹ K⁻¹ at 750 K. Most importantly, the clear-cut evidence in positron annihilation unambiguously confirms the interlayer charge transfer between these Bi/Cu dual vacancies, which results in the significant increase of electrical conductivity with relatively high Seebeck coefficient. As a result, BiCuSeO with Bi/Cu dual vacancies shows a high ZT value of 0.84 at 750 K, which is superior to that of its native sample and monovacancies-dominant counterparts. These findings undoubtedly elucidate a new strategy and direction for rational design of high performance thermoelectric materials.
The importance of personality and parental styles on optimism in adolescents.
Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon
2014-01-01
Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted having personality factors (in the first step) and maternal and paternal parenting styles, and demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34, p < .01), extraversion (β = .26, p < .01) and agreeableness (β = .16, p < .01), accounted for 34% of the optimism variance and insignificant variance was predicted exclusively by parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.
On algorithmic optimization of histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Poźniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article concerns optimization methods of data analysis for the X-ray GEM detector system. The offline analysis of collected samples was optimized for MATLAB computations. Compiled functions in the C language were used via the MEX interface. Significant speedup was achieved both for the ordering/preprocessing and for the histogramming of samples. The techniques used and the results obtained are presented.
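The article's speedups come from compiled MATLAB/MEX routines, which are not reproduced here; the same loop-versus-compiled contrast can be shown in a language-neutral way with a Python sketch, using hypothetical 12-bit ADC sample codes.

```python
import numpy as np
import time

samples = np.random.default_rng(0).integers(0, 4096, size=1_000_000)  # hypothetical ADC codes

t0 = time.perf_counter()
hist_loop = [0] * 4096
for s in samples:                                  # naive per-sample loop (slow baseline)
    hist_loop[s] += 1
t1 = time.perf_counter()
hist_vec = np.bincount(samples, minlength=4096)    # vectorized (compiled) equivalent
t2 = time.perf_counter()

assert np.array_equal(hist_loop, hist_vec)         # identical histograms either way
print(f"loop: {t1 - t0:.2f}s, vectorized: {t2 - t1:.4f}s")
```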
Isolation of microplastics in biota-rich seawater samples and marine organisms
NASA Astrophysics Data System (ADS)
Cole, Matthew; Webb, Hannah; Lindeque, Pennie K.; Fileman, Elaine S.; Halsband, Claudia; Galloway, Tamara S.
2014-03-01
Microplastic litter is a pervasive pollutant present in aquatic systems across the globe. A range of marine organisms have the capacity to ingest microplastics, resulting in adverse health effects. Developing methods to accurately quantify microplastics in productive marine waters, and those internalized by marine organisms, is of growing importance. Here we investigate the efficacy of using acid, alkaline and enzymatic digestion techniques in mineralizing biological material from marine surface trawls to reveal any microplastics present. Our optimized enzymatic protocol can digest >97% (by weight) of the material present in plankton-rich seawater samples without destroying any microplastic debris present. In applying the method to replicate marine samples from the western English Channel, we identified 0.27 microplastics m⁻³. The protocol was further used to extract microplastics ingested by marine zooplankton under laboratory conditions. Our findings illustrate that enzymatic digestion can aid the detection of microplastic debris within seawater samples and marine biota.
High Resolution Separations and Improved Ion Production and Transmission in Metabolomics
Metz, Thomas O.; Page, Jason S.; Baker, Erin S.; Tang, Keqi; Ding, Jie; Shen, Yufeng; Smith, Richard D.
2008-01-01
The goal of metabolomics analyses is the detection and quantitation of as many sample components as reasonably possible in order to identify compounds or “features” that can be used to characterize the samples under study. When utilizing electrospray ionization to produce ions for analysis by mass spectrometry (MS), it is important that metabolome sample constituents be efficiently separated prior to ion production, in order to minimize ionization suppression and thereby extend the dynamic range of the measurement, as well as the coverage of the metabolome. Similarly, optimization of the MS inlet and interface can lead to increased measurement sensitivity. This perspective review will focus on the role of high resolution liquid chromatography (LC) separations in conjunction with improved ion production and transmission for LC-MS-based metabolomics. Additional emphasis will be placed on the compromise between metabolome coverage and sample analysis throughput. PMID:19255623
Caswell, J L; Middleton, D M; Gordon, J R
2001-01-01
Interleukin-8 (IL-8), an in vitro and in vivo neutrophil chemoattractant, is expressed at high levels in the lesions observed in bovine pneumonic pasteurellosis. Because of the role of neutrophils in the pathogenesis of pneumonic pasteurellosis, we investigated the relative importance of IL-8 as a neutrophil chemoattractant in this disease. Bronchoalveolar lavage (BAL) fluid was harvested from calves experimentally infected with bovine herpesvirus-1 and challenged with Mannheimia haemolytica. Neutrophil chemotactic activity was measured in pneumonic BAL fluid samples treated with a neutralizing monoclonal antibody to ovine IL-8, and compared to the activity in samples treated with an isotype-matched control antibody. Bronchoalveolar lavage fluid was analyzed at a dilution which induced a half-maximal response, and the concentrations of antibody were optimized in a preliminary experiment. Following incubation of replicate samples of diluted pneumonic bovine BAL fluid with 70 μg/mL of IL-8-neutralizing antibody or control antibody, the neutrophil chemotactic activities of the samples were determined using an in vitro microchemotaxis assay. Overall, pretreatment of BAL fluid samples with neutralizing anti-IL-8 antibody reduced neutrophil chemotactic activity by 15% to 60%, compared to pretreatment with control antibody. This effect was highly significant (P < 0.001), and was present in 5 of 5 samples. These data indicate that IL-8 is an important neutrophil chemoattractant in calves with pneumonic pasteurellosis, but that mediators with actions redundant to those of IL-8 must also be present in the lesions. PMID:11768129
Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry.
Die, Jose V; Roman, Belen; Flores, Fernando; Rowland, Lisa J
2016-01-01
The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges, which center on minimizing the variability in results and on transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data.
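The budget-constrained allocation described above can be sketched with a small enumeration. This is a minimal illustration, not the authors' published script: the variance components, per-replicate costs, and budget below are hypothetical placeholders for values that would come from a pilot experiment.

```python
import itertools

# Hypothetical variance components (squared SDs) for each nested workflow
# step, and hypothetical per-replicate costs; real values would come from
# a pilot experiment such as the one described in the abstract.
var_sampling, var_rt, var_qpcr = 0.20, 0.10, 0.02
cost_sampling, cost_rt, cost_qpcr = 10.0, 4.0, 1.0
budget = 120.0

def mean_variance(n_s, n_rt, n_q):
    """Variance of the experiment mean for a nested design with n_s
    biological samples, n_rt RTs per sample, and n_q qPCRs per RT."""
    return (var_sampling / n_s
            + var_rt / (n_s * n_rt)
            + var_qpcr / (n_s * n_rt * n_q))

def total_cost(n_s, n_rt, n_q):
    return n_s * (cost_sampling + n_rt * (cost_rt + n_q * cost_qpcr))

# Enumerate all affordable plans and keep the one with the smallest variance.
best = min(
    (plan for plan in itertools.product(range(1, 11), repeat=3)
     if total_cost(*plan) <= budget),
    key=lambda plan: mean_variance(*plan),
)
print("optimal (samples, RTs/sample, qPCRs/RT):", best,
      "cost:", total_cost(*best), "variance:", round(mean_variance(*best), 4))
```

Exhaustive enumeration is practical here because realistic replicate counts per stage stay small.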
MALDI-based intact spore mass spectrometry of downy and powdery mildews.
Chalupová, Jana; Sedlářová, Michaela; Helmel, Michaela; Rehulka, Pavel; Marchetti-Deschmann, Martina; Allmaier, Günter; Sebela, Marek
2012-08-01
Fast and easy identification of fungal phytopathogens is of great importance in agriculture. In this context, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has emerged as a powerful tool for analyzing microorganisms. This study deals with a methodology for MALDI-TOF MS-based identification of downy and powdery mildews representing obligate biotrophic parasites of crop plants. Experimental approaches for the MS analyses were optimized using Bremia lactucae, cause of lettuce downy mildew, and Oidium neolycopersici, cause of tomato powdery mildew. This involved determining a suitable concentration of spores in the sample, selection of a proper MALDI matrix, looking for the optimal solvent composition, and evaluation of different sample preparation methods. Furthermore, using different MALDI target materials and surfaces (stainless steel vs polymer-based) and applying various conditions for sample exposure to the acidic MALDI matrix system were investigated. The dried droplet method involving solvent evaporation at room temperature was found to be the most suitable for the deposition of spores and MALDI matrix on the target and the subsequent crystallization. The concentration of spore suspension was optimal between 2 and 5 × 10⁹ spores per mL. The best peptide/protein profiles (in terms of signal-to-noise ratio and number of peaks) were obtained by combining ferulic and sinapinic acids as a mixed MALDI matrix. A pretreatment of the spore cell wall with hydrolases was successfully introduced prior to MS measurements to obtain more pronounced signals. Finally, a novel procedure was developed for direct mass spectra acquisition from infected plant leaves. Copyright © 2012 John Wiley & Sons, Ltd.
Investigating phenology of larval fishes in St. Louis River ...
As part of the development of an early detection monitoring strategy for non-native fishes, larval fish surveys have been conducted since 2012 in the St. Louis River estuary. Survey data demonstrates there is considerable variability in fish abundance and species assemblages across different habitats and at multiple temporal scales. To optimize early detection monitoring we need to understand temporal and spatial patterns of larval fishes related to their development and dispersion, as well as the environmental factors that influence them. In 2016 we designed an experiment to assess the phenological variability in larval fish abundance and assemblages amongst shallow water habitats. Specifically, we sought to contrast different thermal environments and turbidity levels, as well as assess the importance of vegetation in these habitats. To evaluate phenological differences we sampled larval fish bi-weekly at nine locations from mid-May to mid-July. Sampling locations were split between upper estuary and lower estuary to contrast river versus seiche influenced habitats. To assess differences in thermal environments, temperature was monitored every 15 minutes at each sampling location throughout the study, beginning in early April. Our design also included sampling at both vegetated (or pre-vegetated) and non-vegetated stations within each sampling location throughout the study to assess the importance of this habitat variable. Hydroacoustic surveys (Biosonics) were
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
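As a rough sketch of the maxi-min design idea (not the paper's penalized-likelihood pipeline), the snippet below parameterizes a 1D fluence profile with Gaussian basis functions and maximizes the worst-case value of a toy detectability surrogate. The attenuation profile and the surrogate are invented for illustration, and SciPy's differential evolution stands in for CMA-ES.

```python
import numpy as np
from scipy.optimize import differential_evolution

x = np.linspace(0.0, 1.0, 50)            # sample locations in the "volume"
atten = 1.0 + 0.8 * np.sin(3.0 * x)**2   # assumed attenuation profile
centers = np.linspace(0.0, 1.0, 6)       # Gaussian basis centers
width = 0.15

def fluence(coeffs):
    f = sum(c * np.exp(-(x - m)**2 / (2 * width**2))
            for c, m in zip(coeffs, centers))
    return f / f.sum() * len(x)          # normalize total fluence (fixed dose)

def neg_min_detectability(coeffs):
    # Toy surrogate: local detectability grows with the square root of the
    # fluence that survives attenuation at each sample location.
    d = np.sqrt(np.maximum(fluence(coeffs), 1e-9) / atten)
    return -d.min()                      # maxi-min: maximize the worst case

result = differential_evolution(neg_min_detectability,
                                bounds=[(0.01, 1.0)] * len(centers),
                                seed=0, tol=1e-8)
print("optimal basis coefficients:", np.round(result.x, 3))
print("worst-case detectability:", -result.fun)
```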
van der Elst, Kim C. M.; Span, Lambert F. R.; van Hateren, Kai; Vermeulen, Karin M.; van der Werf, Tjip S.; Greijdanus, Ben; Kosterink, Jos G. W.; Uges, Donald R. A.
2013-01-01
Invasive aspergillosis and candidemia are important causes of morbidity and mortality in immunocompromised and critically ill patients. The triazoles voriconazole, fluconazole, and posaconazole are widely used for the treatment and prophylaxis of these fungal infections. Due to the variability of the pharmacokinetics of the triazoles among and within individual patients, therapeutic drug monitoring is important for optimizing the efficacy and safety of antifungal treatment. A dried blood spot (DBS) analysis was developed and was clinically validated for voriconazole, fluconazole, and posaconazole in 28 patients. Furthermore, a questionnaire was administered to evaluate the patients' opinions of the sampling method. The DBS analytical method showed linearity over the concentration range measured for all triazoles. Results for accuracy and precision were within accepted ranges; samples were stable at room temperature for at least 12 days; and different hematocrit values and blood spot volumes had no significant influence. The ratio of the drug concentration in DBS samples to that in plasma was 1.0 for voriconazole and fluconazole and 0.9 for posaconazole. Sixty percent of the patients preferred DBS analysis as a sampling method; 15% preferred venous blood sampling; and 25% had no preferred method. There was significantly less perception of pain with the DBS sampling method (P = 0.021). In conclusion, DBS analysis is a reliable alternative to venous blood sampling and can be used for therapeutic drug monitoring of voriconazole, fluconazole, and posaconazole. Patients were satisfied with DBS sampling and had less pain than with venous sampling. Most patients preferred DBS sampling to venous blood sampling. PMID:23896473
Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra
2013-09-01
This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, elution solvent, and sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases, except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Apparatus and methods for manipulation and optimization of biological systems
NASA Technical Reports Server (NTRS)
Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)
2012-01-01
The invention provides systems and methods for manipulating, e.g., optimizing and controlling, biological systems, e.g., for eliciting a more desired biological response of a biological sample, such as a tissue, organ, and/or a cell. In one aspect, systems and methods of the invention operate by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system, e.g., a bioreactor. In alternative aspects, systems include a device for sustaining cells or tissue samples, one or more actuators for stimulating the samples via biochemical, electromagnetic, thermal, mechanical, and/or optical stimulation, and one or more sensors for measuring a biological response signal of the samples resulting from the stimulation of the sample. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The compositions and methods of the invention can be used, e.g., for systems optimization of any biological manufacturing or experimental system, e.g., bioreactors for proteins, e.g., therapeutic proteins, polypeptides or peptides for vaccines, and the like, small molecules (e.g., antibiotics), polysaccharides, lipids, and the like. Other uses of the apparatus and methods include combination drug therapy, e.g., optimal drug cocktails; directed cell proliferation and differentiation, e.g., in tissue engineering, such as neural progenitor cell differentiation; and the discovery of key parameters in complex biological systems.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators remains unquantified in nationally-representative household surveys. Here, a retrospective analysis of the precision of prevalence from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost effective sampling.
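The design-effect arithmetic underlying such cluster-survey sample sizes can be written in a few lines. The prevalence, target precision, intra-class correlation, and cluster size below are illustrative assumptions, not values estimated in the study, whose Bayesian model is considerably richer.

```python
from math import ceil

def cluster_survey_sample_size(p, margin, icc, cluster_size, z=1.96):
    """Required respondents for a cluster survey: the simple-random-sampling
    size inflated by the design effect DEFF = 1 + (m - 1) * ICC."""
    n_srs = z**2 * p * (1 - p) / margin**2
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_srs * deff)

# Assumed numbers for illustration: 8% parasite prevalence, +/-2 percentage
# point precision, ICC of 0.15, and 20 children sampled per cluster.
print(cluster_survey_sample_size(p=0.08, margin=0.02, icc=0.15, cluster_size=20))
```

Because the simple-random-sampling term shrinks as p falls toward zero more slowly than the precision requirement tightens, required sizes grow as prevalence declines, consistent with the abstract's finding.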
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, at a time where concentrations in the BAL fluid are at or above the LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
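A minimal numerical sketch of the two-point rule described above, assuming a hypothetical one-compartment oral plasma model and a fixed plasma-to-BAL penetration ratio as a crude stand-in for the paper's pulmonary distribution model:

```python
import numpy as np

# Assumed one-compartment oral plasma model with hypothetical parameters.
ka, ke, dose_coeff = 2.0, 0.25, 10.0
def plasma(t):
    return dose_coeff * (np.exp(-ke * t) - np.exp(-ka * t))

# Crude surrogate for the BAL-fluid concentration: a fixed penetration ratio.
penetration, loq = 0.3, 0.5
t = np.linspace(0.01, 24.0, 4800)
c_plasma = plasma(t)

# Early sample: earliest time at which the BAL concentration reaches the LOQ.
early_idx = np.argmax(penetration * c_plasma >= loq)
t_early, c_early = t[early_idx], c_plasma[early_idx]

# Late sample: time on the declining limb of the plasma curve where the
# plasma concentration matches the plasma concentration at the early sample.
peak_idx = np.argmax(c_plasma)
late_idx = peak_idx + np.argmin(np.abs(c_plasma[peak_idx:] - c_early))
print(f"early BAL sample at t = {t_early:.2f} h, late at t = {t[late_idx]:.2f} h")
```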
Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.
2016-02-01
High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
Optimally resolving Lambertian surface orientation
NASA Astrophysics Data System (ADS)
Bertsatos, Ioannis; Makris, Nicholas C.
2003-10-01
Sonar images of remote surfaces are typically corrupted by signal-dependent noise known as speckle. Relative motion between source, surface, and receiver causes the received field to fluctuate over time with circular complex Gaussian random (CCGR) statistics. In many cases of practical importance, Lambert's law is appropriate to model radiant intensity from the surface. In a previous paper, maximum likelihood estimators (MLE) for Lambertian surface orientation have been derived based on CCGR measurements [N. C. Makris, SACLANT Conference Proceedings Series CP-45, 1997, pp. 339-346]. A Lambertian surface needs to be observed from more than one illumination direction for its orientation to be properly constrained. It is found, however, that MLE performance varies significantly with illumination direction due to the inherently nonlinear nature of this problem. It is shown that a large number of samples is often required to optimally resolve surface orientation using the optimality criteria of the MLE derived in Naftali and Makris [J. Acoust. Soc. Am. 110, 1917-1930 (2001)].
Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan
2018-01-01
In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed using a capillary line placed on the center of a metal plate positioned between the UV lamp and nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches where deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied for the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied for the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.
NASA Astrophysics Data System (ADS)
Cordier, G.; Choi, J.; Raguin, L. G.
2008-11-01
Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
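To make the D-optimality step concrete, here is a toy version for a mono-exponential diffusion signal S(b) = S0 exp(-bD): the set of b-values is chosen to maximize the determinant of the Fisher information at nominal parameter values, which shrinks the confidence region of the NLS estimates. The model and all numbers are simplifications for illustration, not the article's microcirculation models.

```python
import numpy as np
from itertools import combinations_with_replacement

# Nominal parameter values (assumed) and a candidate grid of b-values.
S0_nom, D_nom = 1.0, 1.5e-3                 # D in mm^2/s
candidates = np.linspace(0.0, 1500.0, 31)   # b-values in s/mm^2

def fisher_logdet(bvals):
    """Log-determinant of J^T J for the design, where J holds the
    sensitivities dS/dS0 and dS/dD at the nominal parameters."""
    b = np.asarray(bvals, dtype=float)
    s = S0_nom * np.exp(-b * D_nom)
    J = np.column_stack([s / S0_nom, -b * s])
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

# Exhaustive search over 4-point designs drawn from the candidate grid.
best = max(combinations_with_replacement(candidates, 4), key=fisher_logdet)
print("D-optimal b-values:", best)
```

The same machinery extends to richer models by widening the Jacobian; Monte Carlo replication of the NLS fit can then check the asymptotic covariance prediction, as done in the article.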
Ramirez, Daniela Andrea; Locatelli, Daniela Ana; Torres-Palazzolo, Carolina Andrea; Altamirano, Jorgelina Cecilia; Camargo, Alejandra Beatriz
2017-01-15
Organosulphur compounds (OSCs) present in garlic (Allium sativum L.) are responsible for several biological properties. Functional food research indicates the importance of quantifying these compounds in food matrices and biological fluids. For this purpose, this paper introduces a novel methodology based on dispersive liquid-liquid microextraction (DLLME) coupled to high performance liquid chromatography with ultraviolet detection (HPLC-UV) for the extraction and determination of organosulphur compounds in different matrices. The target analytes were allicin, (E)- and (Z)-ajoene, 2-vinyl-4H-1,2-dithiin (2-VD), diallyl sulphide (DAS) and diallyl disulphide (DADS). The microextraction technique was optimized using an experimental design, and the analytical performance was evaluated under optimum conditions. The desirability function presented an optimal value for 600 μL of chloroform as extraction solvent using acetonitrile as dispersant. The method proved to be reliable, precise and accurate. It was successfully applied to determine OSCs in cooked garlic samples as well as blood plasma and digestive fluids. Copyright © 2016 Elsevier Ltd. All rights reserved.
Eslami, Babak; Ebeling, Daniel
2014-01-01
This paper presents experiments on Nafion® proton exchange membranes and numerical simulations illustrating the trade-offs between the optimization of compositional contrast and the modulation of tip indentation depth in bimodal atomic force microscopy (AFM). We focus on the original bimodal AFM method, which uses amplitude modulation to acquire the topography through the first cantilever eigenmode, and drives a higher eigenmode in open-loop to perform compositional mapping. This method is attractive due to its relative simplicity, robustness and commercial availability. We show that this technique offers the capability to modulate tip indentation depth, in addition to providing sample topography and material property contrast, although there are important competing effects between the optimization of sensitivity and the control of indentation depth, both of which strongly influence the contrast quality. Furthermore, we demonstrate that the two eigenmodes can be highly coupled in practice, especially when highly repulsive imaging conditions are used. Finally, we also offer a comparison with a previously reported trimodal AFM method, where the above competing effects are minimized. PMID:25161847
Three-dimensional-printed gas dynamic virtual nozzles for x-ray laser sample delivery
Nelson, Garrett; Kirian, Richard A.; Weierstall, Uwe; Zatsepin, Nadia A.; Faragó, Tomáš; Baumbach, Tilo; Wilde, Fabian; Niesler, Fabian B. P.; Zimmer, Benjamin; Ishigami, Izumi; Hikita, Masahide; Bajt, Saša; Yeh, Syun-Ru; Rousseau, Denis L.; Chapman, Henry N.; Spence, John C. H.; Heymann, Michael
2016-01-01
Reliable sample delivery is essential to biological imaging using X-ray Free Electron Lasers (XFELs). Continuous injection using the Gas Dynamic Virtual Nozzle (GDVN) has proven valuable, particularly for time-resolved studies. However, many important aspects of GDVN functionality have yet to be thoroughly understood and/or refined due to fabrication limitations. We report the application of 2-photon polymerization as a form of high-resolution 3D printing to fabricate high-fidelity GDVNs with submicron resolution. This technique allows rapid prototyping of a wide range of different types of nozzles from standard CAD drawings and optimization of crucial dimensions for optimal performance. Three nozzles were tested with pure water to determine general nozzle performance and reproducibility, with nearly reproducible off-axis jetting being the result. X-ray tomography and index matching were successfully used to evaluate the interior nozzle structures and identify the cause of off-axis jetting. Subsequent refinements to fabrication resulted in straight jetting. A performance test of printed nozzles at an XFEL provided high quality femtosecond diffraction patterns. PMID:27410079
NASA Technical Reports Server (NTRS)
Borsody, J.
1976-01-01
Mathematical equations are derived by using the Maximum Principle to obtain the maximum payload capability of a reusable tug for planetary missions. The mathematical formulation includes correction for nodal precession of the space shuttle orbit. The tug performs this nodal correction in returning to this precessed orbit. The sample case analyzed represents an inner planet mission as defined by the declination (fixed) and right ascension of the outgoing asymptote and the mission energy. Payload capability is derived for a typical cryogenic tug and the sample case with and without perigee propulsion. Optimal trajectory profiles and some important orbital elements are also discussed.
Effect of heat-setting on UV protection and antibacterial properties of cotton/spandex fabric
NASA Astrophysics Data System (ADS)
Pervez, M. N.; Talukder, M. E.; Shafiq, F.; Hasan, K. M. F.; Taher, M. A.; Meraz, M. M.; Cai, Y.; Lin, Lina
2018-01-01
A novel approach combining the heat-setting process under optimized conditions at C3 (140°C, 45 s) with functional finishing, i.e., UV protection and antibacterial properties, of cotton/spandex fabric was studied in this research. Experimental results showed that the heat-treated cotton/spandex fabrics achieved improved antibacterial efficacy and durable UV protection, with sample A3 performing best among all samples, and that mechanical properties also improved as the temperature rose from 120 to 140°C. Ultraviolet (UV) radiation protection and antibacterial properties are becoming increasingly necessary for human health; textiles play an important role here, and this report should help meet that demand.
Optimization of cDNA microarrays procedures using criteria that do not rely on external standards.
Bruland, Torunn; Anderssen, Endre; Doseth, Berit; Bergum, Hallgeir; Beisvag, Vidar; Laegreid, Astrid
2007-10-18
The measurement of gene expression using microarray technology is a complicated process in which a large number of factors can be varied. Due to the lack of standard calibration samples such as are used in traditional chemical analysis it may be a problem to evaluate whether changes done to the microarray procedure actually improve the identification of truly differentially expressed genes. The purpose of the present work is to report the optimization of several steps in the microarray process both in laboratory practices and in data processing using criteria that do not rely on external standards. We performed a cDNA microarray experiment including RNA from samples with high expected differential gene expression termed "high contrasts" (rat cell lines AR42J and NRK52E) compared to self-self hybridization, and optimized a pipeline to maximize the number of genes found to be differentially expressed in the "high contrasts" RNA samples by estimating the false discovery rate (FDR) using a null distribution obtained from the self-self experiment. The proposed high-contrast versus self-self method (HCSSM) requires only four microarrays per evaluation. The effects of blocking reagent dose, filtering, and background corrections methodologies were investigated. In our experiments a dose of 250 ng LNA (locked nucleic acid) dT blocker, no background correction and weight based filtering gave the largest number of differentially expressed genes. The choice of background correction method had a stronger impact on the estimated number of differentially expressed genes than the choice of filtering method. Cross platform microarray (Illumina) analysis was used to validate that the increase in the number of differentially expressed genes found by HCSSM was real. The results show that HCSSM can be a useful and simple approach to optimize microarray procedures without including external standards. Our optimizing method is highly applicable both to long oligo-probe microarrays, which have become commonly used for well characterized organisms such as man, mouse and rat, and to cDNA microarrays, which are still of importance for organisms with incomplete genome sequence information such as many bacteria, plants and fish. PMID:17949480
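The FDR logic behind HCSSM, treating the self-self hybridization as an empirical null distribution, can be sketched as follows; the simulated per-gene statistics and the 5% spiked fraction are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-gene statistics: |log-ratio| from the high-contrast
# hybridization and from the self-self hybridization (the empirical null).
obs = np.abs(rng.normal(0, 1, 10000) + (rng.random(10000) < 0.05) * 2.5)
null = np.abs(rng.normal(0, 1, 10000))   # self-self: no true differences

def empirical_fdr(threshold):
    """FDR estimate at a cutoff: expected false calls (counted from the
    self-self null) divided by the number of genes actually called."""
    called = np.sum(obs >= threshold)
    expected_false = np.sum(null >= threshold)
    return expected_false / max(called, 1)

for t in (1.5, 2.0, 2.5, 3.0):
    print(f"cutoff {t}: {np.sum(obs >= t)} genes called, "
          f"estimated FDR = {empirical_fdr(t):.3f}")
```

Under this criterion, a protocol change "improves" the pipeline if it increases the number of genes called at a fixed estimated FDR, which is exactly how the dose, filtering, and background-correction choices were compared.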
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.
2012-04-01
This work is driven by the needs of next-generation short-term optimization methodology for hydro power production. Stochastic optimization is about to be introduced, i.e., optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e., water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment maybe days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e., to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of run-off forecasts as input. Hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on the Box-Cox power transform and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample run-off ensembles from this model, and they inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, and simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study such as heteroscedasticity, lead-time-varying temporal dependency, and lead-time-varying inter-catchment dependency. Our model is evaluated using CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic run-off forecast, a climatology forecast, and a persistence forecast, and is found to be the best probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
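A minimal sketch of drawing ensembles from such a post-processed forecast: a Gaussian copula with separable catchment-by-lead-time correlation and Box-Cox marginals. The correlation structure, transformation parameter, error scale, and forecast values below are all invented; the paper's fitted error model is richer.

```python
import numpy as np

rng = np.random.default_rng(42)
n_catch, n_lead = 5, 10

# Assumed separable dependence: AR(1)-type correlation across lead times and
# exchangeable correlation across catchments (the paper fits a richer copula).
lead_corr = 0.8 ** np.abs(np.subtract.outer(np.arange(n_lead), np.arange(n_lead)))
catch_corr = np.full((n_catch, n_catch), 0.5) + 0.5 * np.eye(n_catch)
sigma = np.kron(catch_corr, lead_corr)

def inv_boxcox(y, lam):
    # Inverse of the Box-Cox transform y = (x^lam - 1) / lam.
    return np.power(lam * y + 1.0, 1.0 / lam)

def sample_runoff_ensemble(mean_forecast, lam=0.3, scale=0.2, n_members=100):
    """Draw ensemble members that inherit catchment and lead-time dependence
    from the Gaussian copula, with Box-Cox-transformed marginals."""
    z = rng.multivariate_normal(np.zeros(sigma.shape[0]), sigma, size=n_members)
    # Gaussian errors on the Box-Cox scale around the transformed forecast.
    y = (np.power(mean_forecast.ravel(), lam) - 1.0) / lam + scale * z
    return inv_boxcox(y, lam).reshape(n_members, n_catch, n_lead)

forecast = np.full((n_catch, n_lead), 50.0)  # deterministic forecast, m^3/s
ens = sample_runoff_ensemble(forecast)
print(ens.shape, round(ens.mean(), 2), round(ens.std(), 2))
```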
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
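A much-simplified variant of this idea can be posed as a linear program: per-feature weights in [0, 1] are chosen to shrink the mean L1 distance to same-class points while a constraint keeps the mean weighted distance to the other class bounded below. This toy reduction, with synthetic data, is for illustration only and is not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
# Toy two-class data in 6 dimensions; only features 0 and 1 are informative.
X0 = rng.normal(0, 1, (40, 6)); X0[:, :2] -= 2.0
X1 = rng.normal(0, 1, (40, 6)); X1[:, :2] += 2.0

def local_feature_weights(x_query, same, other):
    """Localized feature selection around one query point: weights f in
    [0,1]^d minimize the mean per-feature L1 spread within the class,
    subject to the mean weighted distance to the other class being >= 1."""
    d_same = np.abs(same - x_query).mean(axis=0)    # per-feature intra gap
    d_other = np.abs(other - x_query).mean(axis=0)  # per-feature inter gap
    res = linprog(c=d_same,                     # minimize intra-class spread
                  A_ub=[-d_other], b_ub=[-1.0],  # inter-class spread >= 1
                  bounds=[(0.0, 1.0)] * len(d_same))
    return res.x

f = local_feature_weights(X0[0], X0[1:], X1)
print(np.round(f, 2))  # the informative features should get larger weights
```

Because the weights are recomputed per region, different neighborhoods can end up with different active feature sets, which is the core property of the localized approach.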
Saussele, Susanne; Hehlmann, Rüdiger; Fabarius, Alice; Jeromin, Sabine; Proetel, Ulrike; Rinaldetti, Sebastien; Kohlbrenner, Katharina; Einsele, Hermann; Falge, Christiane; Kanz, Lothar; Neubauer, Andreas; Kneba, Michael; Stegelmann, Frank; Pfreundschuh, Michael; Waller, Cornelius F; Oppliger Leibundgut, Elisabeth; Heim, Dominik; Krause, Stefan W; Hofmann, Wolf-Karsten; Hasford, Joerg; Pfirrmann, Markus; Müller, Martin C; Hochhaus, Andreas; Lauseker, Michael
2018-05-01
Major molecular remission (MMR) is an important therapy goal in chronic myeloid leukemia (CML). So far, MMR is not a failure criterion according to the ELN management recommendations, leading to uncertainty about when to change therapy in CML patients not reaching MMR after 12 months. At monthly landmarks, hazard ratios (HR) were estimated for different molecular remission statuses for patients registered to CML study IV, who were divided into a learning and a validation sample. The minimum HR for MMR was found at 2.5 years with 0.28 (compared to patients without remission). In the validation sample, a significant advantage for progression-free survival (PFS) for patients in MMR could be detected (p-value 0.007). The optimal time to predict PFS in patients with MMR could be validated in an independent sample at 2.5 years. With our model we provide a suggestion for when to define lack of MMR as therapy failure and thus when a treatment change should be considered. The optimal response time for 1% BCR-ABL at about 12-15 months was confirmed, and for deep molecular remission no specific time point was detected. Nevertheless, it was demonstrated that the earlier the MMR is achieved, the higher is the chance to attain deep molecular response later.
NASA Astrophysics Data System (ADS)
Ciarletti, Valérie; Clifford, Stephen; Plettemeier, Dirk; Le Gall, Alice; Hervé, Yann; Dorizon, Sophie; Quantin-Nataf, Cathy; Benedix, Wolf-Stefan; Schwenzer, Susanne; Pettinelli, Elena; Heggy, Essam; Herique, Alain; Berthelier, Jean-Jacques; Kofman, Wlodek; Vago, Jorge L.; Hamran, Svein-Erik; WISDOM Team
2017-07-01
The search for evidence of past or present life on Mars is the principal objective of the 2020 ESA-Roscosmos ExoMars Rover mission. If such evidence is to be found anywhere, it will most likely be in the subsurface, where organic molecules are shielded from the destructive effects of ionizing radiation and atmospheric oxidants. For this reason, the ExoMars Rover mission has been optimized to investigate the subsurface to identify, understand, and sample those locations where conditions for the preservation of evidence of past life are most likely to be found. The Water Ice Subsurface Deposit Observation on Mars (WISDOM) ground-penetrating radar has been designed to provide information about the nature of the shallow subsurface over depths ranging from 3 to 10 m (with a vertical resolution of up to 3 cm), depending on the dielectric properties of the regolith. This depth range is critical to understanding the geologic evolution, stratigraphy, and the distribution and state of subsurface H2O, which provide important clues in the search for life and the identification of optimal drilling sites for investigation and sampling by the Rover's 2-m drill. WISDOM will help ensure the safety and success of drilling operations by identification of potential hazards that might interfere with retrieval of subsurface samples.
Wultsch, Claudia; Waits, Lisette P; Kelly, Marcella J
2014-11-01
There is a great need to develop efficient, noninvasive genetic sampling methods to study wild populations of multiple, co-occurring, threatened felids. This is especially important for molecular scatology studies occurring in challenging tropical environments where DNA degrades quickly and the quality of faecal samples varies greatly. We optimized 14 polymorphic microsatellite loci for jaguars (Panthera onca), pumas (Puma concolor) and ocelots (Leopardus pardalis) and assessed their utility for cross-species amplification. Additionally, we tested their reliability for species and individual identification using DNA from faeces of wild felids detected by a scat detector dog across Belize in Central America. All microsatellite loci were successfully amplified in the three target species, were polymorphic with average expected heterozygosities of HE = 0.60 ± 0.18 (SD) for jaguars, HE = 0.65 ± 0.21 (SD) for pumas and HE = 0.70 ± 0.13 (SD) for ocelots and had an overall PCR amplification success of 61%. We used this nuclear DNA primer set to successfully identify species and individuals from 49% of 1053 field-collected scat samples. This set of optimized microsatellite multiplexes represents a powerful tool for future efforts to conduct noninvasive studies on multiple, wild Neotropical felids. © 2014 John Wiley & Sons Ltd.
Alothman, Zeid A; Habila, Mohamed; Yilmaz, Erkan; Soylak, Mustafa
2013-01-01
A simple, environmentally friendly, and efficient dispersive liquid-liquid microextraction method combined with microsample injection flame atomic absorption spectrometry was developed for the separation and preconcentration of Cu(II). 2-(5-Bromo-2-pyridylazo)-5-(diethylamino)phenol (5-Br-PADAP) was used to form a hydrophobic complex of Cu(II) ions in the aqueous phase before extraction. To extract the Cu(II)-5-Br-PADAP complex from the aqueous phase to the organic phase, 2.0 mL of acetone as a disperser solvent and 200 microL of chloroform as an extraction solvent were used. The influences of important analytical parameters, such as the pH, types and volumes of the extraction and disperser solvents, amount of chelating agent, sample volume, and matrix effects, on the microextraction procedure were evaluated and optimized. Using the optimal conditions, the LOD, LOQ, preconcentration factor, and RSD were determined to be 1.4 microg/L, 4.7 microg/L, 120, and 6.5%, respectively. The accuracy of the proposed method was investigated using standard addition/recovery tests. The analysis of certified reference materials produced satisfactory analytical results. The developed method was applied for the determination of Cu in real samples.
Successful aging in Spanish older adults: the role of psychosocial resources.
Dumitrache, Cristina G; Rubio, Laura; Cordón-Pozo, Eulogio
2018-05-25
Background: Psychological and social resources such as extraversion, optimism, social support, or social networks contribute to adaptation and to successful aging. Building on assumptions derived from successful aging and from the developmental adaptation models, this study aims to analyze the joint impact of different psychosocial resources, such as personality, social relations, health, and socio-demographic characteristics on life satisfaction in a group of people aged 65 years and older from Spain. A cross-sectional survey using non-proportional quota sampling was carried out. The sample comprised 406 community-dwelling older adults (M = 74.88, SD = 6.75). In order to collect the data, face-to-face interviews were individually conducted. A structural equation model (SEM) was carried out using the PLS software. The results of the SEM model showed that, within this sample, psychosocial variables explain 47.4% of the variance in life satisfaction. Social relations and personality, specifically optimism, were strongly related with life satisfaction, while health status and socio-demographic characteristics were modestly associated with life satisfaction. Findings support the view that psychosocial resources are important for successful aging and therefore should be included in successful aging models. Furthermore, interventions aimed at fostering successful aging should take into account the role of psychosocial variables.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
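To illustrate variance-based network design, the sketch below uses a greedy stand-in for the paper's approach (which optimizes sampling density, pattern, and neighborhood size directly): candidate sites are added one at a time to minimize the average simple-kriging variance over the domain, under an assumed exponential covariance model.

```python
import numpy as np

# Exponential covariance model with assumed sill and range.
sill, crange = 1.0, 30.0
def cov(h):
    return sill * np.exp(-h / crange)

grid = np.array([(i, j) for i in range(0, 101, 5) for j in range(0, 101, 5)],
                dtype=float)              # estimation locations
candidates = grid.copy()                  # candidate well sites

def avg_kriging_variance(sites):
    d_ss = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    d_sg = np.linalg.norm(sites[:, None] - grid[None, :], axis=-1)
    K = cov(d_ss) + 1e-9 * np.eye(len(sites))   # jitter for stability
    k = cov(d_sg)                               # site-to-grid covariances
    # Simple-kriging variance at every grid node, averaged over the domain.
    var = sill - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
    return var.mean()

# Greedy design: repeatedly add the candidate that most reduces the index.
sites = np.empty((0, 2))
for _ in range(8):
    best = min(candidates,
               key=lambda c: avg_kriging_variance(np.vstack([sites, c])))
    sites = np.vstack([sites, best])
print(np.round(sites, 1))
```

As in the paper, no field data are needed: the kriging variance depends only on the sample geometry and the covariance (variogram) model, so designs can be compared before any drilling or measurement takes place.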
Optimized clustering estimators for BAO measurements accounting for significant redshift uncertainty
NASA Astrophysics Data System (ADS)
Ross, Ashley J.; Banik, Nilanjan; Avila, Santiago; Percival, Will J.; Dodelson, Scott; Garcia-Bellido, Juan; Crocce, Martin; Elvin-Poole, Jack; Giannantonio, Tommaso; Manera, Marc; Sevilla-Noarbe, Ignacio
2017-12-01
We determine an optimized clustering statistic to be used for galaxy samples with significant redshift uncertainty, such as those that rely on photometric redshifts. To do so, we study the baryon acoustic oscillation (BAO) information content as a function of the orientation of galaxy clustering modes with respect to their angle to the line of sight (LOS). The clustering along the LOS, as observed in a redshift-space with significant redshift uncertainty, has contributions from clustering modes with a range of orientations with respect to the true LOS. For redshift uncertainty σz ≥ 0.02(1 + z), we find that while the BAO information is confined to transverse clustering modes in the true space, it is spread nearly evenly in the observed space. Thus, measuring clustering in terms of the projected separation (regardless of the LOS) is an efficient and nearly lossless compression of the signal for σz ≥ 0.02(1 + z). For reduced redshift uncertainty, a more careful consideration is required. We then use more than 1700 realizations (combining two separate sets) of galaxy simulations mimicking the Dark Energy Survey Year 1 (DES Y1) sample to validate our analytic results and optimized analysis procedure. We find that using the correlation function binned in projected separation, we can achieve uncertainties that are within 10 per cent of those predicted by Fisher matrix forecasts. We predict that DES Y1 should achieve a 5 per cent distance measurement using our optimized methods. We expect the results presented here to be important for any future BAO measurements made using photometric redshift data.
Factors influencing preclinical in vivo evaluation of mumps vaccine strain immunogenicity
Halassy, B; Kurtović, T; Brgles, M; Lang Balija, M; Forčić, D
2015-01-01
Immunogenicity testing in animals is a necessary preclinical assay for demonstration of vaccine efficacy, the results of which are often the basis for the decision whether to proceed with or withdraw from the further development of the novel vaccine candidate. However, in vivo assays are rarely, if at all, optimized and validated. Here we clearly demonstrate the importance of in vivo assay (mumps virus immunogenicity testing in guinea pigs) optimization for gaining reliable results and the suitability of a fractional factorial design of experiments (DoE) for such a purpose. By the use of a resolution IV DoE (a 2^(4-1) fractional design) we clearly revealed that the parameters significantly increasing assay sensitivity were the interval between animal immunizations, followed by the body weight of the experimental animals. The quantity (0 versus 2%) of the stabilizer (fetal bovine serum, FBS) in the sample was shown to be a non-influencing parameter in the DoE setup. However, a separate experiment investigating only the FBS influence, performed with the other parameters optimally set, showed that FBS also influences the results of the immunogenicity assay. This finding indicated that (a) factors with strong influence on the measured outcome can hide the effects of parameters with modest/low influence and (b) the matrix of mumps virus samples to be compared for immunogenicity must be identical for reliable virus immunogenicity comparison. Finally, the three mumps vaccine strains widely used for decades in licensed vaccines were for the first time compared in an animal model, and the results obtained were in line with their reported immunogenicity in the human population, supporting the predictive power of the optimized in vivo assay. PMID:26376015
Coneo, A M C; Thompson, A R; Lavda, A
2017-05-01
Individuals with visible skin conditions often experience stigmatization and discrimination. This may trigger maladaptive responses such as feelings of anger and hostility, with negative consequences to social interactions and relationships. To identify psychosocial factors contributing to aggression levels in dermatology patients. Data were obtained from 91 participants recruited from outpatient clinics in the north of England, U.K. This study used dermatology-specific data extracted from a large U.K. database of medical conditions collected by The Appearance Research Collaboration. This study looked at the impact of optimism, perceptions of social support and social acceptance, fear of negative evaluation, appearance concern, appearance discrepancy, social comparison and well-being on aggression levels in a sample of dermatology patients. In order to assess the relationship between variables, a hierarchical regression analysis was performed. Dispositional style (optimism) was shown to have a strong negative relationship with aggression (β = -0·37, t = -2·97, P = 0·004). Higher levels of perceived social support were significantly associated with lower levels of aggression (β = -0·26, t = -2·26, P = 0·02). Anxiety was also found to have a significant positive relationship with aggression (β = 0·36, t = 2·56, P = 0·01). This study provides evidence for the importance of perceived social support and optimism in psychological adjustment to skin conditions. Psychosocial interventions provided to dermatology patients might need to address aggression levels and seek to enhance social support and the ability to be optimistic. © 2016 British Association of Dermatologists.
Dorival-García, N; Labajo-Recio, C; Zafra-Gómez, A; Juárez-Jiménez, B; Vílchez, J L
2015-06-01
The use of compost from sewage sludge for agricultural application is nowadays increasing, since composting is recognized as one of the most important recycling options for this material, being a source of nutrients for plants but also of contamination by persistent pollutants. In the present work, a multi-residue analytical method for the determination of 17 quinolone antibiotic residues in compost using multivariate optimization strategies and ultra high performance liquid chromatography-tandem mass spectrometry has been developed. It is based on the use of microwave-assisted extraction under drastic conditions with ACN:m-phosphoric acid (1% w/v) for 5 min at 120°C, in order to achieve a quantitative extraction of the compounds (>76% extraction recovery). Extracts were cleaned up by salt-assisted liquid-liquid extraction (SALLE) with NaCl at pH 1.5 (with HClO4) and then using a dispersive sorbent (PSA). After LC separation, the MS conditions, in positive electrospray ionization mode (ESI), were individually optimized for each analyte to obtain maximum sensitivity in the selected reaction monitoring mode (SRM). The analytes were separated in less than 7 min. Cincophen was used as surrogate standard. The limits of detection ranged from 0.2 to 0.5 ng g⁻¹, and the limits of quantification from 0.5 to 1.5 ng g⁻¹, while intra- and inter-day variability (% RSD) was under 7% in all cases. A recovery assay was performed with spiked samples. Recoveries ranging from 95.3% to 106.2% were obtained. The cleanup procedure significantly reduced matrix effects, which constitutes an important achievement considering the drawbacks of matrix components for quality and validation parameters. This method was applied to several commercial compost samples. Only 6 of the studied antibiotics were not detected in any of the samples. The antibiotics with the highest concentrations were ciprofloxacin (836 ng g⁻¹), ofloxacin (719 ng g⁻¹), and enrofloxacin (674 ng g⁻¹), which were also the only ones found in all the analyzed samples. The results showed that this method could also be potentially adapted for the analysis of other strongly sorbed basic pharmaceuticals in solid environmental matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as the initial condition, to determine which alternative yields the highest forecasting accuracy. To test forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that particle swarm optimization is a good tool for solving parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not perform as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
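The NGBM(1,1) construction described above follows the standard grey-model recipe (1-AGO accumulation, background values, least-squares estimation of the whitening-equation coefficients), with PSO searching over the Bernoulli exponent. The following Python sketch illustrates that general idea using textbook formulas; the function names, PSO constants, and the choice of in-sample MAPE as fitness are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ngbm_fit(x0, n):
    """Fit NGBM(1,1) with Bernoulli exponent n (n != 1) and return the fitted series.
    Standard grey-model construction: 1-AGO, background values, least squares."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # first-order accumulated series (1-AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean generating) values
    B = np.column_stack([-z1, z1 ** n])      # regressors of dx1/dt + a*z1 = b*z1**n
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0))
    # Time-response function, with the first observation as initial condition
    x1_hat = ((x1[0] ** (1 - n) - b / a) * np.exp(-a * (1 - n) * k) + b / a) ** (1 / (1 - n))
    return np.diff(x1_hat, prepend=0.0)      # inverse AGO back to the original scale

def pso_exponent(x0, particles=20, iters=100, lo=-1.0, hi=0.99, seed=0):
    """Minimal PSO over the exponent n, scoring candidates by in-sample MAPE."""
    rng = np.random.default_rng(seed)
    def mape(n):
        err = np.mean(np.abs((ngbm_fit(x0, n) - np.asarray(x0)) / np.asarray(x0)))
        return err if np.isfinite(err) else np.inf
    pos = rng.uniform(lo, hi, particles)
    vel = np.zeros(particles)
    pbest, pcost = pos.copy(), np.array([mape(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(particles), rng.random(particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([mape(p) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        gbest = pbest[pcost.argmin()]
    return gbest
```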
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hair and that of drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
Bailey, Regan L.; Denby, Nigel; Haycock, Bryan; Sherif, Katherine; Steinbaum, Suzanne; von Schacky, Clemens
2015-01-01
Limited data exist on consumer beliefs and practices on the role of omega-3 fatty acid and vitamin D dietary supplements in health. For this reason, the Global Health and Nutrition Alliance conducted an online survey in 3 countries (n = 3030; United States = 1022, Germany = 1002, United Kingdom = 1006) of a convenience sample of adults (aged 18–66 years) who represented the age, gender, and geographic composition within each country. More than half of the sample (52%) believed they consume all the key nutrients needed for optimal nutrition through food sources alone; fewer women (48%) than men (57%), and fewer middle-aged adults (48%) than younger (18–34 years [56%]) and older (≥55 years [54%]) adults agreed an optimal diet could be achieved through diet alone. Overall, 32% reported using omega-3s (45% in the United States, 29% in the United Kingdom, and 24% in Germany), and 42% reported using vitamin D dietary supplements (62% in the United States, 32% in the United Kingdom, and 31% in Germany). Seventy-eight percent of the sample agreed that omega-3 fatty acids are beneficial for heart health; however, only 40% thought that their diet was adequate in omega-3 fatty acids. Similarly, 84% agreed that vitamin D was beneficial to overall health, and 55% of adults from all countries were unsure or did not think they consume enough vitamin D in their diet. For most findings in our study, US adults reported more dietary supplement use and had stronger perceptions about the health effects of omega-3s and vitamin D than their counterparts in the United Kingdom and Germany. Nevertheless, the consistent finding across all countries was that adults are aware of the importance of nutrition, and most adults believe their diet is optimal for health. Our data serve to alert dietitians and health professionals that consumers may have an elevated sense of the healthfulness of their own diets and may require guidance and education to achieve optimal diets. PMID:26663954
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
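As a rough, hedged illustration of why coarse quantization plus more looks can win under a fixed bit budget, the sketch below combines two textbook approximations (multilook speckle averaging and the ~6 dB-per-bit uniform-quantizer rule). These are stand-ins for, not reproductions of, the paper's derived pixel-SNR relationship, and the numbers are purely illustrative.

```python
import numpy as np

def pixel_snr_db(n_looks, bits):
    """Combine multilook speckle averaging with the ~6 dB-per-bit quantizer rule.
    Both are textbook approximations, not the paper's derived relationship."""
    speckle_gain_db = 10 * np.log10(n_looks)   # speckle power reduced by averaging looks
    quant_snr_db = 6.02 * bits                 # uniform-quantizer rule of thumb
    # Add the two noise contributions as powers (speckle normalized to 0 dB at 1 look)
    noise = 10 ** (-speckle_gain_db / 10) + 10 ** (-quant_snr_db / 10)
    return -10 * np.log10(noise)

budget = 24  # total bits per resolution cell, split as n_looks * bits_per_sample
for bits in (1, 2, 3, 4, 6, 8):
    print(bits, "bits ->", round(pixel_snr_db(budget // bits, bits), 2), "dB")
```

Under these assumptions, the best pixel SNR falls at 2-3 bits per sample with the remaining budget spent on extra looks, qualitatively matching the abstract's conclusion.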
Comparative risk assessment and cessation information seeking among smokeless tobacco users.
Jun, Jungmi; Nan, Xiaoli
2018-05-01
This research examined (1) smokeless tobacco users' comparative optimism in assessing the health and addiction risks of their own product in comparison with cigarettes, and (2) the effects of comparative optimism on cessation information-seeking. A nationally representative sample from the 2015 Health Information National Trends Survey (HINTS)-FDA was employed. The analyses revealed the presence of comparative optimism in assessing both health and addiction risks among smokeless tobacco users. Comparative optimism was negatively correlated with most cessation information-seeking variables. Health bias (the health risk rating gap between the subject's own tobacco product and cigarettes) was associated with decreased intent to use cessation support. However, health bias and addiction bias (the addiction risk rating gap between the subject's own tobacco product and cigarettes) were not consistent predictors of all cessation information-seeking variables when covariates of socio-demographics and tobacco use status were included. In addition, positive correlations between health bias and past/recent cessation-information searches were observed. Optimistic biases may negatively influence cessation behaviors not only directly but also indirectly, by influencing an important moderator, cessation information-seeking. Future interventions should prioritize dispelling the comparative optimism in perceiving risks of smokeless tobacco use, as well as providing more reliable cessation information specific to smokeless tobacco users. Copyright © 2018 Elsevier Ltd. All rights reserved.
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
NASA Astrophysics Data System (ADS)
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling and flow shop scheduling. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; the parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper we address this blind selection of parameters in solving the RCPSP: we perform a sampling analysis, establish a proxy model, and ultimately solve for the optimal parameters.
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a 'sampling while optimizing' approach. In that way, important geological questions can be answered probabilistically in order to perform risk assessment in the decisions that are made.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
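Conclusion (3) refers to classical optimal (Neyman) allocation, in which each stratum's sample size is proportional to the product of stratum size and stratum standard deviation. A minimal sketch, with purely illustrative numbers rather than the study's data:

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Optimal (Neyman) allocation: stratum sample sizes proportional to N_h * S_h."""
    w = np.asarray(N_h, float) * np.asarray(S_h, float)
    n_h = n_total * w / w.sum()
    return np.maximum(1, np.rint(n_h).astype(int))  # at least one sample per stratum

# Illustrative only: three depth strata with sizes N_h and observed SDs S_h
print(neyman_allocation(N_h=[40, 35, 25], S_h=[6.0, 3.5, 1.5], n_total=30))
```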
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, multi-composition gradient elution is necessary for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity can be effectively adjusted by using a ternary mobile phase. To optimize these parameters, the retention equation of the solutes must first be obtained. Traditionally, several isocratic experiments are used to get the retention equation of each solute; however, this is time-consuming, especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution is proposed, based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to a hierarchical chromatography response function, which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute, so for a ternary mobile phase only four linear gradient runs are needed. The retention times of solutes under arbitrary mobile phase compositions can then be predicted. The initial optimal mobile phase composition is obtained by resolution mapping over all of the solutes. The hierarchical chromatography response function is used to evaluate the separation efficiencies and search for the optimal elution conditions. In the subsequent optimization, the migration distance of each solute in the column is used to decide the mobile phase composition and duration of the later steps, until all the solutes are eluted. Thus the first stepwise gradient elution conditions are predicted. If the resolution of the samples under the predicted optimal separation conditions is satisfactory, the optimization procedure stops; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted elution conditions, and new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments using the proposed method. In comparison with the traditional optimization method, the time needed to finish the optimization procedure is greatly reduced. The method has been validated by application to the separation of several samples, such as amino acid derivatives and aromatic amines, for which satisfactory separations were obtained with the predicted resolution.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Mandel, Ellen D; North, Shannon
2017-10-01
The PA profession is 50 years young. Practicing PAs and current students hail from several generational categories ranging from Builders to Generation Z. This article reviews how different generations may have experienced PA program expansion, professional identity, state licensing, and prescription delegation. The authors sampled a cohort of PA program applicants about their views on what evokes optimism and concern for the PA profession. These themes mirror the recently paved professional road, while posing the all-important question: What construction lies on the horizon?
Wen Lin; Asko Noormets; John S. King; Ge Sun; Steve McNulty; Jean-Christophe Domec; Lucas Cernusak
2017-01-01
Stable isotope ratios (δ13C and δ18O) of tree-ring α-cellulose are important tools in paleoclimatology, ecology, plant physiology and genetics. The Multiple Sample Isolation System for Solids (MSISS) was a major advance in the tree-ring α-cellulose extraction methods, offering greater throughput and reduced labor input compared to traditional alternatives. However, the...
NASA Astrophysics Data System (ADS)
Hoch, Jeffrey C.
2017-10-01
Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development.
PubChem3D: Conformer generation
2011-01-01
Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determine optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for the primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever-increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value of a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value based on the relative accuracy of reproducing bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
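Structurally, the regression described in the Results is an ordinary least-squares fit of ensemble RMSD accuracy on size and flexibility. A bare-bones sketch of that structure follows; the coefficients come from the caller's own data, and nothing here reproduces the published PubChem3D fit:

```python
import numpy as np

def fit_rmsd_model(size, rotors, rmsd):
    """Ordinary least squares: predict ensemble RMSD accuracy from non-hydrogen
    atom count ("size") and effective rotor count ("flexibility")."""
    A = np.column_stack([np.ones(len(size)), size, rotors])
    coef, *_ = np.linalg.lstsq(A, np.asarray(rmsd, float), rcond=None)
    return coef  # intercept, size coefficient, rotor coefficient
```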
Sasse, Alexander; de Vries, Sjoerd J; Schindler, Christina E M; de Beauchêne, Isaure Chauvot; Zacharias, Martin
2017-01-01
Protein-protein docking protocols aim to predict the structures of protein-protein complexes based on the structures of the individual partners. Docking protocols usually include several steps of sampling, clustering, refinement and re-scoring. The scoring step is one of the bottlenecks in the performance of many state-of-the-art protocols, and the performance of scoring functions depends on the quality of the generated structures and its coupling to the sampling algorithm. A tool kit, GRADSCOPT (GRid Accelerated Directly SCoring OPTimizing), was designed to allow rapid development and optimization of different knowledge-based scoring potentials for specific objectives in protein-protein docking. Different atomistic and coarse-grained potentials can be created by a grid-accelerated directly scoring dependent Monte-Carlo annealing or by a linear regression optimization. We demonstrate that the scoring functions generated by our approach are similar to or even outperform state-of-the-art scoring functions for predicting near-native solutions. Of additional importance, we find that potentials specifically trained to identify the native bound complex perform rather poorly on identifying acceptable or medium-quality (near-native) solutions. In contrast, atomistic long-range contact potentials can increase the average fraction of near-native poses by up to a factor of 2.5 in the best-scored 1% of decoys (compared to existing scoring functions), emphasizing the need for specific docking potentials for different steps in the docking protocol.
Ahlawat, Sonika; Sharma, Rekha; Maitra, A.; Roy, Manoranjan; Tantia, M.S.
2014-01-01
New, quick, and inexpensive methods for genotyping novel caprine Fec gene polymorphisms through tetra-primer ARMS PCR were developed in the present investigation. Single nucleotide polymorphism (SNP) genotyping needs to be attempted to establish association between the identified mutations and traits of economic importance. In the current study, we have successfully genotyped three new SNPs identified in caprine fecundity genes viz. T(-242)C (BMPR1B), G1189A (GDF9) and G735A (BMP15). Tetra-primer ARMS PCR protocol was optimized and validated for these SNPs with short turn-around time and costs. The optimized techniques were tested on 158 random samples of Black Bengal goat breed. Samples with known genotypes for the described genes, previously tested in duplicate using the sequencing methods, were employed for validation of the assay. Upon validation, complete concordance was observed between the tetra-primer ARMS PCR assays and the sequencing results. These results highlight the ability of tetra-primer ARMS PCR in genotyping of mutations in Fec genes. Any associated SNP could be used to accelerate the improvement of goat reproductive traits by identifying high prolific animals at an early stage of life. Our results provide direct evidence that tetra-primer ARMS-PCR is a rapid, reliable, and cost-effective method for SNP genotyping of mutations in caprine Fec genes. PMID:25606428
Chen, Shasha; Zeng, Zhi; Hu, Na; Bai, Bo; Wang, Honglun; Suo, Yourui
2018-03-01
Lycium ruthenicum Murr. (LR) is a functional food that plays an important role in anti-oxidation due to its high level of phenolic compounds. This study aims to optimize ultrasound-assisted extraction (UAE) of phenolic compounds and the antioxidant activities of the obtained extracts from LR using response surface methodology (RSM). A four-factor, three-level Box-Behnken design (BBD) was employed to examine the following extraction parameters: extraction time (X1), ultrasonic power (X2), solvent-to-sample ratio (X3) and solvent concentration (X4). The analysis of variance (ANOVA) results revealed that the solvent-to-sample ratio had a significant influence on all responses, while the extraction time had no statistically significant effect on phenolic compounds. The optimum values of the combination of phenolic compounds and antioxidant activities were obtained for X1 = 30 min, X2 = 100 W, X3 = 40 mL/g, and X4 = 33% (v/v). Five phenolic acids, including chlorogenic acid, caffeic acid, syringic acid, p-coumaric acid and ferulic acid, were analyzed by HPLC. Our results indicate that optimizing the extraction is vital for the quantification of phenolic compounds and antioxidant activity in LR, which may contribute to large-scale industrial applications and future pharmacological research. Copyright © 2017 Elsevier Ltd. All rights reserved.
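At the fitting stage, RSM over a Box-Behnken design reduces to least squares on a full second-order polynomial in the coded factors. A generic sketch of that step, with a placeholder design matrix and responses rather than this study's data:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full second-order response surface
    y = b0 + sum_i bi*xi + sum_i bii*xi**2 + sum_{i<j} bij*xi*xj
    to coded factor settings X (rows: runs, columns: factors)."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]                                    # intercept
    cols += [X[:, i] for i in range(k)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]               # quadratic terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

Optimum settings such as those reported above are then typically located at the stationary point of the fitted surface, or by a grid search over the fitted model.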
Villarroel, Mario; Reyes, Carla; Hazbun, Julia; Karmelic, Julia
2007-03-01
Resistant starch (RS) Hi Maize 260 and Sphagnum magellanicum moss (SM), both natural resources rich in total dietary fiber, and defatted hazelnut flour (DHN) as a protein resource were used in the development of a pastry product (queque) with functional characteristics. Taguchi methodology was utilized in the optimization process, using the orthogonal array L9(3^4) with four control factors: RS, SM, DHN and Master Gluten 4000 (MG), three factor levels and nine experimental trials. The best result for Sensory Quality (SQ) and signal-to-noise ratio (S/N) was obtained by combining the lowest levels of the independent variables. Main-effect (average effect of each factor) analysis and ANOVA showed that SM and DHN were the control factors with a significant influence (p<0.05) on the SQ, with a relative contribution of 83%. The total dietary fiber (8.7%) and protein (7.2%) values are worth emphasizing, the former due to the presence of RS and SM. A shelf-life study showed that the sensory characteristics of flavour, appearance and texture were not affected when samples were stored at refrigerated temperatures, but were at 20 degrees C; flavour in particular retained a good preference rating throughout the storage period. Samples of the optimized cakes showed very good results when submitted to a hedonic test, with 100% favorable consumer opinions.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim of filling this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that a short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10(5) in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
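A minimal sketch of the POD-plus-pivoted-QR idea described above, assuming a snapshot matrix with spatial locations in rows and snapshots in columns; the names are illustrative and this is not the authors' code:

```python
import numpy as np
from scipy.linalg import qr, svd

def qr_pivot_sensors(X, r):
    """Select r point sensors via pivoted QR on the leading r POD modes of the
    snapshot matrix X (rows: spatial locations, columns: snapshots)."""
    U = svd(X, full_matrices=False)[0]
    Psi = U[:, :r]                         # leading POD modes
    piv = qr(Psi.T, pivoting=True)[2]      # column pivots pick well-conditioned rows
    return Psi, piv[:r]                    # modes and the r sensor locations

def reconstruct_from_sensors(x, Psi, sensors):
    """Least-squares estimate of the full state x from its sampled entries."""
    a = np.linalg.lstsq(Psi[sensors, :], x[sensors], rcond=None)[0]
    return Psi @ a
```

Swapping the POD basis for DMD modes, as the abstract discusses, changes only how the basis `Psi` is computed; the pivoted-QR selection step is unchanged.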
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
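Of the sampling strategies listed, basic Latin hypercube sampling is simple to illustrate. The sketch below is the plain (not the "Improved Distributed") variant, given for context rather than as MADS's actual implementation:

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Basic Latin hypercube sample of n points in [0, 1)^d:
    each dimension receives exactly one point per equal-width stratum."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                    # shuffle stratum order per dimension
        samples[:, j] = (perm + rng.random(n)) / n   # jitter within each stratum
    return samples
```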
A Hypothesis-Driven Approach to Site Investigation
NASA Astrophysics Data System (ADS)
Nowak, W.
2008-12-01
Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test; otherwise, false confidence in the resulting answer would be pretended. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as the new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. The basic principle is discussed and illustrated with the case of a hypothetical contaminant spill and the exceedance of critical contaminant levels at a downstream location. A tempting and important side question is whether site investigation could be tweaked towards a yes or no answer in maliciously biased campaigns by unfair formulation of the optimization objective.
Importance of sample preparation for molecular diagnosis of lyme borreliosis from urine.
Bergmann, A R; Schmidt, B L; Derler, A-M; Aberer, E
2002-12-01
Urine PCR has been used for the diagnosis of Borrelia burgdorferi infection in recent years but has been abandoned because of its low sensitivity and the irreproducibility of the results. Our study aimed to analyze technical details related to sample preparation and detection methods. Crucial for a successful urine PCR were (i) avoidance of the first morning urine sample; (ii) centrifugation at 36,000 x g; and (iii) the extraction method, with only DNAzol of the seven different extraction methods used yielding positive results with patient urine specimens. Furthermore, storage of frozen urine samples at -80 degrees C reduced the sensitivity of a positive urine PCR result obtained with samples from 72 untreated erythema migrans (EM) patients from 85% in the first 3 months to <30% after more than 3 months. Bands were detected at 276 bp on ethidium bromide-stained agarose gels after amplification by a nested PCR. The specificity of the bands was proven for 32 of 33 samples by hybridization with a GEN-ETI-K-DEIA kit and for 10 further positive amplicons by sequencing. By using all of these steps to optimize the urine PCR technique, B. burgdorferi infection could be diagnosed from urine samples of EM patients with a sensitivity (85%) substantially better than that of serological methods (50%). This improved method could be of future importance as an additional laboratory technique for the diagnosis of unclear, unrecognized borrelia infections and diseases possibly related to Lyme borreliosis.
Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen
2015-01-01
Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction; however, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for the simulation of electron microscopy images and for model-based iterative 3-D reconstruction methods; however, its values are voltage- and sample-dependent and have only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation for the intrinsic sample tilt results in horizontal structures and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve the performance of tomographic reconstruction for a wide range of samples. PMID:26433027
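The underlying Beer-Lambert tilt model couples thickness, tilt and mean free path. Below is a deliberately simplified sketch of such a fit: a constant-thickness slab, a single intrinsic tilt angle, and SciPy's least_squares. In this stripped-down, single-series version only the ratio thickness/mean-free-path and the tilt are identifiable, whereas tomoThickness adds the tilt-relationship constraints that allow the parameters to be separated; this is an illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_thickness_and_tilt(alpha_deg, log_I0_over_I):
    """Fit ln(I0/I) = (t / mfp) / cos(alpha + beta) over a tilt series, where
    t is slab thickness, mfp the inelastic mean free path and beta the
    intrinsic sample tilt about the tilt axis."""
    alpha = np.radians(np.asarray(alpha_deg, dtype=float))
    y = np.asarray(log_I0_over_I, dtype=float)

    def residuals(p):
        t_over_mfp, beta = p
        return t_over_mfp / np.cos(alpha + beta) - y

    sol = least_squares(residuals, x0=[1.0, 0.0])
    t_over_mfp, beta = sol.x
    return t_over_mfp, np.degrees(beta)
```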
Fürll, M
2016-01-01
Systematic metabolic monitoring began in German-speaking countries in the late 1960s and early 1970s, due to an increase in metabolic disorders as a cause of infertility and mastitis, and aimed at their prevention through early diagnosis. Development of a unified monitoring standard: Initiated by Rossow, Gürtler, Ehrentraut, Seidel and Furcht, a standard "metabolic monitoring in cattle production" was developed in the 1970s. It included farm analysis, clinical and biochemical controls, prophylaxis and follow-up controls. Key points were: periodic screenings of heavily loaded, healthy indicator animals 2-4 days post partum (p. p.), 2-8 weeks p. p. and 1-2 weeks ante partum; a maximum of 10 animals per group; pooled samples are useful, but individual samples are optimal; use of informative sample substrates and parameters; precise handling of specimens; expert assessment and follow-up. Metabolic controls during 1982-1989 in approximately 242,000 cows revealed means of 32.9% ketoses, 20.0% metabolic acidosis, 21.9% metabolic alkalosis, 34.2% nitrogen-metabolism disorders, 17.3% sodium deficiency and 23.7% liver disorders. Development of a metabolic profile after 1989: Reference values for higher milk yields were established, along with early diagnosis of diseases of the fat mobilization syndrome and improved early diagnosis through new indicators, including creatine kinase (CK), alkaline phosphatase (AP) with isoenzymes, acute phase proteins, cytokines, antioxidants, carnitine and lipoprotein fractions. Optimized blood and urine screenings have important advantages over milk analysis. They are an important method of health and performance stabilization through exact analysis of causes and the prevention measures derived from it. The fertility-related parameters free fatty acids, β-hydroxybutyrate, urea, inorganic phosphate, CK, AP, sodium, potassium, selenium, copper, β-carotene and net acid-base excretion proved to be a standard spectrum for screenings. These should be tested once a year per herd, if necessary as an inexpensive pooled sample for approximately 50 €.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points; the maximum perturbation distance reduces linearly with the number of iterations, and the acceptance probability also reduces exponentially with the number of iterations. R is memory-hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
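As a compact illustration of spatial simulated annealing with the MSSD criterion, the Python sketch below perturbs one point at a time over a finite candidate set with a Metropolis acceptance rule. spsann itself is an R package and is considerably more featureful (decaying perturbation distance, multiple criteria, graphical display), so this is a conceptual sketch only.

```python
import numpy as np

def mssd(sample_xy, grid_xy):
    """Mean squared shortest distance from every grid node to the nearest sample."""
    d2 = ((grid_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def spatial_anneal(candidates, n, iters=5000, t0=1.0, cooling=0.999, seed=0):
    """Minimal spatial simulated annealing over a finite set of candidate locations."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n, replace=False)
    cur = mssd(candidates[idx], candidates)
    best_idx, best_val = idx.copy(), cur
    temp = t0
    for _ in range(iters):
        prop = idx.copy()
        prop[rng.integers(n)] = rng.integers(len(candidates))   # move one point
        if len(np.unique(prop)) == n:                           # skip duplicate layouts
            new = mssd(candidates[prop], candidates)
            # Metropolis rule: always accept improvements, sometimes accept setbacks
            if new < cur or rng.random() < np.exp((cur - new) / temp):
                idx, cur = prop, new
                if cur < best_val:
                    best_idx, best_val = idx.copy(), cur
        temp *= cooling
    return best_idx, best_val

# Usage: optimize 10 sample points over a 20 x 20 grid of candidate locations
grid = np.stack(np.meshgrid(np.arange(20.0), np.arange(20.0)), axis=-1).reshape(-1, 2)
pattern, energy = spatial_anneal(grid, n=10)
```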
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
Sun, Jian-Nan; Chen, Juan; Shi, Yan-Ping
2014-07-01
A new mode of ionic liquid based dispersive liquid-liquid microextraction (IL-DLLME) is developed. In this work, [C6MIm][PF6] was chosen as the extraction solvent, and two kinds of hydrophilic ionic liquids, [EMIm][BF4] and [BSO3HMIm][OTf], functioned as the dispersive solvent, so no organic solvent was used in the whole extraction procedure. With the aid of the SO3H group, the acidic compound was extracted from the sample solution without pH adjustment. Two phenolic compounds, namely 2-naphthol and 4-nitrophenol, were chosen as the target analytes. Important parameters affecting the extraction efficiency, such as the type of hydrophilic ionic liquid, the volume ratio of [EMIm][BF4] to [BSO3HMIm][OTf], the type and volume of extraction solvent, the pH value of the sample solution, sonication time, extraction time and centrifugation time, were investigated and optimized. Under the optimized extraction conditions, the method exhibited good sensitivity, with limits of detection (LODs) of 5.5 μg L(-1) and 10.0 μg L(-1) for 4-nitrophenol and 2-naphthol, respectively. Good linearity over the concentration ranges of 24-384 μg L(-1) for 4-nitrophenol and 28-336 μg L(-1) for 2-naphthol was obtained, with correlation coefficients of 0.9998 and 0.9961, respectively. The proposed method can directly extract acidic compounds from environmental samples or even more complex sample matrices without any pH adjustment procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be accurately reconstructed via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
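The enhancement step described last, projecting the signal onto the leading PODs of a Toeplitz-structured matrix built from the data, is closely related to singular spectrum analysis. The sketch below shows that generic rank-k POD projection for a 1-D signal, using the Hankel trajectory-matrix convention (the row-reversed Toeplitz matrix); it is a hedged illustration of the enhancement idea, not the paper's acquisition pipeline.

```python
import numpy as np
from scipy.linalg import hankel

def pod_denoise(x, window, k):
    """Project a 1-D signal onto the k leading proper orthogonal modes of its
    trajectory matrix, then diagonal-average back (SSA-style rank-k denoising)."""
    x = np.asarray(x, dtype=float)
    X = hankel(x[:window], x[window - 1:])           # window x (len(x)-window+1)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]                 # rank-k approximation (leading PODs)
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(Xk.shape[0]):                     # diagonal averaging back to 1-D
        out[i:i + Xk.shape[1]] += Xk[i]
        cnt[i:i + Xk.shape[1]] += 1
    return out / cnt
```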
Optimizing Tissue Sampling for the Diagnosis, Subtyping, and Molecular Analysis of Lung Cancer
Ofiara, Linda Marie; Navasakulpong, Asma; Beaudoin, Stephane; Gonzalez, Anne Valerie
2014-01-01
Lung cancer has entered the era of personalized therapy with histologic subclassification and the presence of molecular biomarkers becoming increasingly important in therapeutic algorithms. At the same time, biopsy specimens are becoming increasingly smaller as diagnostic algorithms seek to establish diagnosis and stage with the least invasive techniques. Here, we review techniques used in the diagnosis of lung cancer including bronchoscopy, ultrasound-guided bronchoscopy, transthoracic needle biopsy, and thoracoscopy. In addition to discussing indications and complications, we focus our discussion on diagnostic yields and the feasibility of testing for molecular biomarkers such as epidermal growth factor receptor and anaplastic lymphoma kinase, emphasizing the importance of a sufficient tumor biopsy. PMID:25295226
Ghader, Masoud; Shokoufi, Nader; Es-Haghi, Ali; Kargosha, Kazem
2018-04-15
Vaccine production is a biological process in which variation in time and output is inevitable. Thus, the application of Process Analytical Technologies (PAT) is important in this regard. Headspace solid-phase microextraction (HS-SPME) coupled with GC-MS can be used as a PAT tool for process monitoring. This method is suitable for chemical profiling of volatile organic compounds (VOCs) emitted by microorganisms. Tetanus is a lethal disease caused by the Clostridium tetani (C. tetani) bacterium, and vaccination is the ultimate way to prevent it. In this paper, an SPME fiber was used to investigate the VOCs emerging from C. tetani during cultivation. Different types of VOCs, such as sulfur-containing compounds, were identified, and some of them were selected as biomarkers for bioreactor monitoring during vaccine production. In the second step, a portable dynamic air sampling (PDAS) device was used as an interface for sampling VOCs with SPME fibers. The sampling procedure was optimized by a face-centered central composite design (FC-CCD). The optimized sampling time and inlet gas flow rate were 10 min and 2 mL s-1, respectively. The PDAS was mounted in the exhaust gas line of the bioreactor, and 42 VOC samples were collected with SPME fibers over 7 days of incubation. Simultaneously, pH and optical density (OD) were monitored during cultivation and showed good correlations with the identified VOCs (>80%). This method could be used for VOC sampling from the off-gas of a bioreactor to monitor the cultivation process. Copyright © 2018. Published by Elsevier B.V.
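For readers unfamiliar with the design, a face-centered central composite design in two factors consists of the four factorial corners, four axial points on the faces of the square, and replicated center points. The sketch below generates such a design; the physical factor ranges are illustrative assumptions, not the paper's values.

```python
import itertools
import numpy as np

# Face-centered CCD in coded units for two factors: 4 factorial corners,
# 4 axial points on the faces (alpha = 1), and 3 center replicates.
corners = list(itertools.product([-1, 1], repeat=2))
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center = [(0, 0)] * 3
design = np.array(corners + axial + center, dtype=float)

# Map coded levels onto hypothetical physical ranges (illustrative only):
# sampling time 5-15 min, inlet gas flow 1-3 mL/s.
lo, hi = np.array([5.0, 1.0]), np.array([15.0, 3.0])
runs = lo + (design + 1.0) / 2.0 * (hi - lo)
print(runs)
```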
Wu, Ao-lin; Li, Min; Zhang, Shou-wen; Zhao, Ji-feng; Liu, Xiang; Wang, Chang-hua; Wang, Xiao-yun; Zhong, Guo-yue
2015-06-01
In order to find the optimal topographical factors for regionalization, the content of podophyllotoxin in 116 Sinopodophyllum hexandrum samples collected from Sichuan, Qinghai, Gansu, Tibet, Yunnan and Shaanxi provinces was determined. Using mathematical statistics and GIS-based geographical spatial analysis, the relationship between podophyllotoxin content and influencing factors, including altitude gradient and slope position, was analyzed. It was found that the optimal altitude was 2 800 m to 3 600 m, with a north, northeast or northwest slope aspect and a slope of 12 degrees to 65 degrees giving a high suitability degree. Considering artificial planting, the suitable planting area for S. hexandrum was confirmed. The topographical factors are important for S. hexandrum regionalization but have hardly any effect on podophyllotoxin content. The results of the study provide an important scientific basis for the development of S. hexandrum production. However, since many factors affect the suitability index and podophyllotoxin content of S. hexandrum, it is necessary to consider other factors such as climate and soil in the exploitation and protection of S. hexandrum.
Obsessive Passion: A Compensatory Response to Unsatisfied Needs.
Lalande, Daniel; Vallerand, Robert J; Lafrenière, Marc-André K; Verner-Filion, Jérémie; Laurent, François-Albert; Forest, Jacques; Paquet, Yvan
2017-04-01
The present research investigated the role of two sources of psychological need satisfaction (inside and outside a passionate activity) as determinants of harmonious (HP) and obsessive (OP) passion. Four studies were carried out with different samples of young and middle-aged adults (e.g., athletes, musicians; total N = 648). Different research designs (cross-sectional, mixed, longitudinal) were also used. Results showed that only a rigid engagement in a passionate activity (OP) was predicted by low levels of need satisfaction outside the passionate activity (in an important life context or in life in general), whereas both OP and a more favorable and balanced type of passion, HP, were positively predicted by need satisfaction inside the passionate activity. Further, OP led to negative outcomes, and HP predicted positive outcomes. These results suggest that OP may represent a form of compensatory striving for psychological need satisfaction. It appears important to consider two distinct sources of need satisfaction, inside and outside the passionate activity, when investigating determinants of optimal and less optimal forms of activity engagement. © 2015 Wiley Periodicals, Inc.
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
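The maxi-min design idea can be illustrated with a deliberately simple surrogate: parameterize a fluence profile with Gaussian basis functions and maximize the minimum of a toy detectability index over a few sample locations. The sketch below is a 1D caricature in which SciPy's differential evolution stands in for CMA-ES and the detectability model is invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

x = np.linspace(0.0, 1.0, 64)                   # 1D view/detector coordinate (toy)
centers = np.linspace(0.1, 0.9, 5)
basis = np.exp(-0.5 * ((x[:, None] - centers) / 0.15) ** 2)  # Gaussian basis
locs = [10, 32, 54]                             # sample locations where d' is evaluated
atten = 1.0 + 0.8 * np.sin(np.pi * x)           # surrogate object attenuation (invented)

def neg_min_detectability(w):
    fluence = basis @ w
    if fluence.min() <= 0:
        return 1e6                              # keep the fluence physical
    d = np.sqrt(fluence / atten)                # invented surrogate for d'
    return -min(d[i] for i in locs)             # maxi-min: minimize the negative minimum

res = differential_evolution(neg_min_detectability,
                             bounds=[(0.01, 2.0)] * len(centers), seed=0)
print("optimal basis-function weights:", np.round(res.x, 3))
```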
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy
Wen, Hui; Xie, Weixin; Pei, Jihong
2016-01-01
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs the adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737
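The cascade structure — an RBF hidden layer whose outputs feed a BP network — can be mimicked with standard tools. In the sketch below, k-means centers stand in for the paper's potential-function-based adaptive node placement (which is not reproduced), and scikit-learn's MLPClassifier plays the role of the BP stage.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# RBF stage: k-means centers stand in for the adaptive node placement.
k, sigma = 12, 0.5
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xtr).cluster_centers_

def rbf_features(Z):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# BP stage: a small MLP trained on the RBF-mapped features (the cascade).
bp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
bp.fit(rbf_features(Xtr), ytr)
print("test accuracy:", bp.score(rbf_features(Xte), yte))
```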
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
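A minimal numerical version of the ENBS calculation, under a toy normal-normal conjugate model with an illustrative population size and per-patient cost (all assumptions, not the zanamivir model), might look as follows: the expected value of sample information is estimated by preposterior Monte Carlo and the sampling cost is subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: incremental net benefit per patient theta ~ N(mu0, sd0^2);
# a trial of size n observes its mean with sampling sd sigma.
mu0, sd0, sigma = 500.0, 1500.0, 8000.0
population, cost_per_patient = 1e5, 1000.0     # illustrative assumptions

def enbs(n, draws=20000):
    theta = rng.normal(mu0, sd0, draws)               # prior draws
    xbar = rng.normal(theta, sigma / np.sqrt(n))      # preposterior trial means
    w = (sigma ** 2 / n) / (sigma ** 2 / n + sd0 ** 2)
    post_mean = w * mu0 + (1 - w) * xbar              # conjugate posterior mean
    # Per-patient value of deciding after the trial vs deciding now.
    evsi = np.maximum(post_mean, 0).mean() - max(mu0, 0.0)
    return population * evsi - cost_per_patient * n   # expected net benefit of sampling

sizes = [25, 50, 100, 200, 400, 800]
print({n: round(enbs(n)) for n in sizes})
```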
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
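A common way to formalize "minimize the variance of parameter estimates" is D-optimal design on the Fisher information matrix. The sketch below applies a greedy search (a simplification of the paper's quantum-inspired evolutionary algorithm) to a toy two-parameter exponential-decay readout.

```python
import numpy as np

# Toy pathway readout y(t) = A * exp(-k t) with parameters theta = (A, k).
A, k, sigma = 2.0, 0.8, 0.1
grid = np.linspace(0.1, 10.0, 200)       # candidate sampling times

def sensitivities(t):
    # Partial derivatives dy/dA and dy/dk at time t.
    return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

def log_det_fim(times):
    S = np.array([sensitivities(t) for t in times])
    F = S.T @ S / sigma ** 2             # Fisher information for i.i.d. Gaussian noise
    sign, logdet = np.linalg.slogdet(F)
    return logdet if sign > 0 else -np.inf

# Greedy D-optimal selection of 5 time points (maximizing information
# corresponds to minimizing estimator variance).
chosen = []
for _ in range(5):
    chosen.append(max(grid, key=lambda t: log_det_fim(chosen + [t])))
print("selected time points:", np.round(sorted(chosen), 2))
```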
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes and on optimal spatial estimation techniques. One of the most important results of random field theory for the physical sciences is its rationalization of correlations in the spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range of subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.
Paper-based assay for red blood cell antigen typing by the indirect antiglobulin test.
Yeow, Natasha; McLiesh, Heather; Guan, Liyun; Shen, Wei; Garnier, Gil
2016-07-01
A rapid and simple paper-based elution assay for red blood cell antigen typing by the indirect antiglobulin test (IAT) was established. This allows blood to be typed using IgG antibodies for the important blood groups for which IgM antibodies do not exist. Red blood cells incubated with IgG anti-D were washed with saline and spotted onto the paper assay pre-treated with anti-IgG. The blood spot was eluted with an elution buffer solution in a chromatography tank. Positive samples were identified by the agglutinated and fixed red blood cells on the original spotting area, while red blood cells from negative samples eluted completely away from the spot of origin. Optimum concentrations of both anti-IgG and anti-D were identified so as to eliminate the washing step after the incubation phase. Based on this no-washing procedure, the critical variables were investigated to establish the optimal conditions for the paper-based assay. Two hundred and ten donor blood samples were tested under optimal conditions for the paper test with anti-D and anti-Kell, and positive and negative samples were clearly distinguished. This assay opens up new applications of the IAT on paper, including antibody detection and blood donor-recipient crossmatching, and extends its use to non-blood-typing applications with IgG antibody-based diagnostics.
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective To optimize the application of the nested PCR method for the detection of malaria in working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix solution, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and specific primers for P. ovale on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and malaria examination samples were tested by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used simultaneously to detect the same positive malarial blood samples, the results indicated that the PCR products of the two methods had no significant difference; with the optimized method, however, non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity also increased. The detection results for 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity (P > 0.05), but there was a statistically significant difference in specificity (P < 0.05). Conclusion The optimized PCR can improve the specificity without reducing the sensitivity compared with the routine nested PCR; it can also save costs and increase the efficiency of malaria detection with fewer experimental steps.
Lim, Natalie Y. N.; Roco, Constance A.; Frostegård, Åsa
2016-01-01
Adequate comparisons of DNA and cDNA libraries from complex environments require methods for co-extraction of DNA and RNA due to the inherent heterogeneity of such samples, or risk bias caused by variations in lysis and extraction efficiencies. Still, there are few methods and kits allowing simultaneous extraction of DNA and RNA from the same sample, and the existing ones generally require optimization. The proprietary nature of kit components, however, makes modifications of individual steps in the manufacturer’s recommended procedure difficult. Surprisingly, enzymatic treatments are often performed before purification procedures are complete, which we have identified here as a major problem when seeking efficient genomic DNA removal from RNA extracts. Here, we tested several DNA/RNA co-extraction commercial kits on inhibitor-rich soils, and compared them to a commonly used phenol-chloroform co-extraction method. Since none of the kits/methods co-extracted high-quality nucleic acid material, we optimized the extraction workflow by introducing small but important improvements. In particular, we illustrate the need for extensive purification prior to all enzymatic procedures, with special focus on the DNase digestion step in RNA extraction. These adjustments led to the removal of enzymatic inhibition in RNA extracts and made it possible to reduce genomic DNA to below detectable levels as determined by quantitative PCR. Notably, we confirmed that DNase digestion may not be uniform in replicate extraction reactions, thus the analysis of “representative samples” is insufficient. The modular nature of our workflow protocol allows optimization of individual steps. It also increases focus on additional purification procedures prior to enzymatic processes, in particular DNases, yielding genomic DNA-free RNA extracts suitable for metatranscriptomic analysis. PMID:27803690
Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud
2018-01-01
Background: Gene expression data are characteristically high dimensional with a small sample size in contrast to the feature size and variability inherent in biological processes that contribute to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and decrease the outlier sensitivity of the system to increase the ability to generalize the classifier. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the abilities of the algorithm by finding the best parameters for the classifier during the training phase without the need for trial-and-error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results have been demonstrated for the proposed algorithm. The classification accuracy for leukemia data is 100%, for colon cancer is 96.67% and for breast cancer is 98%. The results show that the best kernel used in training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier with no user interface. PMID:29535919
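The wrapper idea — evolve binary gene masks and score each mask by the cross-validated accuracy of an RBF-kernel SVM — can be sketched compactly. The code below uses a plain genetic algorithm in place of the paper's hybrid PSO/GA and omits the fuzzy sample weighting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=80, n_features=200, n_informative=10,
                           random_state=0)          # many "genes", few samples

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF kernel, as in the paper
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Plain GA over binary gene masks: truncation selection, uniform crossover,
# bit-flip mutation.
pop = (rng.random((20, X.shape[1])) < 0.05).astype(int)
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # keep the best half
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)
        flip = rng.random(X.shape[1]) < 0.01
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(kids)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("genes selected:", int(best.sum()), "| CV accuracy:", round(fitness(best), 3))
```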
Spectral CT of the extremities with a silicon strip photon counting detector
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.
2015-03-01
Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.
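At its core, dual-energy two-material decomposition is a per-pixel linear solve: measured attenuation in two energy bins against calibrated coefficients of the two basis materials. The numbers below are illustrative placeholders, not calibrated values.

```python
import numpy as np

# Illustrative mass attenuation coefficients (cm^2/g) for two basis
# materials (bone, iodine) at a low and a high energy bin -- NOT calibrated.
M = np.array([[0.60, 4.50],    # low bin:  [bone, iodine]
              [0.30, 1.80]])   # high bin: [bone, iodine]

# Measured line integrals (attenuation) for one pixel in the two bins.
p = np.array([0.045, 0.021])

# Basis-material amounts from the 2x2 system M @ c = p.
c = np.linalg.solve(M, p)
print("bone-equivalent:", round(c[0], 4), "| iodine-equivalent:", round(c[1], 4))
```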
Williams, P Stephen
2016-05-01
Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2017-04-01
Measuring toxicity is an important step in drug development. Nevertheless, the current experimental methods used to estimate drug toxicity are expensive and time-consuming, and are therefore not suitable for large-scale evaluation of drug toxicity in the early stage of drug development. Hence, there is a high demand for computational models that can predict drug toxicity risks. In this study, we used a dataset that consists of 553 drugs that are biotransformed in the liver. The toxic effects calculated for the current data were the mutagenic, tumorigenic, irritant and reproductive effects. Each drug is represented by 31 chemical descriptors (features). The proposed model consists of three phases. In the first phase, the most discriminative subset of features is selected using rough-set-based methods to reduce the classification time while improving the classification performance. In the second phase, different sampling methods, such as Random Under-Sampling, Random Over-Sampling, the Synthetic Minority Oversampling Technique (SMOTE), BorderLine SMOTE and Safe Level SMOTE, are used to solve the problem of the imbalanced dataset. In the third phase, a Support Vector Machines (SVM) classifier is used to classify an unknown drug as toxic or non-toxic. SVM parameters such as the penalty parameter and kernel parameter have a great impact on the classification accuracy of the model. In this paper, the Whale Optimization Algorithm (WOA) is proposed to optimize the parameters of the SVM so that the classification error can be reduced. The experimental results showed that the proposed model achieved high sensitivity for all toxic effects. Overall, the high sensitivity of the WOA+SVM model indicates that it could be used for the prediction of drug toxicity in the early stage of drug development. Copyright © 2017 Elsevier Inc. All rights reserved.
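A compact version of WOA-tuned SVM hyperparameters, searching over log10(C) and log10(gamma) with the encircling and spiral updates, might look like the sketch below. It is a simplified rendition (e.g., the |A| test uses the vector magnitude) run on a standard dataset rather than the paper's drug data.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
lb, ub = np.array([-2.0, -5.0]), np.array([3.0, 1.0])  # bounds on log10(C), log10(gamma)

def fitness(pos):
    clf = SVC(C=10 ** pos[0], gamma=10 ** pos[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n_whales, iters = 8, 15
whales = rng.uniform(lb, ub, size=(n_whales, 2))
scores = np.array([fitness(w) for w in whales])
best, best_score = whales[scores.argmax()].copy(), scores.max()

for t in range(iters):
    a = 2 - 2 * t / iters                        # "a" decreases linearly from 2 to 0
    for i in range(n_whales):
        r = rng.random(2)
        A, C = 2 * a * r - a, 2 * rng.random(2)
        if rng.random() < 0.5:                   # encircling / random-search phase
            target = best if np.linalg.norm(A) < 1 else whales[rng.integers(n_whales)]
            whales[i] = target - A * np.abs(C * target - whales[i])
        else:                                    # spiral (bubble-net) phase
            l = rng.uniform(-1, 1)
            D = np.abs(best - whales[i])
            whales[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        whales[i] = np.clip(whales[i], lb, ub)
        s = fitness(whales[i])
        if s > best_score:
            best, best_score = whales[i].copy(), s

print("best log10(C), log10(gamma):", np.round(best, 2),
      "| CV accuracy:", round(best_score, 3))
```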
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
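The in silico arm of the study can be miniaturized: sample a synthetic LAT map at increasing densities, re-interpolate, and track the reconstruction error. The map below is an invented planar-plus-curved activation pattern, not the monolayer simulation used in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic 4 x 4 cm LAT map: a planar wave plus a curved component (in ms).
g = np.linspace(0.0, 4.0, 80)
GX, GY = np.meshgrid(g, g)
true_lat = 20 * GX + 15 * np.sin(GY)

for density in [0.25, 0.5, 1.0, 2.0, 4.0]:        # points per cm^2 over 16 cm^2
    n_pts = max(4, int(density * 16))
    pts = rng.uniform(0.0, 4.0, size=(n_pts, 2))
    vals = 20 * pts[:, 0] + 15 * np.sin(pts[:, 1])
    recon = griddata(pts, vals, (GX, GY), method="linear")
    ok = ~np.isnan(recon)                         # griddata leaves NaN outside the hull
    rmse = np.sqrt(np.mean((recon[ok] - true_lat[ok]) ** 2))
    print(f"{density:4.2f} points/cm^2 -> RMSE {rmse:6.2f} ms")
```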
Optimal updating magnitude in adaptive flat-distribution sampling
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery
2017-11-01
We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
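A minimal Wang-Landau sampler with the inverse-time schedule can be written in a few lines. The sketch below runs single-bin updates on a toy double-well landscape and caps the updating magnitude at nbins/t, omitting the usual flat-histogram halving stage for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 32 discrete states with a double-well potential of mean force.
nbins = 32
xs = np.linspace(-2.0, 2.0, nbins)
U = 4.0 * (xs ** 2 - 1.0) ** 2

lnf, bias, state = 1.0, np.zeros(nbins), 0
for step in range(1, 200001):
    prop = (state + rng.integers(-1, 2)) % nbins            # local move
    dE = (U[prop] + bias[prop]) - (U[state] + bias[state])
    if rng.random() < np.exp(-min(dE, 50.0)):               # Metropolis on biased energy
        state = prop
    bias[state] += lnf                                      # single-bin WL update
    lnf = min(lnf, nbins / step)   # inverse-time schedule (flat-histogram stage omitted)

# At convergence the bias offsets the PMF, so -bias recovers U up to a constant.
pmf = -bias
pmf -= pmf.min()
print("max |recovered PMF - U|:", round(float(np.abs(pmf - (U - U.min())).max()), 2))
```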
Solid-Phase Extraction (SPE): Principles and Applications in Food Samples.
Ötles, Semih; Kartal, Canan
2016-01-01
Solid-phase extraction (SPE) is a sample preparation method that is practised in numerous application fields due to its many advantages compared with other traditional methods. SPE was invented as an alternative to liquid/liquid extraction and eliminated multiple disadvantages, such as the use of large amounts of solvent, extended operation time/procedure steps, potential sources of error, and high cost. Moreover, SPE can optionally be combined with other analytical methods and sample preparation techniques. The SPE technique is a useful tool for many purposes owing to its versatility. Isolation, concentration, purification and clean-up are the main approaches in the practice of this method. Food matrices are complicated and can take different physical forms, such as solid, viscous or liquid. Therefore, the sample preparation step plays a particularly important role in the determination of specific compounds in foods. SPE offers many opportunities not only for the analysis of a large diversity of food samples but also for optimization and advances. This review aims to provide a comprehensive overview of the basic principles of SPE and its applications for many analytes in food matrices.
Dispositional Optimism and Incidence of Cognitive Impairment in Older Adults
Gawronski, Katerina A.B.; Kim, Eric S.; Langa, Kenneth M.; Kubzansky, Laura D.
2017-01-01
Objective Higher levels of optimism have been linked with positive health behaviors, biological processes, and health conditions that are potentially protective against cognitive impairment in older adults. However, the association between optimism and cognitive impairment has not been directly examined. We examined whether optimism is associated with incident cognitive impairment in older adults. Methods Data are from the Health and Retirement Study, a nationally representative sample of older U.S. adults. Using multiple logistic regression models, we prospectively assessed whether optimism was associated with incident cognitive impairment in 4,624 adults aged 65+ over a four-year period. Results Among the 4,624 participants, 497 respondents developed cognitive impairment over the four-year follow-up (306 women and 191 men). Higher optimism was associated with decreased risk of incident cognitive impairment. When controlling for sociodemographic factors, each standard deviation increase in optimism was associated with reduced odds (OR=0.72, 95% CI, 0.62–0.83) of becoming cognitively impaired. A dose-response relationship was observed. Compared to those with the lowest levels of optimism, people with moderate levels of optimism had somewhat reduced odds of cognitive impairment (OR=0.79, 95% CI, 0.59–1.03), while people with the highest levels of optimism had the lowest odds of cognitive impairment (OR=0.53, 95% CI, 0.35–0.78). These associations remained after adjusting for health behaviors, biological factors, and psychological covariates that could either confound the association of interest or serve on the pathway. Conclusions Optimism was prospectively associated with a reduced likelihood of becoming cognitively impaired. If these results are replicated, the data suggest that potentially modifiable aspects of positive psychological functioning such as optimism play an important role in maintaining cognitive functioning. Thus, these factors may prove worthy of additional clinical and scientific attention. PMID:27284699
The hydroxyl-functionalized magnetic particles for purification of glycan-binding proteins.
Sun, Xiuxuan; Yang, Ganglong; Sun, Shisheng; Quan, Rui; Dai, Weiwei; Li, Bin; Chen, Chao; Li, Zheng
2009-12-01
Glycan-protein interactions play important roles in biological processes. Although there are methods such as glycan arrays that can elucidate recognition events between carbohydrates and proteins and screen for important glycan-binding proteins, a simple and effective separation method to purify them from complex samples has been lacking. In proteomics studies, fractionation of samples can help to reduce their complexity and to enrich specific classes of proteins for subsequent downstream analyses. Herein, a rapid and simple method for the purification of glycan-binding proteins from proteomic samples was developed using hydroxyl-coated magnetic particles coupled with underivatized carbohydrates. First, epoxy-coated magnetic particles were hydroxyl-functionalized with 4-hydroxybenzhydrazide; the carbohydrates were then efficiently immobilized on the hydroxyl-functionalized surface of the magnetic particles through formation of a glycosidic bond with the hemiacetal group at the reducing end of suitable carbohydrates via condensation. All conditions of this method were optimized. The magnetic particle-carbohydrate conjugates were used to purify glycan-binding proteins from human serum, and the fractionated glycan-binding protein population was displayed by SDS-PAGE. The results showed that the amount of mannose coupled to 1 mg of magnetic particles in acetate buffer (pH 5.4) was 10 micromol. The fractionated glycan-binding protein population in human serum could be eluted from the magnetic particle-mannose conjugates with 0.1% SDS. The methodology could work together with glycan microarrays for the screening and purification of important GBPs from complex protein samples.
Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A
2007-01-15
Two digestion procedures were tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES): wet digestion with HNO3/H2SO4 and with HNO3/H2SO4/H2O2. The latter is recommended for better analyte recoveries (relative error < 11%). Two calibration procedures (aqueous standards and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO3, H2SO4 and H2O2 volumes, digestion time, pre-digestion time, hot-plate temperature and sample weight) were used to optimize the sample digestion procedure. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO3 volume, the H2O2 volume and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions), considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD < 0.74 μg g-1), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error < 11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
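The eight-run Plackett-Burman design for seven factors is generated by cyclic shifts of a standard generator row plus a closing row of low levels, as in this sketch (the columns would be assigned to the seven factors listed in the abstract):

```python
import numpy as np

# Eight-run Plackett-Burman design for seven factors: cyclic shifts of the
# standard generator row, plus a closing row of all low (-1) levels.
# Columns: HNO3 vol., H2SO4 vol., H2O2 vol., digestion time, pre-digestion
# time, hot-plate temperature, sample weight.
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.vstack([np.roll(gen, i) for i in range(7)] + [-np.ones(7, dtype=int)])
print(design)
```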
NASA Astrophysics Data System (ADS)
Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal
The search for low-energy β contamination in industrial environments requires the use of liquid scintillation counting. This indirect measurement method requires fine control from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and the definition of energy windows, using the maximization of a figure of merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
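Window optimization by figure-of-merit maximization is a simple scan: for each candidate energy window, compute the counting efficiency E and background rate B and maximize FOM = E²/B. The spectra below are synthetic shapes for illustration only.

```python
import numpy as np

# Synthetic spectra (illustrative shapes only): a low-energy beta emitter and
# a flatter background, binned over 200 energy channels.
ch = np.arange(200)
source = np.exp(-((ch - 40) / 25.0) ** 2)
source /= source.sum()                         # per-channel efficiency (sums to 1)
background = 0.05 + 0.02 * np.exp(-ch / 80.0)  # per-channel background rate

best_fom, best_win = 0.0, None
for lo in range(0, 180, 5):
    for hi in range(lo + 10, 200, 5):
        E = source[lo:hi].sum()                # efficiency inside the window
        B = background[lo:hi].sum()            # background inside the window
        fom = E ** 2 / B                       # figure of merit E^2 / B
        if fom > best_fom:
            best_fom, best_win = fom, (lo, hi)
print("optimal window (channels):", best_win, "| FOM:", round(best_fom, 3))
```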
Kennedy, Gordon J; Afeworki, Mobae; Calabro, David C; Chase, Clarence E; Smiley, Randolph J
2004-06-01
Distinct hydrogen species are present in important inorganic solids such as zeolites, silicoaluminophosphates (SAPOs), mesoporous materials, amorphous silicas, and aluminas. These H species include hydrogens associated with acidic sites such as Al(OH)Si, non-framework aluminum sites, silanols, and surface functionalities. Direct and quantitative methodology to identify, measure, and monitor these hydrogen species are key to monitoring catalyst activity, optimizing synthesis conditions, tracking post-synthesis structural modifications, and in the preparation of novel catalytic materials. Many workers have developed several techniques to address these issues, including 1H MAS NMR (magic-angle spinning nuclear magnetic resonance). 1H MAS NMR offers many potential advantages over other techniques, but care is needed in recognizing experimental limitations and developing sample handling and NMR methodology to obtain quantitatively reliable data. A simplified approach is described that permits vacuum dehydration of multiple samples simultaneously and directly in the MAS rotor without the need for epoxy, flame sealing, or extensive glovebox use. We have found that careful optimization of important NMR conditions, such as magnetic field homogeneity and magic angle setting are necessary to acquire quantitative, high-resolution spectra that accurately measure the concentrations of the different hydrogen species present. Details of this 1H MAS NMR methodology with representative applications to zeolites, SAPOs, M41S, and silicas as a function of synthesis conditions and post-synthesis treatments (i.e., steaming, thermal dehydroxylation, and functionalization) are presented.
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image-gathering and image-processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled-image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image-gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image-gathering optical transfer function is also investigated, and the results of a sensitivity analysis are shown.
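The ∇²G characteristic is the familiar Laplacian-of-Gaussian filter; the circularly symmetric textbook version (not the paper's optimal, lattice-aware detector) can be applied with SciPy as follows.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Toy scene: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

# Laplacian-of-Gaussian response; the zero crossings between its positive and
# negative lobes trace the edges of the square.
resp = gaussian_laplace(img, sigma=2.0)
edges = np.abs(resp) > 0.5 * np.abs(resp).max()
print("strong-response pixels:", int(edges.sum()))
```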
Method Development in Forensic Toxicology.
Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona
2017-01-01
In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Optimizing Fungal DNA Extraction Methods from Aerosol Filters
NASA Astrophysics Data System (ADS)
Jimenez, G.; Mescioglu, E.; Paytan, A.
2016-12-01
Fungi and fungal spores can be picked up from terrestrial ecosystems, transported long distances, and deposited into marine ecosystems. It is important to study dust-borne fungal communities because they can stay viable and affect the ambient microbial populations, which are key players in biogeochemical cycles. One of the challenges of studying dust-borne fungal populations is that aerosol samples contain low biomass, making it very difficult to extract good-quality DNA. The aim of this project was to increase DNA yield by optimizing DNA extraction methods. We tested aerosol samples collected from Haifa, Israel (polycarbonate filter), Monterey Bay, CA (quartz filter) and Bermuda (quartz filter). Using the Qiagen DNeasy Plant Kit, we tested the effect of altering bead-beating times and incubation times, adding three freeze-thaw steps, initially washing the filters with buffers for various lengths of time before using the kit, and adding a step with 30 minutes of sonication in 65 °C water. Adding three freeze-thaw steps, adding a sonication step, washing with phosphate-buffered saline overnight, and increasing the incubation time to two hours, in that order, resulted in the highest increase in DNA for the samples from Israel (polycarbonate filter). The DNA yield of the samples from Monterey Bay (quartz filter) increased about fivefold when washing with buffers overnight (phosphate-buffered saline and potassium phosphate buffer), adding a sonication step, and adding three freeze-thaw steps. Samples collected in Bermuda (quartz filter) had the highest increase in DNA yield from increasing the incubation time to 2 hours, increasing the bead-beating time to 6 minutes, and washing with buffers overnight (phosphate-buffered saline and potassium phosphate buffer). Our results show that DNA yield can be increased by altering various steps of the Qiagen DNeasy Plant Kit protocol, but different types of filters collected at different sites respond differently to alterations. These results can serve as preliminary results for the continued development of fungal DNA extraction methods. Developing these methods will be important as dust storms are predicted to increase due to increased droughts and anthropogenic activity, and the fungal communities of these dust storms are currently relatively understudied.
Lieder, Falk; Griffiths, Thomas L; Hsu, Ming
2018-01-01
People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important but potentially more probable outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
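The mechanics of utility-weighted sampling mirror importance sampling: simulate outcomes in proportion to probability times absolute utility and reweight to keep the estimator unbiased. A two-outcome toy gamble illustrates the variance advantage for rare, extreme events.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy gamble: a rare catastrophic loss versus a common small gain.
outcomes = np.array([-1000.0, 5.0])
probs = np.array([0.001, 0.999])

# Utility-weighted proposal: q(o) proportional to p(o) * |u(o)|.
q = probs * np.abs(outcomes)
q /= q.sum()

n = 50
idx = rng.choice(len(outcomes), size=n, p=q)
weights = probs[idx] / q[idx]                        # importance weights
uws = np.mean(weights * outcomes[idx])               # unbiased UWS estimate

naive = np.mean(outcomes[rng.choice(len(outcomes), size=n, p=probs)])
print("true EU:", probs @ outcomes, "| UWS:", round(uws, 3),
      "| naive:", round(naive, 3))
```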
Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon
2016-05-01
The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
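The normative benchmark can be evaluated numerically even in the simplest case: two Bernoulli payoff options, an empirical-best choice after n draws from each, and a per-draw cost. The cost value below is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pA, pB = 0.6, 0.5          # true success probabilities of the two payoff options
cost = 0.002               # per-draw sampling cost (arbitrary illustration)

def expected_net(n, reps=20000):
    # Empirical means after n free draws from each option.
    a = rng.binomial(n, pA, reps) / n
    b = rng.binomial(n, pB, reps) / n
    tie = a == b
    pick_a = (a > b) | (tie & (rng.random(reps) < 0.5))   # random tie-break
    payoff = np.where(pick_a, pA, pB)                     # expected final-trial payoff
    return payoff.mean() - cost * 2 * n

for n in [1, 2, 5, 10, 20, 40, 80]:
    print(f"n = {n:3d}  expected net payoff = {expected_net(n):.4f}")
```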
Beom Kim, Seon; Kim, CheongTaek; Liu, Qing; Hee Jo, Yang; Joo Choi, Hak; Hwang, Bang Yeon; Kyum Kim, Sang; Kyeong Lee, Mi
2016-08-01
Coumarin derivatives have been reported to inhibit melanin biosynthesis. The melanogenesis inhibitory activity of osthol, a major coumarin from the fruits of Cnidium monnieri Cusson (Umbelliferae), was investigated, and the extraction conditions were optimized for the maximum yield of osthol from C. monnieri fruits. B16F10 melanoma cells were treated with osthol at concentrations of 1, 3, and 10 μM for 72 h. The expression of melanogenesis genes, such as tyrosinase, TRP-1, and TRP-2, was also assessed. For optimization, extraction factors such as extraction solvent, extraction time, and sample/solvent ratio were tested and optimized for the maximum yield of osthol using response surface methodology with a Box-Behnken design (BBD). Osthol inhibited melanin content in B16F10 melanoma cells with an IC50 value of 4.9 μM. The melanogenesis inhibitory activity of osthol was achieved not by direct inhibition of tyrosinase activity but by inhibiting the expression of melanogenic enzymes such as tyrosinase, TRP-1, and TRP-2. The optimal conditions were a sample/solvent ratio of 1500 mg/10 ml, an extraction time of 30.3 min, and a methanol concentration of 97.7%. The osthol yield under optimal conditions was found to be 15.0 mg/g dried sample, in good agreement with the predicted value of 14.9 mg/g dried sample. These results provide useful information about optimized extraction conditions for the development of osthol as a cosmetic therapeutic to reduce skin hyperpigmentation.
Nose biopsy: a comparison between two sampling techniques.
Segal, Nili; Osyntsov, Lidia; Olchowski, Judith; Kordeluk, Sofia; Plakht, Ygal
2016-06-01
Preoperative biopsy is important in obtaining preliminary information that may help in tailoring the optimal treatment. The aim of this study was to compare two sampling techniques for obtaining nasal biopsies, nasal forceps and nasal scissors, in terms of pathological results. Biopsies of nasal lesions were taken from patients undergoing nasal surgery by the two techniques: with nasal forceps and with nasal scissors. Each sample was examined by a senior pathologist who was blinded to the sampling method. A grading system was used to rate the crush artifact in every sample (none, mild, moderate, severe). A comparison was made between the severity of the crush artifact and the pathological results of the two techniques. One hundred and forty-four samples were taken from 46 patients; thirty-one were males, and the mean age was 49.6 years. Samples taken by forceps had significantly higher grades of crush artifact compared with those taken by scissors. The degree of crush artifact had a significant influence on the accuracy of the preoperative biopsy. Forceps cause a significant amount of crush artifact compared with scissors, and the degree of crush artifact in the tissue sample influences the accuracy of the biopsy.
Optimizing the multicycle subrotational internal cooling of diatomic molecules
NASA Astrophysics Data System (ADS)
Aroch, A.; Kallush, S.; Kosloff, R.
2018-05-01
Subrotational cooling of the AlH+ ion to the millikelvin regime, using optimally shaped pulses, is computed. The coherent electromagnetic fields induce purity-conserving transformations and do not change the sample temperature. A decrease in sample temperature, manifested as an increase in purity, is instead achieved by the complementary, uncontrolled spontaneous emission, which changes the entropy of the system. We employ optimal control theory to find a pulse that steers the system into a population configuration that will result in cooling upon multicycle excitation-emission steps. The optimal transformation obtained was shown to be capable of cooling molecular ions to the subkelvin regime.
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists of using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data, we illustrate its superiority over a classical design in selecting the right perceptual model. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
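The core loop of such an online design optimization can be made concrete with a toy sketch: two hypothetical sigmoid response models compete, and on each trial the stimulus maximizing the mutual information between the model indicator and the upcoming binary response is presented. The model forms and parameters here are illustrative assumptions, not those of the ASAP paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bin_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Two hypothetical generative models of a binary response to a scalar stimulus
models = [lambda x: sigmoid(2.0 * x), lambda x: sigmoid(2.0 * x - 1.0)]
posterior = np.array([0.5, 0.5])
candidates = np.linspace(-3, 3, 61)
rng = np.random.default_rng(1)
true_model = models[1]

for trial in range(30):
    preds = np.array([[m(x) for x in candidates] for m in models])
    mix = posterior @ preds
    gain = bin_entropy(mix) - posterior @ bin_entropy(preds)  # I(M; Y | x)
    x = candidates[np.argmax(gain)]        # most informative stimulus
    y = rng.random() < true_model(x)       # simulated response
    like = np.array([m(x) if y else 1 - m(x) for m in models])
    posterior = posterior * like / (posterior @ like)

print("posterior over models:", posterior.round(3))
```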
Yin, Shan; Guo, Pan; Hai, Dafu; Xu, Li; Shu, Jiale; Zhang, Wenjin; Khan, Muhammad Idrees; Kurland, Irwin J; Qiu, Yunping; Liu, Yumin
2017-12-01
In this paper, an optimized method based on a gas chromatography/time-of-flight mass spectrometry (GC-TOFMS) platform has been developed for the analysis of gut microbial-host related co-metabolites in fecal samples. The optimization was performed over the proportions of chloroform (C), methanol (M), and water (W) for the extraction of specific metabolic pathways of interest. Loading bi-plots from the PLS regression model revealed that a high concentration of chloroform emphasized the extraction of short-chain fatty acids and TCA intermediates, while a higher concentration of methanol emphasized indole and phenyl derivatives. A low level of organic solvent emphasized some TCA intermediates but not indole and phenyl species. The highest sum of peak areas and the best distribution of metabolites corresponded to a methanol/chloroform/water extraction ratio of 225:75:300 (v/v/v), which was then selected for method validation and utilized in our application. Excellent linearity was obtained with 62 reference standards representing different classes of gut microbial-host related co-metabolites, with correlation coefficients (r²) higher than 0.99. Limits of detection (LODs) and limits of quantification (LOQs) for these standards were below 0.9 nmol and 1.6 nmol, respectively. The reproducibility and repeatability of the majority of tested metabolites in fecal samples were observed with RSDs lower than 15%. Chinese rhubarb-treated rats had elevated indole and phenyl species, and decreased levels of polyamines such as putrescine, and of several amino acids. Our optimized method has revealed host-microbe relationships of potential importance for intestinal microbial metabolite receptors such as pregnane X receptor (PXR) and aryl hydrocarbon receptor (AHR) activity, and for enzymes such as ornithine decarboxylase (ODC). Copyright © 2017 Elsevier B.V. All rights reserved.
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
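The three classes of control point sampling sequence compared in this study are easy to state programmatically. A minimal sketch, with starting and final control point counts chosen for illustration rather than taken from the paper:

```python
def doubling_sequence(n_start, n_final):
    """Control points per level for a RapidArc-like doubling scheme."""
    seq, n = [n_start], n_start
    while n < n_final:
        n = min(2 * n, n_final)
        seq.append(n)
    return seq

def equi_length_sequence(n_levels, n_final):
    """Equi-length scheme: each level adds (roughly) the same number of points."""
    return [round(n_final * (k + 1) / n_levels) for k in range(n_levels)]

print(doubling_sequence(11, 177))     # e.g. [11, 22, 44, 88, 176, 177]
print(equi_length_sequence(20, 177))  # an E20-style schedule
```

Otto-like progressive sampling is the limiting case in which each "level" adds a single control point.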
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve maximal succinic acid production, CO₂ must be taken up by the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
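The poling idea can be sketched on a toy network with two parallel branches, where the optimal flux split is degenerate. The Gaussian form of the repulsive penalty and all network details below are illustrative assumptions; the paper's exact penalty function is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

# Toy network: R1 uptake -> A; R2 and R3 are parallel branches A -> B; R4 exports B
S = np.array([[1, -1, -1,  0],    # metabolite A
              [0,  1,  1, -1]])   # metabolite B
bounds = [(0, 10)] * 4
c = np.array([0.0, 0.0, 0.0, 1.0])  # linear bias: maximize flux through R4

def poled_objective(v, previous, lam=5.0, width=4.0):
    """Negative linear objective plus a penalty repelling v from earlier solutions."""
    penalty = sum(np.exp(-np.sum((v - p) ** 2) / width) for p in previous)
    return -(c @ v) + lam * penalty

solutions = []
for _ in range(5):
    res = minimize(poled_objective, np.full(4, 1.0), args=(solutions,),
                   constraints={"type": "eq", "fun": lambda v: S @ v},  # steady state
                   bounds=bounds, method="SLSQP")
    solutions.append(res.x)

for v in solutions:                # similar R4 flux, different R2/R3 splits
    print(np.round(v, 2))
```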
Compressed sensing of hyperspectral images based on scrambled block Hadamard ensemble
NASA Astrophysics Data System (ADS)
Wang, Li; Feng, Yan
2016-11-01
A fast measurement matrix based on the scrambled block Hadamard ensemble for compressed sensing (CS) of hyperspectral images (HSI) is investigated. The proposed measurement matrix offers several attractive features. First, it exhibits Gaussian behavior, indicating that the matrix is universal and requires a near-optimal number of samples for exact reconstruction. In addition, it can be easily implemented in the optical domain thanks to its integer-valued elements. More importantly, the measurement matrix needs only a small amount of memory for storage during the sampling process. Experimental results on HSIs reveal that the reconstruction performance of the proposed measurement matrix is comparable to or better than that of the Gaussian and Bernoulli matrices using different reconstruction algorithms, while consuming less computational time. The proposed matrix could be used in CS of HSI, which would save storage memory on board, improve sampling efficiency, and ameliorate reconstruction quality.
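The construction of a scrambled block Hadamard measurement operator is compact: scramble the signal by a global random permutation, apply a block-diagonal Hadamard transform, and keep a random subset of the coefficients. The block size and signal dimensions below are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.linalg import hadamard

def sbhe_measure(x, m, block=32, seed=0):
    """Scrambled block Hadamard measurement: permute the signal, apply a
    block-diagonal Hadamard transform, then keep m randomly chosen rows."""
    rng = np.random.default_rng(seed)
    n = x.size
    assert n % block == 0
    perm = rng.permutation(n)                    # global scrambling
    xs = x[perm].reshape(-1, block)
    H = hadamard(block) / np.sqrt(block)         # orthonormal Hadamard block
    coeffs = (xs @ H.T).ravel()                  # block-wise fast transform
    rows = rng.choice(n, size=m, replace=False)  # random row subsampling
    return coeffs[rows]

x = np.random.default_rng(3).standard_normal(1024)
y = sbhe_measure(x, m=256)
print(y.shape)   # (256,) measurements from a 1024-sample signal
```

Because only the permutation, the block transform, and the selected row indices need to be stored, the memory footprint is far below that of a dense Gaussian matrix.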
SERS-active silver nanoparticle aggregates produced in high-iron float glass by ion exchange process
NASA Astrophysics Data System (ADS)
Karvonen, L.; Chen, Y.; Säynätjoki, A.; Taiviola, K.; Tervonen, A.; Honkanen, S.
2011-11-01
Silver nanoparticles were produced in iron-containing float glasses by silver-sodium ion exchange and post-annealing. In particular, the effect of the concentration and oxidation state of iron in the host glass on nanoparticle formation was studied. After the nanoparticle fabrication process, the samples were characterized by optical absorption measurements. The samples were etched to expose nanoparticle aggregates on the surface, which were studied by optical microscopy and scanning electron microscopy. The SERS activity of these glass samples was demonstrated and compared using the dye molecule rhodamine 6G (R6G) as an analyte. The importance of the iron oxidation level for the reduction process is discussed. The glass with a high concentration of Fe²⁺ ions was found to be superior for SERS applications of silver nanoparticles. The optimal surface features in terms of SERS enhancement are also discussed.
Leça, João M; Pereira, Ana C; Vieira, Ana C; Reis, Marco S; Marques, José C
2015-08-05
Vicinal diketones, namely diacetyl (DC) and pentanedione (PN), are compounds naturally found in beer that play a key role in the definition of its aroma. In lager beer, they are responsible for off-flavors (buttery flavor) and therefore their presence and quantification are of paramount importance to beer producers. Aiming at developing an accurate quantitative monitoring scheme to follow these off-flavor compounds during beer production and in the final product, the headspace solid-phase microextraction (HS-SPME) analytical procedure was tuned through optimally planned experiments and the final settings were fully validated. Optimal design of experiments (O-DOE) is a computational, statistically oriented approach for designing experiments that are most informative according to a well-defined criterion. This methodology was applied for HS-SPME optimization, leading to the following optimal extraction conditions for the quantification of VDK: use a CAR/PDMS fiber and 5 ml of sample in a 20 ml vial, with 5 min of pre-incubation time followed by 25 min of extraction at 30 °C, with agitation. The validation of the final analytical methodology was performed using a matrix-matched calibration, in order to minimize matrix effects. The following key features were obtained: linearity (R² > 0.999 for both diacetyl and 2,3-pentanedione), high sensitivity (LODs of 0.92 μg L(-1) and 2.80 μg L(-1), and LOQs of 3.30 μg L(-1) and 10.01 μg L(-1), for diacetyl and 2,3-pentanedione, respectively), recoveries of approximately 100%, and suitable precision (repeatability and reproducibility lower than 3% and 7.5%, respectively). The applicability of the methodology was fully confirmed through an independent analysis of several beer samples, with analyte concentrations ranging from 4 to 200 μg L(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
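A common way to realize O-DOE in software is a greedy exchange that adds, at each step, the candidate run maximizing det(X'X) of the model matrix. The two-factor quadratic model and candidate grid below are illustrative, not the settings used in the paper:

```python
import itertools
import numpy as np

def d_optimal_greedy(candidates, n_runs, model):
    """Greedily add the candidate run that maximizes det(X'X) of the model
    matrix (ties during the first, rank-deficient additions are broken by
    candidate order)."""
    chosen = []
    for _ in range(n_runs):
        best, best_det = None, -np.inf
        for cand in candidates:
            X = np.array([model(p) for p in chosen + [cand]])
            d = np.linalg.det(X.T @ X)
            if d > best_det:
                best, best_det = cand, d
        chosen.append(best)
    return chosen

# Candidate settings in coded units, e.g. (extraction time, temperature)
candidates = list(itertools.product(np.linspace(-1, 1, 5), repeat=2))
model = lambda p: [1.0, p[0], p[1], p[0] * p[1], p[0] ** 2, p[1] ** 2]
print(np.array(d_optimal_greedy(candidates, n_runs=10, model=model)))
```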
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Amongst the most exciting is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know if the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The ability to work with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are quite complicated and elaborate in order to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.
Ondigo, Bartholomew N; Park, Gregory S; Gose, Severin O; Ho, Benjamin M; Ochola, Lyticia A; Ayodo, George O; Ofulla, Ayub V; John, Chandy C
2012-12-21
Multiplex cytometric bead assays (CBA) have a number of advantages over ELISA for antibody testing, but little information is available on the standardization and validation of antibody CBA for multiple Plasmodium falciparum antigens. The present study set out to determine optimal parameters for multiplex testing of antibodies to P. falciparum antigens, and to compare results of multiplex CBA with ELISA. Antibodies to ten recombinant P. falciparum antigens were measured by CBA and ELISA in samples from 30 individuals from a malaria-endemic area of Kenya and compared to known positive and negative control plasma samples. Optimal antigen amounts, monoplex versus multiplex testing, plasma dilution, optimal buffer, and the number of beads required were assessed for CBA testing, and results from CBA and ELISA testing were compared. Optimal antigen amounts for CBA antibody testing differed by antigen. Results of monoplex CBA testing correlated strongly with multiplex testing for all antigens (r = 0.88-0.99, P values from <0.0001 to 0.004), and antibodies to variants of the same antigen were accurately distinguished within a multiplex reaction. Plasma dilutions of 1:100 or 1:200 were optimal for all antigens for CBA testing. Plasma diluted in a buffer containing 0.05% sodium azide, 0.5% polyvinylalcohol, and 0.8% polyvinylpyrrolidone had the lowest background activity. CBA median fluorescence intensity (MFI) values with 1,000 antigen-conjugated beads/well did not differ significantly from MFI with 5,000 beads/well. CBA and ELISA results correlated well for all antigens except apical membrane antigen-1 (AMA-1). CBA testing produced a greater range of values in samples from malaria-endemic areas and less background reactivity for blank samples than ELISA. With optimization, CBA may be the preferred method of testing for antibodies to P. falciparum antigens, as CBA can test for antibodies to multiple recombinant antigens in a single plasma sample and produces a greater range of values in positive samples and lower background readings for blank samples than ELISA.
Effect of template in MCM-41 on the adsorption of aniline from aqueous solution.
Yang, Xinxin; Guan, Qingxin; Li, Wei
2011-11-01
The effect of the surfactant template cetyltrimethylammonium bromide (CTAB) in MCM-41 on the adsorption of aniline was investigated. Various MCM-41 samples were prepared by controlling template removal using an extraction method. The samples were then used as adsorbents for the removal of aniline from aqueous solution. The results showed that the MCM-41 samples with the template partially removed (denoted C-MCM-41) exhibited better adsorption performance than MCM-41 with the template completely removed (denoted MCM-41). The reason for this difference may be that the C-MCM-41 samples had stronger hydrophobicity and selectivity for aniline because of the presence of the template. The porosity and cationic sites generated by the template play an important role in the adsorption process. The optimal adsorbent, retaining a moderate amount of template, was obtained by changing the extractant ratio; it shows promise for applications in the field of water pollution control. Copyright © 2011 Elsevier Ltd. All rights reserved.
Azemard, Sabine; Vassileva, Emilia
2015-06-01
In this paper, we present a simple, fast and cost-effective method for the determination of methyl mercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was achieved through participation in the IAEA-461 worldwide inter-laboratory comparison exercise. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mattsson, Leena; Xu, Jingjing; Preininger, Claudia; Tse Sum Bui, Bernadette; Haupt, Karsten
2018-05-01
We developed a competitive fluorescent molecularly imprinted polymer (MIP) assay to detect biogenic amines in fish samples. MIPs synthesized by precipitation polymerization using histamine as the template were used in a batch binding assay analogous to competitive fluoroimmunoassays. Introducing a complex sample matrix, such as fish extract, into the assay changes the environment and the binding conditions; therefore, the importance of sample preparation is discussed extensively. Several extraction and purification methods for fish were comprehensively studied, and an optimal clean-up procedure for fish samples using liquid-liquid extraction was developed. The feasibility of the competitive MIP assay was shown in the purified fish extract over a broad histamine range (1-430 µM). The MIP had the highest affinity towards histamine, but also recognized the structurally similar biogenic amines tyramine and tryptamine, as well as spermine and spermidine, providing simultaneous analysis and assessment of the total amount of biogenic amines. Copyright © 2018 Elsevier B.V. All rights reserved.
Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.
2018-01-01
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673
Using lod scores to detect sex differences in male-female recombination fractions.
Feenstra, B; Greenberg, D A; Hodge, S E
2004-01-01
Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θfemale, θmale); and "constrained," requiring θfemale = θmale. We then examined ΔELOD (defined as the difference between the maximized unconstrained and constrained ELODs) and calculated the minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p° as the value of p that maximizes ΔELOD. We determined that, surprisingly, p° does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) into the maximum likelihood estimates of θfemale and θmale, even though the ELOD is reduced (see point 2). This fact is important because investigators often cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
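A simplified per-meiosis version of the ΔELOD calculation can be written directly. The χ²(1) significance threshold, the treatment of meioses as independent, and the example θ values are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def elod(theta_true, theta_eval):
    """Expected lod contribution of one informative meiosis."""
    return (theta_true * np.log10(theta_eval / 0.5)
            + (1 - theta_true) * np.log10((1 - theta_eval) / 0.5))

def delta_elod(theta_f, theta_m, p):
    """p: proportion of paternally informative meioses."""
    theta_c = p * theta_m + (1 - p) * theta_f        # constrained MLE of a common RF
    unconstrained = p * elod(theta_m, theta_m) + (1 - p) * elod(theta_f, theta_f)
    constrained = p * elod(theta_m, theta_c) + (1 - p) * elod(theta_f, theta_c)
    return unconstrained - constrained

theta_f, theta_m, p = 0.3, 0.1, 0.5
d = delta_elod(theta_f, theta_m, p)
n = int(np.ceil(3.84 / (2 * np.log(10) * d)))        # chi-square(1), alpha = 0.05
print(f"Delta-ELOD per meiosis: {d:.4f}; meioses needed for significance: {n}")

grid = np.linspace(0.01, 0.99, 99)                   # search for the optimal proportion
print("optimal p:", grid[np.argmax([delta_elod(theta_f, theta_m, q) for q in grid])])
```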
Finding an optimal strategy for measuring the quality of groundwater as a source for drinking water
NASA Astrophysics Data System (ADS)
van Driezum, Inge; Saracevic, Ernis; Scheibz, Jürgen; Zessner, Matthias; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul
2015-04-01
Good chemical and microbiological water quality is of great importance in riverbank filtration systems that are used as public water supplies. Water quality is ideally monitored frequently at the drinking water well using a steady pumping rate. Monitoring source water (such as groundwater), however, can be more challenging. First of all, piezometers should be drilled in the correct layer of the aquifer. Secondly, the sampling design should include all preferred parameters (microbiological and chemical) and should also take the hydrological conditions into account. In this study, we made use of different geophysical techniques (ERT and FDEM) to select the optimal placement of the piezometers. We also designed a sampling strategy which can be used to sample fecal indicators, biostability parameters, standard chemical parameters and a wide range of micropollutants. Several time series experiments were carried out in the study site Porous GroundWater Aquifer (PGWA), an urban floodplain extending on the left bank of the river Danube downstream of the City of Vienna, Austria. The upper layer of the PGWA consists of silt and has a thickness of 1 to 6 m. The underlying confined aquifer consists of sand and gravel and has a thickness of between 3 and 15 m. Hydraulic conductivities range from 5 × 10⁻² m/s down to 5 × 10⁻⁵ m/s. Underneath the aquifer are alternating sand and clay/silt layers. Escherichia coli, enterococci and aerobic spores were measured as fecal markers. Biostability was measured using leucine incorporation. Additionally, several micropollutants and standard chemical parameters were measured. Results showed that physical and chemical parameters stayed stable in all monitoring wells during extended purging. A similar trend was observed for E. coli and enterococci. In the wells close to the river, aerobic spores and leucine incorporation decreased after 30 min of pumping, whereas the well close to the backwater showed a different pattern. Overall, purging for 45 minutes was the optimal sampling procedure for the microbiological parameters. Samples for the detection of micropollutants were taken after 15 min of purging.
Fernandez-Alvarez, Maria; Llompart, Maria; Lamas, J Pablo; Lores, Marta; Garcia-Jares, Carmen; Cela, Rafael; Dagnac, Thierry
2008-06-09
A simple and rapid method based on the solid-phase microextraction (SPME) technique followed by gas chromatography with micro-electron-capture detection (GC-microECD) was developed for the simultaneous determination of more than 30 pesticides (pyrethroids and organochlorines, among others) in milk. To our knowledge, this is the first application of SPME to the determination of pyrethroid pesticides in milk. Negative matrix effects due to the complexity and lipophilicity of the studied matrix were reduced by diluting the sample with distilled water. A 2^(5-1) fractional factorial design was performed to assess the influence of several factors (type of fiber coating, sampling mode, stirring, extraction temperature, and addition of sodium chloride) on the SPME procedure and to determine the optimal extraction conditions. After optimization of all the significant variables and interactions, the recommended procedure was established as follows: DSPME (using a polydimethylsiloxane (PDMS)/divinylbenzene (DVB) coating) of 1 mL of milk sample diluted with Milli-Q water (1:10 dilution ratio), at 100 degrees C, under stirring for 30 min. The proposed method showed good linearity and high sensitivity, with limits of detection (LOD) at the sub-ng mL(-1) level. Within-day and between-day precisions were also evaluated (R.S.D. < 15%). One of the most important attainments of this work was the use of external calibration with milk-matched standards to quantify the levels of the target analytes. The method was tested with liquid and powdered milk samples with different fat contents covering the whole commercial range. The efficiency of the extraction process was studied at several analyte concentration levels, obtaining high recoveries (>80% in most cases) for different types of full-fat milk. The optimized procedure was validated with a powdered milk certified reference material, which was quantified using external calibration and standard addition protocols. Finally, the DSPME-GC-microECD methodology was applied to the analysis of milk samples collected at dairy cattle farms in NW Spain.
A Miniaturized Spectrometer for Optimized Selection of Subsurface Samples for Future MSR Missions
NASA Astrophysics Data System (ADS)
De Sanctis, M. C.; Altieri, F.; De Angelis, S.; Ferrari, M.; Frigeri, A.; Biondi, D.; Novi, S.; Antonacci, F.; Gabrieli, R.; Paolinetti, R.; Villa, F.; Ammannito, A.; Mugnuolo, R.; Pirrotta, S.
2018-04-01
We present the concept of a miniaturized spectrometer based on the ExoMars2020/Ma_MISS experiment. Coupled with a drill tool, it will allow assessment of the subsurface composition and optimization of the selection of Martian samples with high astrobiological potential.
NASA Astrophysics Data System (ADS)
Deac, C.; Barbulescu, A.; Gligor, A.; Bibu, M.; Petrescu, V.
2016-11-01
The accidental or historic contamination of soils with hydrocarbons, in areas crossed by oil pipelines or where oil- or gas-extraction installations are located, is a major concern and has significant financial and ecological consequences, both for the owners of those areas and for the oil transportation or exploitation companies. Therefore, it is very important to find the optimal method for removing the pollution. The current paper presents measures, mainly involving bioremediation, recommended and applied for the depollution of a contaminated area in Romania. While the topic of dealing with polluted soils is well established in the Romanian specialist literature, bioremediation is a relatively novel approach, and this paper presents important considerations in this regard. Contaminated soil samples were taken from 10 different locations within the targeted area and subjected to a thorough physical and chemical analysis, which led to a scoring table for assessing the bioremediation potential of the various samples. This allowed the authors to establish, for each of the sampled areas, the best mix of factors (nutrients such as nitrogen, phosphorus and potassium; gypsum; microelements; etc.) that would lead to the best results in terms of the contaminants' biodegradation.
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater
Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal
2016-01-01
Sodium is an integral part of water, and an excessive amount of it in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
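Spatial simulated annealing itself is a generic loop: perturb one sampling location at a time and accept the move by a Metropolis criterion on the design objective, here the mean ordinary-kriging variance over a prediction grid. The exponential covariance parameters, grid, and cooling schedule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
sill, vrange = 1.0, 0.3                       # exponential covariance parameters (assumed)
cov = lambda h: sill * np.exp(-h / vrange)

gx, gy = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
grid = np.column_stack([gx.ravel(), gy.ravel()])   # prediction locations

def mean_ok_variance(sites):
    """Mean ordinary-kriging variance over the prediction grid."""
    n = len(sites)
    d = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, :n] = K[:n, n] = 1.0                  # unbiasedness (Lagrange) row/column
    total = 0.0
    for x0 in grid:
        c = np.append(cov(np.linalg.norm(sites - x0, axis=1)), 1.0)
        lam = np.linalg.solve(K, c)
        total += sill - lam @ c                # OK variance at x0
    return total / len(grid)

sites = rng.random((12, 2))                    # initial design of 12 locations
crit = mean_ok_variance(sites)
temp = 0.01
for _ in range(400):                           # spatial simulated annealing loop
    cand = sites.copy()
    i = rng.integers(len(sites))
    cand[i] = np.clip(cand[i] + rng.normal(0, 0.1, 2), 0, 1)
    new = mean_ok_variance(cand)
    if new < crit or rng.random() < np.exp((crit - new) / temp):
        sites, crit = cand, new
    temp *= 0.99
print(f"final mean kriging variance: {crit:.4f}")
```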
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
Optimal sampling theory is evaluated in application to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both single-dose and multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
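The flavor of optimal sampling design can be shown on a one-compartment IV bolus model: choose the sampling times that maximize the determinant of the Fisher information for the parameters (V, k). The model, parameter values, and two-point design below are illustrative, not those used with ADAPT II:

```python
import itertools
import numpy as np

dose, V, k = 500.0, 20.0, 0.2       # dose (mg), volume (L), elimination rate (1/h)

def sensitivities(t):
    """Rows of the Jacobian dC/d(V, k) for a one-compartment IV bolus model,
    C(t) = (dose / V) * exp(-k * t)."""
    e = np.exp(-k * t)
    return np.array([-dose / V**2 * e, -dose / V * t * e])

candidates = np.linspace(0.25, 24, 96)   # candidate sampling times (h)
best = max(itertools.combinations(candidates, 2),
           key=lambda ts: np.linalg.det(sum(np.outer(sensitivities(t),
                                                     sensitivities(t)) for t in ts)))
print("D-optimal two-point schedule (h):", [round(t, 2) for t in best])
```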
Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon
2016-10-01
Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania; the disease is prevalent mainly in the Indian subcontinent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of them the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Wang, Lei; Zhou, Jia-Bin; Wang, Xia; Wang, Zhen-Hua; Zhao, Ru-Song
2016-06-01
Recently, a sponge-like material called the carbon nanotube sponge (CNT sponge) has drawn considerable attention because it can remove large areas of oil, nanoparticles, and organic dyes from water. In this paper, the feasibility of CNT sponges as a novel solid-phase extraction (SPE) adsorbent for the enrichment and determination of heavy metal ions (Co(2+), Cu(2+), and Hg(2+)) was investigated for the first time. Sodium diethyldithiocarbamate (DDTC) was used as the chelating agent, and high-performance liquid chromatography (HPLC) was used for the final analysis. Important factors that may influence the extraction efficiency of SPE, such as the kind and volume of eluent, the volume of DDTC, sample pH, and flow rate, were optimized. Under the optimized conditions, a wide linear range (0.5-400 μg L(-1)), low limits of detection (0.089-0.690 μg L(-1); 0.018-0.138 μg), and good repeatability (1.27-3.60%, n = 5) were obtained. The developed method was applied to the analysis of the three metal ions in real water samples, and satisfactory results were achieved. All of these findings demonstrate that CNT sponges will be a good choice for the enrichment and determination of target ions at trace levels in the future.
Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel
NASA Astrophysics Data System (ADS)
Xie, Yanmin
2011-08-01
Nowadays, the design of sheet metal forming processes is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on). Optimization methods have been widely applied in sheet metal forming, and proper design methods that reduce time and costs, based mostly on computer-aided procedures, have to be developed. At the same time, variations occurring during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software LS_DYNA is used to simulate the complex sheet metal stamping processes. The Kriging metamodel is adopted to map the relation between input process parameters and part quality. The robust design model for the sheet metal forming process integrates adaptive importance sampling with the Kriging model in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion provides the direction in which additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of the proposed method for multi-response robust design.
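The combination of a Kriging surrogate with importance sampling can be sketched briefly: fit a Gaussian-process metamodel on a small design of experiments, then estimate a failure probability by sampling from a proposal density shifted toward the failure region and reweighting. The response function and the fixed proposal shift below are stand-ins; the paper adapts the sampling density rather than fixing it by hand:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def thinning(x):
    """Stand-in for a finite-element response, e.g. maximum thinning (%)."""
    return 18 + 3 * x[:, 0] - 2 * x[:, 1] + 1.5 * x[:, 0] * x[:, 1]

# Kriging metamodel fitted on a small design of experiments
X = rng.normal(size=(40, 2))                     # standardized noise factors
gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X, thinning(X))

# Importance sampling of P(thinning > 25) on the cheap metamodel
nominal = mvn(mean=[0.0, 0.0], cov=np.eye(2))
proposal = mvn(mean=[1.5, -1.0], cov=np.eye(2))  # shifted toward the failure region
Z = proposal.rvs(size=20000, random_state=5)
w = nominal.pdf(Z) / proposal.pdf(Z)             # importance weights
fail = gp.predict(Z) > 25
print("estimated failure probability:", float(np.mean(w * fail)))
```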
Optimization of the Hartmann-Shack microlens array
NASA Astrophysics Data System (ADS)
de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William
2011-04-01
In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays with a larger number of microlenses by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
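A toy version of the genetic search conveys the idea: candidate microlens sites are scored by the expected coefficient-reconstruction error of a least-squares fit (an A-optimality criterion), and a small GA evolves subsets of sites. The polynomial basis, noise model, and GA settings are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(6)

def basis(p):
    """Low-order polynomial stand-in for a Zernike basis; p is (n, 2)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([x, y, 2 * x * y, x**2 - y**2, x**2 + y**2])

cand = rng.uniform(-1, 1, size=(200, 2))   # candidate microlens positions
k = 10                                     # microlenses to place

def cost(idx):
    """Expected coefficient-reconstruction error (A-optimality) for sites
    cand[idx], assuming unit-variance measurement noise at each microlens."""
    A = basis(cand[idx])
    try:
        return np.trace(np.linalg.inv(A.T @ A))
    except np.linalg.LinAlgError:
        return np.inf

pop = [rng.choice(len(cand), k, replace=False) for _ in range(40)]
for _ in range(200):                        # a small genetic algorithm
    pop.sort(key=cost)
    children = []
    for _ in range(20):
        a, b = pop[rng.integers(10)], pop[rng.integers(10)]     # two good parents
        child = rng.permutation(np.union1d(a, b))[:k]           # crossover
        new = rng.integers(len(cand))
        if rng.random() < 0.3 and new not in child:             # mutation
            child[rng.integers(k)] = new
        children.append(child)
    pop = pop[:20] + children
print("best A-optimality cost:", cost(min(pop, key=cost)))
```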
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
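For a concrete sense of the D-optimality criterion in item calibration, the sketch below computes the Fisher information that an examinee at a given ability contributes about a 2PL field-test item's parameters (a, b), and picks the most informative pair of abilities. Using point abilities rather than the full posterior distributions is a simplification of the paper's MCMC scheme:

```python
import numpy as np
from itertools import combinations

def item_info(theta, a=1.2, b=0.3):
    """Fisher information about a 2PL item's (a, b) parameters contributed
    by one response from an examinee at ability theta."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])          # score direction for (a, b)
    return p * (1 - p) * np.outer(g, g)

abilities = np.linspace(-2.5, 2.5, 11)     # examinees on the operational test

# D-optimal assignment: the pair of examinees whose pooled information about
# the field-test item has the largest determinant
best = max(combinations(abilities, 2),
           key=lambda ts: np.linalg.det(item_info(ts[0]) + item_info(ts[1])))
print("most informative ability pair:", best)
```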
Salahinejad, Maryam; Aflaki, Fereydoon
2011-06-01
Dispersive liquid-liquid microextraction followed by inductively coupled plasma-optical emission spectrometry has been investigated for the determination of Cd(II) ions in water samples. Ammonium pyrrolidine dithiocarbamate was used as the chelating agent. Several factors influencing the microextraction efficiency for Cd(II) ions, such as the types and volumes of the extracting and dispersing solvents, pH, sample volume, and salt addition, were optimized. The optimization was performed both via one-variable-at-a-time and central composite design methods, and the optimum conditions were selected. Both optimization methods showed nearly the same results: sample size, 5 mL; dispersive solvent, ethanol; dispersive solvent volume, 2 mL; extracting solvent, chloroform; extracting solvent volume, 200 μL; pH and salt amount did not significantly affect the microextraction efficiency. The limits of detection and quantification were 0.8 and 2.5 ng L(-1), respectively. The relative standard deviation for five replicate measurements of 0.50 mg L(-1) of Cd(II) was 4.4%. The recoveries for spiked real samples from tap, mineral, river, dam, and sea waters ranged from 92.2% to 104.5%.
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters on the ultra-low background liquid scintillation spectrometer Quantulus 1220, the sample/scintillant ratio was optimized; an appropriate scintillation cocktail was chosen by comparing efficiency, background and minimal detectable activity (MDA); and the effects of chemi- and photoluminescence and the scintillant/vial combination were examined. The ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last preparation of samples, a serious quench effect in the count rates of the samples was noticed, which could be a consequence of possible contamination by DMSO. The goal of this paper is to describe the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method while we deal with the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, tritium levels were determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
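The proportional-allocation result has a compact numerical check: with two sources of noise SD σ1 and σ2 and total time T, the SD of the decision variable, √(σ1²/t1 + σ2²/(T − t1)), is minimized at t1 = T·σ1/(σ1 + σ2). A short sketch with arbitrary example values:

```python
import numpy as np

sigma1, sigma2, T = 2.0, 1.0, 10.0     # noise SDs of the two sources, total time

# SD of the decision variable as a function of the time given to source 1
t1 = np.linspace(0.1, T - 0.1, 999)
decision_sd = np.sqrt(sigma1**2 / t1 + sigma2**2 / (T - t1))
t1_opt = t1[np.argmin(decision_sd)]
print(f"optimal t1 = {t1_opt:.2f}, prediction T*s1/(s1+s2) = "
      f"{T * sigma1 / (sigma1 + sigma2):.2f}")
```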
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young-Min; Pennycook, Stephen J.; Borisevich, Albina Y.
2017-04-29
Octahedral tilt behavior is increasingly recognized as an important contributing factor to the physical behavior of perovskite oxide materials and especially their interfaces, necessitating the development of high-resolution methods of tilt mapping. There are currently two major approaches for quantitative imaging of tilts in scanning transmission electron microscopy (STEM), bright field (BF) and annular bright field (ABF). In this study, we show that BF STEM can be reliably used for measurements of oxygen octahedral tilts. While optimal conditions for BF imaging are more restricted with respect to sample thickness and defocus, we find that BF imaging with an aberration-corrected microscope with an accelerating voltage of 300 kV gives us the most accurate quantitative measurement of the oxygen column positions. Using the tilted perovskite structure of BiFeO3 (BFO) as our test sample, we simulate BF and ABF images in a wide range of conditions, identifying the optimal imaging conditions for each mode. Finally, we show that unlike ABF imaging, BF imaging remains directly quantitatively interpretable for a wide range of specimen mistilt, suggesting that it should be preferable to ABF STEM imaging for quantitative structure determination.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, the yield calculation requires a large number of SPICE simulations, and the circuit SPICE simulation accounts for the largest proportion of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model construction uses SPICE simulation to obtain a set of sample points, on which the mixture surrogate model is trained by the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation even more. The approach is suitable for high-dimensional process variables and multi-performance applications.
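The surrogate-plus-importance-sampling pipeline can be sketched end to end: train a sparse polynomial model (via lasso) on a modest batch of simulated points, then evaluate a mean-shifted importance-sampling estimate of the failure rate on the cheap surrogate. The stand-in metric, shift, and thresholds below are illustrative assumptions, not the paper's circuit or model:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)

def spice_metric(v):
    """Stand-in for a SPICE-simulated performance metric, e.g. read margin."""
    return 0.30 - 0.04 * v[:, 0] + 0.03 * v[:, 1] - 0.02 * v[:, 0] * v[:, 2]

# Train a sparse polynomial surrogate on a modest number of "simulations"
V = rng.normal(size=(500, 6))                         # process variables
poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = Lasso(alpha=1e-3).fit(poly.fit_transform(V), spice_metric(V))

# Importance sampling of the failure rate P(metric < 0.1) on the surrogate
nominal = mvn(mean=np.zeros(6), cov=np.eye(6))
shifted = mvn(mean=[3, -3, 0, 0, 0, 0], cov=np.eye(6))  # crude shift toward failure
Z = shifted.rvs(size=100000, random_state=8)
w = nominal.pdf(Z) / shifted.pdf(Z)
fail = surrogate.predict(poly.transform(Z)) < 0.1
print("estimated failure rate:", float(np.mean(w * fail)))
```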
Kislinger, Thomas; Gramolini, Anthony O; MacLennan, David H; Emili, Andrew
2005-08-01
An optimized analytical expression profiling strategy based on gel-free multidimensional protein identification technology (MudPIT) is reported for the systematic investigation of biochemical (mal)adaptations associated with healthy and diseased heart tissue. Enhanced shotgun proteomic detection coverage and improved biological inference are achieved by pre-fractionation of excised mouse cardiac muscle into subcellular components, with each organellar fraction investigated exhaustively using multiple repeat MudPIT analyses. Functional enrichment, high-confidence identification, and relative quantification of hundreds of organelle- and tissue-specific proteins are readily achieved, including detection of low-abundance transcriptional regulators, signaling factors, and proteins linked to cardiac disease. Important technical issues relating to data validation, including minimization of artifacts stemming from biased under-sampling and spurious false discovery, together with suggestions for further fine-tuning of sample preparation, are discussed. A framework for follow-up bioinformatic examination, pattern recognition, and data mining is also presented in the context of a stringent application of MudPIT for probing fundamental aspects of heart muscle physiology as well as the discovery of perturbations associated with heart failure.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wang, Youchao; Tang, Chuanjiang; Nie, Jingmei; Xu, Chengtao
2018-01-01
Perfluorinated compounds (PFCs), used to provide water, oil, grease, heat and stain repellency to a range of textile and other products, have been found to be persistent in the environment and are associated with adverse effects on humans and wildlife. This study presents the development and validation of an analytical method to determine the simultaneous presence of eleven PFCs in leather using solid-phase extraction followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The perfluorinated compounds were first extracted from the samples by an ultrasonic liquid extraction procedure, whose parameters were optimized. The subsequent solid-phase extraction (SPE) clean-up is one of the most important advantages of the developed methodology. The sample volume and elution conditions were optimized by means of an experimental design. The proposed method was applied to determine the PFCs in leather; the detection limits of the eleven compounds were 0.09-0.96 ng/L, and the recoveries of all compounds spiked at the 5 ng/L concentration level were in the range of 65-96%, with RSDs lower than 19% (n = 7).
Huang, Yuan; Zheng, Zhiqun; Huang, Liying; Yao, Hong; Wu, Xiao Shan; Li, Shaoguang; Lin, Dandan
2017-05-10
A rapid, simple, cost-effective dispersive liquid-phase microextraction based on a solidified floating organic drop (SFOD-LPME) was developed in this study. Coupled with high-performance liquid chromatography, the developed approach was used to enrich and determine trace amounts of four glucocorticoids, namely, prednisone, betamethasone, dexamethasone, and cortisone acetate, in animal-derived food. We also investigated and optimized several important parameters that influence the extraction efficiency of SFOD-LPME. These parameters include the extractant species, the volumes of the extraction and dispersant solvents, sodium chloride addition, sample pH, extraction time and temperature, and stirring rate. Under the optimum experimental conditions, the calibration graphs were linear over the range of 1.2-200.0 ng/ml for the four analytes (r²: 0.9990-0.9999). The enrichment factor was 142-276, and the detection limits were 0.39-0.46 ng/ml (0.078-0.23 μg/kg). The method was successfully applied to the analysis of real food samples, and good spiked recoveries of 81.5%-114.3% were obtained. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Acter, Thamina; Lee, Seulgidaun; Cho, Eunji; Jung, Maeng-Joon; Kim, Sunghwan
2018-01-01
In this study, continuous in-source hydrogen/deuterium exchange (HDX) atmospheric pressure photoionization (APPI) mass spectrometry (MS) with continuous feeding of D2O was developed and validated. D2O was continuously fed through a capillary line placed at the center of a metal plate positioned between the UV lamp and the nebulizer. The proposed system overcomes the limitations of previously reported APPI HDX-MS approaches in which deuterated solvents were premixed with sample solutions before ionization. This is particularly important for APPI because the solvent composition can greatly influence ionization efficiency as well as the solubility of analytes. The experimental parameters for APPI HDX-MS with continuous feeding of D2O were optimized, and the optimized conditions were applied to the analysis of nitrogen-, oxygen-, and sulfur-containing compounds. The developed method was also applied to the analysis of the polar fraction of a petroleum sample. Thus, the data presented in this study clearly show that the proposed HDX approach can serve as an effective analytical tool for the structural analysis of complex mixtures.
Investigation of interaction between magnetic silica particles and lambda phage DNA fragment.
Smerkova, Kristyna; Dostalova, Simona; Vaculovicova, Marketa; Kynicky, Jindrich; Trnkova, Libuse; Kralik, Miroslav; Adam, Vojtech; Hubalek, Jaromir; Provaznik, Ivo; Kizek, Rene
2013-12-01
Nucleic acids belong to the most important molecules, and therefore the understanding of their properties, function and behavior is crucial. Even though a range of analytical and biochemical methods have been developed for this purpose, one common step is essential for all of them: isolation of the nucleic acid from the complex sample matrix. The use of magnetic particles for the separation of nucleic acids has many advantages over other isolation methods. In this study, an isolation procedure for the extraction of DNA was optimized. Each step of the isolation process, including washing, immobilization and elution, was optimized; as a result, the efficiency was increased from 1.7% to 28.7% and the total time was shortened from 75 to 30 min compared with the previously described method. The influence of each parameter was quantified by square-wave voltammetry using a hanging mercury drop electrode. Further, we compared the optimized method with standard chloroform extraction and applied it to the isolation of DNA from Staphylococcus aureus and Escherichia coli. Copyright © 2013 Elsevier B.V. All rights reserved.
Silva, Simone Alves da; Sampaio, Geni Rodrigues; Torres, Elizabeth Aparecida Ferraz da Silva
2017-04-15
Among the different food categories, oils and fats are important sources of exposure to polycyclic aromatic hydrocarbons (PAHs), a group of organic chemical contaminants. The use of a validated method is essential to obtain reliable analytical results, since legislation establishes maximum limits in different foods. The objective of this study was to optimize and validate a method for the quantification of four PAHs [benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(a)pyrene] in vegetable oils. The samples were submitted to liquid-liquid extraction, followed by solid-phase extraction, and analyzed by ultra-high performance liquid chromatography. Under the optimized conditions, the validation parameters were evaluated according to the INMETRO Guidelines: linearity (r² > 0.99), selectivity (no matrix interference), limits of detection (0.08-0.30 μg kg(-1)) and quantification (0.25-1.00 μg kg(-1)), recovery (80.13-100.04%), repeatability and intermediate precision (<10% RSD). The method was found to be adequate for routine analysis of PAHs in the vegetable oils evaluated. Copyright © 2016. Published by Elsevier Ltd.
Petinataud, Dimitri; Berger, Sibel; Ferdynus, Cyril; Debourgogne, Anne; Contet-Audonneau, Nelly; Machouart, Marie
2016-05-01
Onychomycosis is a common nail disorder mainly due to dermatophytes for which the conventional diagnosis requires direct microscopic observation and culture of a biological sample. Nevertheless, antifungal treatments are commonly prescribed without a mycological examination having been performed, partly because of the slow growth of dermatophytes. Therefore, molecular biology has been applied to this pathology, to support a quick and accurate distinction between onychomycosis and other nail damage. Commercial kits are now available from several companies for improving traditional microbiological diagnosis. In this paper, we present the first evaluation of the real-time PCR kit marketed by Bio Evolution for the diagnosis of dermatophytosis. Second, we compare the efficacy of the kit on optimal and non-optimal samples. This study was conducted on 180 nail samples, processed by conventional methods and retrospectively analysed using this kit. According to our results, this molecular kit has shown high specificity and sensitivity in detecting dermatophytes, regardless of sample quality. On the other hand, and as expected, optimal samples allowed the identification of a higher number of dermatophytes by conventional mycological diagnosis, compared to non-optimal samples. Finally, we have suggested several strategies for the practical use of such a kit in a medical laboratory for quick pathogen detection. © 2016 Blackwell Verlag GmbH.
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Particular attention has been paid to the simplification of closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained in trigonometrically reduced form is among the most significant results of this work and the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, online help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
NASA Astrophysics Data System (ADS)
Tang, J. L.; Cai, C. Z.; Xiao, T. T.; Huang, S. J.
2012-07-01
The electrical conductivity of the solid oxide fuel cell (SOFC) cathode is one of the most important indices affecting the efficiency of the SOFC. In order to improve the performance of a fuel cell system, it is advantageous to have an accurate model with which one can predict the electrical conductivity. In this paper, a model utilizing the support vector regression (SVR) approach combined with the particle swarm optimization (PSO) algorithm for parameter optimization was established to model and predict the electrical conductivity of Ba0.5Sr0.5Co0.8Fe0.2O3-δ-xSm0.5Sr0.5CoO3-δ (BSCF-xSSC) composite cathode under two influencing factors: operating temperature (T) and SSC content (x) in the BSCF-xSSC composite cathode. The leave-one-out cross validation (LOOCV) test result strongly supports that the generalization ability of the SVR model is high enough. The absolute percentage error (APE) of 27 samples does not exceed 0.05%. The mean absolute percentage error (MAPE) of all 30 samples is only 0.09% and the correlation coefficient (R²) is as high as 0.999. This investigation suggests that the hybrid PSO-SVR approach may be not only a promising and practical methodology for simulating the properties of a fuel cell system, but also a powerful tool for optimal design or control of the operating process of an SOFC system.
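As a rough illustration of the modelling strategy described above, the sketch below tunes an RBF-kernel support vector regressor with a plain global-best PSO loop and scores each candidate by leave-one-out cross-validation. The synthetic (T, x) data, swarm constants, and parameter bounds are all assumptions standing in for the BSCF-xSSC measurements, not the authors' setup.

```python
# Minimal PSO-tuned SVR sketch; data and all constants are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform([500.0, 0.0], [800.0, 0.5], size=(30, 2))  # toy (T, x) pairs
y = 100.0 + 0.1 * X[:, 0] - 50.0 * X[:, 1] + rng.normal(0.0, 1.0, 30)

def fitness(params):
    """Negative mean LOOCV absolute error of an SVR with (C, gamma, epsilon)."""
    C, gamma, eps = params
    model = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()

lo = np.array([1e-1, 1e-4, 1e-3])        # search bounds for (C, gamma, epsilon)
hi = np.array([1e3, 1e0, 1e0])
pos = rng.uniform(lo, hi, size=(10, 3))  # 10 particles
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]
for _ in range(15):                      # global-best PSO iterations
    r1, r2 = rng.random((2, 10, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]
print("best (C, gamma, epsilon):", gbest, "LOOCV MAE:", -pbest_f.max())
```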
2011-09-01
Fbg αC 242-424. DNA for expressing Fbg αC 242-424 and FXIII A2 in E. coli has been obtained from collaborators. Strategies for expressing and...the coming months. It will be important to verify that the expressed FXIII A2 is active and that the Fbg αC 242-424 can serve as an effective...optimized. For the larger substrate Fbg αC 242-424, we will need to proteolytically digest the quenched kinetic samples with chymotrypsin prior to
Hoch, Jeffrey C
2017-10-01
Non-Fourier methods of spectrum analysis are gaining traction in NMR spectroscopy, driven by their utility for processing nonuniformly sampled data. These methods afford new opportunities for optimizing experiment time, resolution, and sensitivity of multidimensional NMR experiments, but they also pose significant challenges not encountered with the discrete Fourier transform. A brief history of non-Fourier methods in NMR serves to place different approaches in context. Non-Fourier methods reflect broader trends in the growing importance of computation in NMR, and offer insights for future software development. Copyright © 2017 Elsevier Inc. All rights reserved.
Training set optimization under population structure in genomic selection
USDA-ARS?s Scientific Manuscript database
The optimization of the training set (TRS) in genomic selection (GS) has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the Coefficient of D...
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and measurement noise is developed.
Moreno, Y; Moreno-Mesonero, L; Amorós, I; Pérez, R; Morillo, J A; Alonso, J L
2018-01-01
Understanding waterborne protozoan parasites (WPPs) diversity has important implications in public health. In this study, we evaluated a NGS-based method as a detection approach to identify simultaneously the most important WPPs using 18S rRNA high-throughput sequencing. A set of primers to target the V4 18S rRNA region of WPPs such as Cryptosporidium spp., Giardia sp., Blastocystis sp., Entamoeba spp., Toxoplasma sp. and free-living amoebae (FLA) was designed. In order to optimize PCR conditions before sequencing, both a mock community with a defined composition of representative WPPs and a real water sample inoculated with specific WPPs DNA were prepared. Using the method proposed in this study, we have detected the presence of Giardia intestinalis, Acanthamoeba castellanii, Toxoplasma gondii, Entamoeba histolytica and Blastocystis sp. at species level in real irrigation water samples. Our results showed that untreated surface irrigation water in open fields can provide an important source of WPPs. Therefore, the methodology proposed in this study can establish a basis for an accurate and effective diagnostic of WPPs to provide a better understanding of the risk associated with irrigation water. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
Clime, Liviu; Hoa, Xuyen D; Corneau, Nathalie; Morton, Keith J; Luebbert, Christian; Mounier, Maxence; Brassard, Daniel; Geissler, Matthias; Bidawid, Sabah; Farber, Jeff; Veres, Teodor
2015-02-01
Detecting pathogenic bacteria in food or other biological samples with lab-on-a-chip (LOC) devices requires several sample preparation steps prior to analysis, which commonly involve cleaning complex sample matrices of large debris. This often underestimated step is important to prevent these larger particles from clogging devices and to preserve initial concentrations when LOC techniques are used to concentrate or isolate smaller target microorganisms for downstream analysis. In this context, we developed a novel microfluidic system for membrane-free cleaning of biological samples from debris particles by combining hydrodynamic focusing and inertial lateral migration effects. The microfluidic device is fabricated from thermoplastic elastomers that are compatible with thermoforming fabrication techniques, leading to low-cost single-use devices. Microfluidic chip design and pumping protocols are optimized by investigating diffusive losses numerically with coupled Navier-Stokes and convective-diffusion theoretical models. Stability of inertial lateral migration and separation of debris is assessed through fluorescence microscopy measurements with labelled particles serving as a model system. Efficiency of debris cleaning is experimentally investigated by monitoring microchip outlets with in situ optical turbidity sensors, while retention of targeted pathogens (i.e., Listeria monocytogenes) within the sample stream is assessed through bacterial culture techniques. Optimized pumping protocols can remove up to 50% of debris from ground beef samples, while up to 95% of microorganisms are preserved in relatively clean samples. However, comparison between inoculated turbid and clean samples (i.e., with and without ground beef debris) indicates some degree of interference between debris inertial lateral migration and hydrodynamic focusing of small microorganisms. Although this interference can lead to a significant decrease in chip performance through loss of target bacteria, it remains possible to reach 70% sample recovery and more than 50% debris removal even in the most turbid samples tested. Due to the relatively simple design, the robustness of the inertial migration effect itself, the high operational flow rates and fabrication methods that leverage low-cost materials, the proposed device can have an impact on a wide range of applications where high-throughput separation of particles and biological species is of interest.
The External Quality Assessment Scheme (EQAS): Experiences of a medium sized accredited laboratory.
Bhat, Vivek; Chavan, Preeti; Naresh, Chital; Poladia, Pratik
2015-06-15
We present our experiences of EQAS, analyze the result discrepancies, review the corrective actions, and propose strategies for risk identification and prevention of potential errors in a medical laboratory. For hematology, EQAS samples - blood, peripheral and reticulocyte smears - were received quarterly every year. All the blood samples were processed on an HMX hematology analyzer by Beckman-Coulter. For clinical chemistry, lyophilized samples were received and were processed on Siemens Dimension Xpand and RXL analyzers. For microbiology, EQAS samples were received quarterly every year as lyophilized strains along with smears and serological samples. In hematology no outliers were noted for reticulocyte and peripheral smear examination. Only one outlier was noted for CBC. In clinical chemistry outliers (SDI ≥ 2) were noted in 7 samples (23 parameters) out of total 36 samples (756 parameters) processed. Thirteen of these parameters were analyzed as random errors, 3 as transcriptional errors, and seven instances of systemic error were noted. In microbiology, one discrepancy was noted in isolate identification and in the grading of smears for AFB by Ziehl-Neelsen stain. EQAS along with IQC is a very important tool for maintaining optimal quality of services. Copyright © 2015 Elsevier B.V. All rights reserved.
Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul
2013-03-01
The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to suitably record the local species composition, and (4) separate trap groups by a distance greater than 5-10 km to avoid spatial autocorrelation. For the evaluation of other sampling protocols we recommend first identifying the elements of the sampling design that could affect the sampling effort (the number of traps, sampling duration, type and proportion of bait) and their spatial distribution (spatial arrangement of the traps), and then evaluating how they affect richness, abundance and species composition estimates.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, ..., nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
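The conditional-bias illusion mentioned above is easy to reproduce. The sketch below is a toy simulation under assumed normal data, a hypothetical one-sided stopping rule, and three candidate sample sizes; it shows that the sample average is nearly unbiased marginally while the averages conditional on the realized sample size appear strongly shifted.

```python
# Toy group sequential simulation; stopping rule and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, sizes = 0.0, [20, 40, 60]                   # candidate sizes n1 < n2 < n3
est, realized_n = [], []
for _ in range(20000):
    x = rng.normal(mu, 1.0, sizes[-1])
    n = sizes[-1]
    for ni in sizes[:-1]:
        if x[:ni].mean() > 1.96 / np.sqrt(ni):  # one-sided early stopping
            n = ni
            break
    est.append(x[:n].mean())                    # the ordinary sample average
    realized_n.append(n)

est, realized_n = np.array(est), np.array(realized_n)
print("marginal bias:", est.mean() - mu)        # small
for ni in sizes:                                # conditional averages look biased
    print(f"mean | N={ni}:", est[realized_n == ni].mean())
```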
Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas
2015-08-01
The high-pressure processing conditions were optimized for pineapple puree within the domain of 400-600 MPa, 40-60 °C, and 10-20 min using the response surface methodology (RSM). The target was to maximize the inactivation of polyphenoloxidase (PPO) along with a minimal loss in beneficial bromelain (BRM) activity, ascorbic acid (AA) content, antioxidant capacity, and color in the sample. The optimum condition was 600 MPa, 50 °C, and 13 min, having the highest desirability of 0.604, which resulted in 44% PPO and 47% BRM activities. However, 93% antioxidant activity and 85% AA were retained in optimized sample with a total color change (∆E*) value less than 2.5. A 10-fold reduction in PPO activity was obtained at 600 MPa/70 °C/20 min; however, the thermal degradation of nutrients was severe at this condition. Fuzzy mathematical approach confirmed that sensory acceptance of the optimized sample was close to the fresh sample; whereas, the thermally pasteurized sample (treated at 0.1 MPa, 95 °C for 12 min) had the least sensory score as compared to others. © 2015 Institute of Food Technologists®
Directed Diffusion Modelling for Tesso Nilo National Parks Case Study
NASA Astrophysics Data System (ADS)
Yasri, Indra; Safrianti, Ery
2018-01-01
Directed Diffusion (DD) has the ability to achieve energy efficiency in Wireless Sensor Networks (WSN). This paper proposes Directed Diffusion (DD) modelling for the Tesso Nilo National Parks (TNNP) case study. There are 4 stages of scenarios involved in this modelling. It starts by appointing the sampling area through GPS coordinates. The sampling area is determined by optimization processes from 500m x 500m up to 1000m x 1000m with 100m increments in between. The next stage is sensor node placement. Sensor nodes are distributed in the sampling area with three different quantities, i.e. 20 nodes, 30 nodes and 40 nodes, and one of those quantities is chosen as the optimized sensor node placement. The third stage is to implement all scenarios from stages 1 and 2 in the DD model. In the last stage, the evaluation process identifies the most energy-efficient combination of optimized sampling area and optimized sensor node placement on the Directed Diffusion (DD) routing protocol. The results show that the combination of a 500m x 500m sampling area and 20 nodes achieves the energy efficiency needed to support a forest fire prevention system at Tesso Nilo National Parks.
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic, which follows a normal distribution with unknown mean and variance. The optimal parameters of the proposed plan are obtained from a nonlinear optimization model that satisfies the given producer's risk and consumer's risk simultaneously and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
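The structure of this optimization, minimizing the sample size subject to producer's and consumer's risk constraints, can be illustrated with a deliberately simplified stand-in. The sketch below finds the smallest single-sampling attributes plan (n, c) whose binomial OC curve satisfies assumed risk levels; the actual MDS plan is a variables-type plan built on the coefficient of variation, so every quantity here is an illustrative assumption rather than the paper's model.

```python
# Simplified risk-constrained plan search on a binomial OC curve (stand-in).
from scipy.stats import binom

p1, alpha = 0.01, 0.05   # acceptable quality level, producer's risk
p2, beta = 0.05, 0.10    # limiting quality level, consumer's risk

best = None
for n in range(1, 1000):                    # smallest n satisfying both risks
    for c in range(0, n + 1):
        accept_good = binom.cdf(c, n, p1)   # OC curve evaluated at p1
        accept_bad = binom.cdf(c, n, p2)    # OC curve evaluated at p2
        if accept_good >= 1 - alpha and accept_bad <= beta:
            best = (n, c)
            break
    if best:
        break
print("optimal (n, c):", best)  # minimal sample size and acceptance number
```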
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, its concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.
Beiraghi, Asadollah; Shokri, Masood
2018-02-01
In the present study a new centrifuge-less dispersive liquid-liquid microextraction technique based on the application of a new task specific magnetic polymeric ionic liquid (TSMPIL) as a chelating and extraction solvent for selective preconcentration of trace amounts of potassium from oil samples is developed for the first time. After extraction, the fine droplets of TSMPIL were transferred into an eppendorf tube and diluted to 500 µL using distilled water. Then, the enriched analyte was determined by flame atomic emission spectroscopy (FAES). Several important factors affecting both the complexation and extraction efficiency, including extraction time, rate of vortex agitator, amount of carbonyl iron powder, pH of sample solution, volume of ionic liquid as well as effects of interfering species, were investigated and optimized. Under the optimal conditions, the limits of detection (LOD) and quantification (LOQ) were 0.5 and 1.6 µg L(-1), respectively, with a preconcentration factor of 128. The precision (RSD %) for seven replicate determinations at 10 µg L(-1) of potassium was better than 3.9%. The relative recoveries for the spiked samples were in the acceptable range of 95-104%. The results demonstrated that no remarkable interferences are created by other various ions in the determination of potassium, so that the tolerance limits (W_Ion/W_K) of major cations and anions were in the range of 2500-10,000. The proposed method was successfully applied for the analysis of potassium in some oil samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Pressl, B.; Laiho, K.; Chen, H.; Günthner, T.; Schlager, A.; Auchter, S.; Suchomel, H.; Kamp, M.; Höfling, S.; Schneider, C.; Weihs, G.
2018-04-01
Semiconductor alloys of aluminum gallium arsenide (AlGaAs) exhibit strong second-order optical nonlinearities. This makes them prime candidates for the integration of devices for classical nonlinear optical frequency conversion or photon-pair production, for example, through the parametric down-conversion (PDC) process. Within this material system, Bragg-reflection waveguides (BRW) are a promising platform, but the specifics of the fabrication process and the peculiar optical properties of the alloys require careful engineering. Previously, BRW samples have been mostly derived analytically from design equations using a fixed set of aluminum concentrations. This approach limits the variety and flexibility of the device design. Here, we present a comprehensive guide to the design and analysis of advanced BRW samples and show how to automate these tasks. Then, nonlinear optimization techniques are employed to tailor the BRW epitaxial structure towards a specific design goal. As a demonstration of our approach, we search for the optimal effective nonlinearity and mode overlap, which indicate an improved conversion efficiency or PDC pair production rate. However, the methodology itself is much more versatile, as any parameter related to the optical properties of the waveguide, for example the phasematching wavelength or modal dispersion, may be incorporated as a design goal. Further, we use the developed tools to gain reliable insight into the fabrication tolerances and the challenges posed by real-world sample imperfections. One such example is the common thickness gradient along the wafer, which strongly influences the photon-pair rate and spectral properties of the PDC process. Detailed models and a better understanding of the optical properties of a realistic BRW structure are not only useful for investigating current samples, but also provide important feedback for the design and fabrication of potential future turn-key devices.
Predicting Presynaptic and Postsynaptic Neurotoxins by Developing Feature Selection Technique
Yang, Yunchun; Zhang, Chunmei; Chen, Rong; Huang, Po
2017-01-01
Presynaptic and postsynaptic neurotoxins are proteins which act at the presynaptic and postsynaptic membrane. Correctly predicting presynaptic and postsynaptic neurotoxins will provide important clues for drug-target discovery and drug design. In this study, we developed a theoretical method to discriminate presynaptic neurotoxins from postsynaptic neurotoxins. A strict and objective benchmark dataset was constructed to train and test our proposed model. The dipeptide composition was used to formulate neurotoxin samples. The analysis of variance (ANOVA) was proposed to find out the optimal feature set which can produce the maximum accuracy. In the jackknife cross-validation test, the overall accuracy of 94.9% was achieved. We believe that the proposed model will provide important information to study neurotoxins. PMID:28303250
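A minimal sketch of the two computational ingredients named above, the 400-dimensional dipeptide composition and ANOVA-based feature ranking, is given below; the random sequences and labels are toy stand-ins for the curated neurotoxin benchmark.

```python
# Dipeptide composition + ANOVA F-score ranking; toy data, not the benchmark.
import itertools
import numpy as np
from sklearn.feature_selection import f_classif

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in itertools.product(AA, repeat=2)]  # 400 features

def dipeptide_composition(seq):
    counts = {d: 0 for d in DIPEPTIDES}
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return np.array([counts[d] / total for d in DIPEPTIDES])

rng = np.random.default_rng(2)
seqs = ["".join(rng.choice(list(AA), 80)) for _ in range(60)]  # toy sequences
X = np.array([dipeptide_composition(s) for s in seqs])
y = rng.integers(0, 2, 60)             # toy labels: pre- vs postsynaptic

F, p = f_classif(X, y)                 # one-way ANOVA F-score per dipeptide
ranked = np.argsort(F)[::-1]           # candidates for the optimal feature set
print("top 5 dipeptides:", [DIPEPTIDES[i] for i in ranked[:5]])
```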
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for both non-correlated and correlated data, using the Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. The investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters are found at the smallest possible object distance, at an angle of incidence close to 0°, and with the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
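For readers unfamiliar with the design, the sketch below draws a Latin hypercube sample over the four scanning parameters using scipy's qmc module; the parameter ranges are illustrative assumptions, not the values used in this study.

```python
# Latin hypercube design over four scanning parameters (ranges are assumed).
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=3)
unit = sampler.random(n=100)                     # 100 samples in [0, 1)^4
lower = [5.0, 0.0, 0.0, 0.005]   # distance [m], angle [deg], quality, width [m]
upper = [100.0, 60.0, 1.0, 0.05]
samples = qmc.scale(unit, lower, upper)          # map to parameter ranges
print(samples[:3])
# Each column covers its range in a stratified fashion, which keeps the
# Monte Carlo variance analysis efficient at a modest number of runs.
```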
Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter
NASA Astrophysics Data System (ADS)
Guo, Xiang-Gui; Yang, Guang-Hong
2012-04-01
This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of correlation filters is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
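The "consistency" at the heart of CADIS can be shown in a few lines. On a toy one-dimensional grid with made-up adjoint values, the sketch below builds the biased source and verifies that source particles are born exactly at the weight-window centers.

```python
# CADIS construction on a toy 1-D grid; adjoint values are made up.
import numpy as np

q = np.array([1.0, 0.5, 0.2, 0.05])          # true source by cell
phi_adj = np.array([0.01, 0.1, 1.0, 10.0])   # adjoint "importance" by cell

R = np.sum(q * phi_adj)                      # estimated detector response
q_hat = q * phi_adj / R                      # biased (importance-weighted) source
w_birth = q / q_hat                          # birth weight of sampled particles
w_center = R / phi_adj                       # weight-window centers (same values)

print("biased source:", q_hat)
print("birth weights:", w_birth)             # equal to w_center: consistent
```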
Antecedent and Consequence of School Academic Optimism and Teachers' Academic Optimism Model
ERIC Educational Resources Information Center
Hong, Fu-Yuan
2017-01-01
The main purpose of this research was to examine the relationships among school principals' transformational leadership, school academic optimism, teachers' academic optimism and teachers' professional commitment. This study conducted a questionnaire survey on 367 teachers from 20 high schools in Taiwan by random sampling, using principals'…
Optimization and performance of the Robert Stobie Spectrograph Near-InfraRed detector system
NASA Astrophysics Data System (ADS)
Mosby, Gregory; Indahl, Briana; Eggen, Nathan; Wolf, Marsha; Hooper, Eric; Jaehnig, Kurt; Thielman, Don; Burse, Mahesh
2018-01-01
At the University of Wisconsin-Madison, we are building and testing the near-infrared (NIR) spectrograph for the Southern African Large Telescope, RSS-NIR. RSS-NIR will be an enclosed, cooled integral field spectrograph. The RSS-NIR detector system uses a HAWAII-2RG (H2RG) HgCdTe detector from Teledyne controlled by the SIDECAR ASIC and an Inter-University Centre for Astronomy and Astrophysics (IUCAA) ISDEC card. We have successfully characterized and optimized the detector system and report on the optimization steps and performance of the system. We have reduced the CDS read noise to ~20 e- for 200 kHz operation by optimizing ASIC settings. We show an additional factor of 3 reduction of read noise using Fowler sampling techniques and a factor of 2 reduction using up-the-ramp group sampling techniques. We also provide calculations to quantify the conditions for sky-limited observations using these sampling techniques.
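As a rough numerical illustration of the Fowler technique mentioned above, the sketch below simulates a ramp with made-up read noise and signal levels and shows that averaging N non-destructive reads at each end suppresses read noise by roughly a factor of sqrt(N) relative to plain CDS.

```python
# Fowler-N vs CDS read-noise sketch; all noise and signal numbers are assumed.
import numpy as np

rng = np.random.default_rng(8)
read_noise, n_trials = 20.0, 20000           # e- rms per read, repetitions

def fowler_estimate(N):
    """Signal estimates from averaging N reads at each end of the ramp."""
    first = rng.normal(0.0, read_noise, (n_trials, N)).mean(axis=1)
    last = rng.normal(1000.0, read_noise, (n_trials, N)).mean(axis=1)
    return last - first

print("CDS noise :", fowler_estimate(1).std())   # ~ read_noise * sqrt(2)
print("Fowler-8  :", fowler_estimate(8).std())   # ~ read_noise * sqrt(2/8)
```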
Hofer, Jan; Busch, Holger; Šolcová, Iva Poláčková; Tavel, Peter
2017-04-01
It is often argued that declining health in elderly people makes death more salient and threatening. However, we argue that health, optimism, and social support interact to predict fear of death in samples from Cameroon, the Czech Republic, and Germany. Low health was associated with enhanced fear of death for participants who received only little social support. As the measure of optimism did not comply with psychometric requirements in the Cameroonian sample, the three-way interaction was tested only in the Czech and German samples. It was found that the two-way interaction was further qualified by optimism in that low health was associated with enhanced fear of death for participants with little social support unless they reported pronounced optimism. Thus, internal and external resources, respectively, can serve to buffer the effect of declining health on the fear of death in the elderly.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
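The likelihood-ratio mechanics described above can be made concrete with a textbook example. The sketch below estimates a small tail probability of a Gaussian random walk by exponential tilting; the walk is a stand-in for particle transport through a shield, and all constants are illustrative.

```python
# Importance sampling with a likelihood-ratio correction for a rare event.
import numpy as np

rng = np.random.default_rng(4)
n, a = 30, 25.0                 # estimate P(S_n > a), S_n a sum of N(0,1) steps
theta = a / n                   # exponential tilt centering the walk on the event

M = 100000
x = rng.normal(theta, 1.0, size=(M, n))        # sample from the tilted density
S = x.sum(axis=1)
# Likelihood ratio of N(0,1)^n to N(theta,1)^n on the sampled paths:
lr = np.exp(-theta * S + n * theta**2 / 2)
est = np.mean((S > a) * lr)                    # unbiased estimate of P(S_n > a)
print("IS estimate:", est)      # crude Monte Carlo would almost never hit this
```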
Sampling methods for the study of pneumococcal carriage: a systematic review.
Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C
2012-11-06
Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials, to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae from collection to storage in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on the 1st June 2012 to identify published studies that collected pneumococcal carriage isolates, conducted after the publication of the WHO standard method. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there is insufficient experimental data that supports the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis and in the evaluation of pneumococcal vaccine impact and extrapolation to pneumococcal disease control; therefore, further in-depth comparisons would be of value. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yang, Ben; Zhang, Yaocun; Qian, Yun; ...
2014-03-26
Reasonably modeling the magnitude, south-north gradient and seasonal propagation of precipitation associated with the East Asian Summer Monsoon (EASM) is a challenging task in the climate community. In this study we calibrate five key parameters in the Kain-Fritsch convection scheme in the WRF model using an efficient importance-sampling algorithm to improve the EASM simulation. We also examine the impacts of the improved EASM precipitation on other physical processes. Our results suggest similar model sensitivity and values of optimized parameters across years with different EASM intensities. By applying the optimal parameters, the simulated precipitation and surface energy features are generally improved. The parameters related to downdraft, entrainment coefficients and CAPE consumption time (CCT) most sensitively affect the precipitation and atmospheric features. A larger downdraft coefficient or CCT decreases the heavy rainfall frequency, while a larger entrainment coefficient delays the convection development but builds up more potential for heavy rainfall events, causing a possible northward shift of the rainfall distribution. The CCT is the most sensitive parameter over the wet region, and the downdraft parameter plays a more important role over the drier northern region. Long-term simulations confirm that by using the optimized parameters the precipitation distributions are better simulated in both weak and strong EASM years. Due to more reasonably simulated precipitation condensational heating, the monsoon circulations are also improved. Lastly, by using the optimized parameters the biases in the retreat (onset) of Mei-yu (northern China rainfall) simulated by the standard WRF model are evidently reduced and the seasonal and sub-seasonal variations of the monsoon precipitation are remarkably improved.
NASA Astrophysics Data System (ADS)
Jiang, Hao; Lu, Jiangang
2018-05-01
Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (Rp2) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest Rp2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn.
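A rough sketch of the first two stages of this pipeline, correlation-coefficient band selection followed by partial least squares regression, is given below on synthetic spectra. The RBFNN stage is omitted, and the data, band counts, and component numbers are assumptions rather than the authors' exact settings.

```python
# CC band selection + PLSR on toy spectra; not the authors' full CC-PLSR-RBFNN.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(80, 700))                 # toy spectra: 700 wavelengths
y = X[:, 100] * 2 + X[:, 300] + rng.normal(0, 0.1, 80)  # toy starch content

# Rank wavelengths by |correlation coefficient| with the reference values.
cc = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(cc)[::-1][:50]               # retain 50 best-correlated bands

Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
rmsep = np.sqrt(np.mean((pls.predict(Xte).ravel() - yte) ** 2))
print("RMSEP on held-out samples:", rmsep)
```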
Robust learning for optimal treatment decision with NP-dimensionality
Shi, Chengchun; Song, Rui; Lu, Wenbin
2016-01-01
In order to identify important variables involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order of the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with the non-concave penalty function, where the conditional mean model of the response given predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717
NASA Astrophysics Data System (ADS)
Duan, Libin; Xiao, Ning-cong; Li, Guangyao; Cheng, Aiguo; Chen, Tao
2017-07-01
Tailor-rolled blank thin-walled (TRB-TH) structures have become important vehicle components owing to their advantages of light weight and crashworthiness. The purpose of this article is to provide an efficient lightweight design for improving the energy-absorbing capability of TRB-TH structures under dynamic loading. A finite element (FE) model for TRB-TH structures is established and validated by performing a dynamic axial crash test. Different material properties for individual parts with different thicknesses are considered in the FE model. Then, a multi-objective crashworthiness design of the TRB-TH structure is constructed based on the ɛ-support vector regression (ɛ-SVR) technique and the non-dominated sorting genetic algorithm-II. The key parameters (C, ɛ and σ) are optimized to further improve the predictive accuracy of ɛ-SVR under limited sample points. Finally, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used to rank the solutions in the Pareto-optimal frontiers and find the best compromise optima. The results demonstrate that the light weight and crashworthiness performance of the optimized TRB-TH structures are superior to their uniform-thickness counterparts. The proposed approach provides useful guidance for designing TRB-TH energy absorbers for vehicle bodies.
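The final ranking step can be illustrated concretely. The sketch below applies TOPSIS to a toy Pareto set; the candidate designs, objective senses (mass to minimize, energy absorption to maximize), and equal weights are all assumptions.

```python
# TOPSIS ranking of a toy Pareto set; designs and weights are illustrative.
import numpy as np

F = np.array([[10.2, 5.1],            # rows: candidate designs from a frontier
              [11.0, 5.9],            # columns: mass [kg], energy absorbed [kJ]
              [12.5, 6.4]])
benefit = np.array([False, True])     # senses: minimize mass, maximize absorption
w = np.array([0.5, 0.5])

R = F / np.linalg.norm(F, axis=0)     # vector-normalize each criterion
V = R * w
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - worst, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("best compromise design:", np.argmax(closeness))
```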
Silva, Catarina L; Gonçalves, João L; Câmara, José S
2012-08-20
A new approach based on microextraction by packed sorbent (MEPS) and a reversed-phase high-throughput ultra high pressure liquid chromatography (UHPLC) method that uses gradient elution and diode array detection to quantitate three biologically active flavonols in wines, myricetin, quercetin, and kaempferol, is described. In addition to performing routine experiments to establish the validity of the assay to internationally accepted criteria (selectivity, linearity, sensitivity, precision, accuracy), experiments are included to assess the effect of important experimental parameters, such as the type of sorbent material (C2, C8, C18, SIL, and C8/SCX), number of extraction cycles (extract-discard), elution volume, sample volume, and ethanol content, on the MEPS performance. The optimal conditions of MEPS extraction were obtained using the C8 sorbent and small sample volumes (250 μL) in five extraction cycles and in a short time period (about 5 min for the entire sample preparation step). Under optimized conditions, excellent linearity (R² values > 0.9963), limits of detection of 0.006 μg mL(-1) (quercetin) to 0.013 μg mL(-1) (myricetin) and precision within 0.5-3.1% were observed for the target flavonols. The average recoveries of myricetin, quercetin and kaempferol for real samples were 83.0-97.7%, with relative standard deviations (RSD, %) lower than 1.6%. The results obtained showed that the most abundant flavonol in the analyzed samples was myricetin (5.8 ± 3.7 μg mL(-1)). Quercetin (0.97 ± 0.41 μg mL(-1)) and kaempferol (0.66 ± 0.24 μg mL(-1)) were found in lower concentrations. The optimized MEPS(C8) method was compared with a reversed-phase solid-phase extraction (SPE) procedure using as reference sorbent a macroporous copolymer made from a balanced ratio of two monomers, the lipophilic divinylbenzene and the hydrophilic N-vinylpyrrolidone (Oasis HLB). The MEPS(C8) approach offers an attractive alternative for the analysis of flavonols in wines, providing a number of advantages including the highest extraction efficiency (from 85.9 ± 0.9% to 92.1 ± 0.5%) in the shortest extraction time with low solvent consumption, fast sample throughput, a more environmentally friendly procedure and ease of use. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly manifests as overfitting when using high dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak shift problem. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods (multiple linear analysis, partial least squares regression, and lasso regression). Furthermore, fusion of the spectral information with spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
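A minimal sketch of fused lasso band selection on toy spectra is given below, using cvxpy; the penalty weights and data are illustrative assumptions, chosen only to show how the l1 term yields sparsity and the total-variation (fusion) term groups adjacent bands.

```python
# Fused lasso band selection via cvxpy; toy spectra, illustrative penalties.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 150))                 # 40 plots, 150 spectral bands
true = np.zeros(150)
true[60:70] = 0.8                              # one contiguous active region
y = X @ true + rng.normal(0, 0.5, 40)          # toy above-ground biomass

beta = cp.Variable(150)
lam1, lam2 = 0.1, 0.5                          # illustrative penalty weights
obj = (cp.sum_squares(y - X @ beta) / 80
       + lam1 * cp.norm1(beta)                 # sparsity
       + lam2 * cp.norm1(cp.diff(beta)))       # fusion of adjacent coefficients
cp.Problem(cp.Minimize(obj)).solve()
selected = np.where(np.abs(beta.value) > 1e-3)[0]
print("selected bands:", selected)             # contiguous groups of bands
```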
Optimal prediction of the number of unseen species
Orlitsky, Alon; Suresh, Ananda Theertha; Wu, Yihong
2016-01-01
Estimating the number of unseen species is an important problem in many scientific endeavors. Its most popular formulation, introduced by Fisher et al. [Fisher RA, Corbet AS, Williams CB (1943) J Animal Ecol 12(1):42−58], uses n samples to predict the number U of hitherto unseen species that would be observed if t⋅n new samples were collected. Of considerable interest is the largest ratio t between the number of new and existing samples for which U can be accurately predicted. In seminal works, Good and Toulmin [Good I, Toulmin G (1956) Biometrika 43(102):45−63] constructed an intriguing estimator that predicts U for all t ≤ 1. Subsequently, Efron and Thisted [Efron B, Thisted R (1976) Biometrika 63(3):435−447] proposed a modification that empirically predicts U even for some t > 1, but without provable guarantees. We derive a class of estimators that provably predict U all of the way up to t ∝ log n. We also show that this range is the best possible and that the estimator’s mean-square error is near optimal for any t. Our approach yields a provable guarantee for the Efron−Thisted estimator and, in addition, a variant with stronger theoretical and experimental performance than existing methodologies on a variety of synthetic and real datasets. The estimators are simple, linear, computationally efficient, and scalable to massive datasets. Their performance guarantees hold uniformly for all distributions, and apply to all four standard sampling models commonly used across various scientific disciplines: multinomial, Poisson, hypergeometric, and Bernoulli product. PMID:27830649
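The classical estimator referred to above is short enough to state in code. The sketch below computes the Good-Toulmin prediction from the frequency-of-frequencies "fingerprint" on toy Zipf-distributed data; the population and sample sizes are assumptions for illustration.

```python
# Good-Toulmin unseen-species estimator on toy data: with Phi_i the number of
# species seen exactly i times in n samples, U_hat(t) = -sum_i (-t)^i * Phi_i.
import numpy as np

rng = np.random.default_rng(7)
population = rng.zipf(2.0, size=5000)           # toy species labels
observed = rng.choice(population, size=500)     # n = 500 existing samples

labels, counts = np.unique(observed, return_counts=True)
max_i = counts.max()
Phi = np.bincount(counts, minlength=max_i + 1)  # Phi[i] = #species seen i times

def good_toulmin(t):
    i = np.arange(1, max_i + 1)
    return -np.sum((-t) ** i * Phi[1:])

print("predicted new species at t=0.5:", good_toulmin(0.5))
print("predicted new species at t=1.0:", good_toulmin(1.0))
# For t > 1 the alternating series blows up, which is why smoothed variants
# (Efron-Thisted, and the provable estimators above) are needed.
```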
Brahman, Kapil Dev; Kazi, Tasneem Gul; Baig, Jameel Ahmed; Afridi, Hassan Imran; Arain, Sadaf Sadia; Saraj, Saima; Arain, Muhammad B; Arain, Salma Aslam
2016-05-01
Simultaneous removal of fluoride (F(-)) and the inorganic arsenic species As(III) and As(V) from aqueous samples has been performed using an economic indigenous biosorbent (stem of Tecomella undulata). The inorganic As species in water samples before and after biosorption were determined by cloud point and solid phase extraction methods, while F(-) was determined by ion chromatography. Batch experiments were carried out to evaluate the equilibrium adsorption isotherms for As(III), As(V) and F(-) in aqueous solutions. Several parameters of biosorption were optimized, such as pH, biomass dosage, analyte concentration, time and temperature. The surface of the biosorbent was characterized by SEM and FTIR. The FTIR study indicated the presence of carbonyl and amine functional groups, which may have an important role in the sorption/removal of these ions. Thermodynamic and kinetic studies indicated that the biosorption of As(III), As(V) and F(-) was spontaneous, exothermic and followed pseudo-second-order kinetics. Meanwhile, the interference study revealed that there was no significant effect of co-existing ions on the removal of inorganic As species and F(-) from aqueous samples (p > 0.05). It was observed that the indigenous biosorbent material simultaneously adsorbed As(III) (108 μg g(-1)), As(V) (159 μg g(-1)) and F(-) (6.16 mg g(-1)) from water under optimized conditions. The proposed biosorbent was effectively regenerated and efficiently reused for several experiments to remove As(III), As(V) and F(-) from a real water sample collected from an endemic area of Pakistan. Copyright © 2016 Elsevier Ltd. All rights reserved.
Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang
2017-03-01
Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination in conjunction with a smart sequential enzymatic derivatization scheme-involving a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase-for step-by-step determination of sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limit reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting metabolic disturbance through intra-tumor administration of a glucose solution through a prior-implanted cannula, were conducted to demonstrate the system's applicability. Our results evidently indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
Gao, Xue; Guo, Hao; Wang, Junwei; Zhao, Qingbiao
2018-01-19
In this study, a sensitive and fast procedure of ultrasonic-assisted dispersive liquid-liquid microextraction (UADLLME) coupled with gas chromatography-tandem mass spectrometry (GC-MS/MS) was developed for the determination of major pyrethroid pesticides (permethrin, tetramethrin, bifenthrin, fenvalerate, flucythrinate, fluvalinate, fenpropathrin, deltamethrin, and cyhalothrin) in blood samples. Response surface methodology (RSM) combined with a Box-Behnken design (BBD) and ANOVA was used to optimize the key factors affecting the extraction efficiency of the UADLLME procedure. Target compounds were analyzed by GC-MS/MS. Under the optimal conditions, good linearity (R² > 0.99) was achieved for all the analytes in the concentration range of 0.5 to 100 μg L⁻¹. The recoveries for spiked samples at three concentration levels were between 70.2% and 91.8%, with relative standard deviations (RSD) lower than 10%. Very low limits of detection (LODs) and limits of quantification (LOQs), ranging from 0.01 to 0.1 μg L⁻¹ and from 0.03 to 0.3 μg L⁻¹ respectively, were achieved. This method was successfully applied to the determination of low concentrations of pyrethroids in blood samples from real forensic cases. High sensitivity, fast determination, simplicity of operation, small sample volume, and low usage of organic solvents are the advantages of this method. This methodology is of great value for the sensitive and rapid determination of pesticide residues and metabolites, the study of pesticide-residue behavior in the human body, and application in real forensic cases. Copyright © 2018 John Wiley & Sons, Ltd.
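For readers unfamiliar with the Box-Behnken design, the sketch below builds the standard three-factor BBD (twelve edge midpoints plus center points, coded -1/0/+1) and fits a quadratic response surface by least squares. The simulated recoveries are placeholders, not the paper's experimental data:

```python
import itertools
import numpy as np

# 12 edge midpoints: two factors at +/-1, the third at 0
pairs = [(0, 1), (0, 2), (1, 2)]
runs = []
for i, j in pairs:
    for a, b in itertools.product((-1, 1), repeat=2):
        row = [0, 0, 0]
        row[i], row[j] = a, b
        runs.append(row)
runs += [[0, 0, 0]] * 3                       # three center points
X = np.array(runs, dtype=float)

# Simulated recovery (%) with a quadratic optimum, plus noise
rng = np.random.default_rng(1)
y = (80 - 5*X[:, 0]**2 - 3*X[:, 1]**2 - 2*X[:, 2]**2 + 4*X[:, 0]
     + rng.normal(0, 0.5, len(X)))

def model_matrix(X):
    """Intercept, linear, squared, and two-way interaction terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, k] for k in range(3)]
    cols += [X[:, k]**2 for k in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in pairs]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(beta, 2))
```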
Using geostatistics to evaluate cleanup goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcon, M.F.; Hopkins, L.P.
1995-12-01
Geostatistical analysis is a powerful predictive tool typically used to define spatial variability in environmental data. The information from a geostatistical analysis using kriging, a geostatistical tool, can be taken a step further to optimize sampling location and frequency and to help quantify sampling uncertainty in both the remedial investigation and the remedial design at a hazardous waste site. Geostatistics were used to quantify sampling uncertainty in attainment of a risk-based cleanup goal and to determine the optimal sampling frequency necessary to delineate the horizontal extent of impacted soils at a Gulf Coast waste site.
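As a generic illustration of the kriging machinery such studies lean on (textbook ordinary kriging, not the site-specific model), the sketch below solves the kriging system with a Lagrange multiplier and reports the kriging variance, which is precisely the sampling-uncertainty measure exploited here. Coordinates and concentrations are invented:

```python
import numpy as np

def gamma(h, sill=1.0, a=40.0):
    """Exponential variogram model."""
    return sill * (1.0 - np.exp(-h / a))

pts = np.array([[0.0, 0], [30, 10], [10, 40], [50, 50]])  # sample x, y (m)
vals = np.array([12.0, 8.5, 15.2, 6.1])                   # concentrations

def krige(x0):
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[-1, -1] = 0.0                      # Lagrange row/column
    b = np.append(gamma(np.linalg.norm(pts - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ vals
    variance = w @ b                     # kriging variance incl. multiplier
    return estimate, variance

est, var = krige(np.array([25.0, 25.0]))
print(f"estimate = {est:.2f}, kriging std = {np.sqrt(var):.2f}")
```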
NASA Astrophysics Data System (ADS)
Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza
2018-03-01
This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing the results of a calibrated numerical water quality simulation model. With reference to the value of information theory, the water quality of every checkpoint, with a specific prior probability, differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with the maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy theoretic approach. As the results of the two methodologies are partially different, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of the WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
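The Bayes-update-then-VOI step can be made concrete with a toy two-state example: the value of sampling a checkpoint is the prior Bayes risk minus the expected posterior Bayes risk. This is a minimal sketch of the general VOI recipe with invented probabilities and losses; the paper's actual state space and simulation-model likelihoods are far richer:

```python
import numpy as np

prior = np.array([0.6, 0.4])             # P(clean), P(polluted)
like = np.array([[0.9, 0.1],             # P(reading | clean)
                 [0.2, 0.8]])            # P(reading | polluted)
loss = np.array([[0.0, 10.0],            # act "no treatment": loss by state
                 [2.0,  0.0]])           # act "treat": loss by state

def bayes_risk(p):
    """Expected loss of the best action under belief p."""
    return (loss @ p).min()

evidence = like.T @ prior                # P(reading)
risk_after = 0.0
for r, p_r in enumerate(evidence):
    posterior = like[:, r] * prior / p_r # Bayes' theorem
    risk_after += p_r * bayes_risk(posterior)

voi = bayes_risk(prior) - risk_after     # expected value of the sample
print(f"VOI of sampling this checkpoint = {voi:.2f}")
```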
Immunohistochemistry for predictive biomarkers in non-small cell lung cancer.
Mino-Kenudson, Mari
2017-10-01
In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of the multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turnaround time, and is more cost-efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays, including the performance of multiple antibody clones, the pros and cons of IHC platforms, and various scoring systems, to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken by the WMAP satellite.
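A bare-bones global-best PSO loop looks like the sketch below. Here the CMB likelihood is replaced by the Rosenbrock function as a stand-in for -log L, and the swarm size and coefficients are common textbook defaults, not values from the paper:

```python
import numpy as np

def objective(x):
    """Stand-in for -log(likelihood); Rosenbrock function."""
    return (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

rng = np.random.default_rng(0)
n, dim, iters = 30, 2, 300
w, c1, c2 = 0.72, 1.49, 1.49                 # inertia, cognitive, social

x = rng.uniform(-5, 5, (n, dim))             # particle positions
v = np.zeros((n, dim))                       # velocities
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best-fit parameters:", gbest, "objective:", objective(gbest))
```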
Gu, C; Rao, D C
2001-01-01
Because simplistic designs will lead to prohibitively large sample sizes, the optimization of genetic study designs is critical for successfully mapping genes for complex diseases. Creative designs are necessary for detecting and amplifying the usually weak signals for complex traits. Two important outcomes of a study design, power and resolution, are implicitly tied together by the principle of uncertainty. Overemphasis on either one may lead to suboptimal designs. To achieve optimality for a particular study, therefore, practical measures such as cost-effectiveness must be used to strike a balance between power and resolution. In this light, the myriad factors involved in study design can be checked for their effects on the ultimate outcomes, and the popular existing designs can be sorted into building blocks that may be useful for particular situations. It is hoped that imaginative construction of novel designs using such building blocks will lead to enhanced efficiency in finding genes for complex human traits.
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that removes the need for users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
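The engine under IDE is ordinary differential evolution with the fitness function replaced by a human rating. The DE/rand/1/bin sketch below mocks that rating with a numeric function over three hypothetical color-separation parameters (e.g., target hue, hue tolerance, saturation threshold; the names and numbers are illustrative, not the paper's):

```python
import numpy as np

def mock_user_rating(p):
    """Stands in for the human visual judgment used by IDE."""
    target = np.array([0.55, 0.10, 0.30])    # hypothetical ideal parameters
    return -np.sum((p - target)**2)

rng = np.random.default_rng(3)
pop_size, dim, F, CR, gens = 20, 3, 0.8, 0.9, 100
pop = rng.random((pop_size, dim))
fit = np.array([mock_user_rating(p) for p in pop])

for _ in range(gens):
    for i in range(pop_size):
        idx = [j for j in range(pop_size) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)   # DE/rand/1 mutation
        cross = rng.random(dim) < CR                  # binomial crossover
        cross[rng.integers(dim)] = True               # keep >= 1 mutant gene
        trial = np.where(cross, mutant, pop[i])
        if mock_user_rating(trial) > fit[i]:          # greedy selection
            pop[i], fit[i] = trial, mock_user_rating(trial)

print("best parameters:", pop[fit.argmax()], "rating:", fit.max())
```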
Unified interatomic potential and energy barrier distributions for amorphous oxides.
Trinastic, J P; Hamdan, R; Wu, Y; Zhang, L; Cheng, Hai-Ping
2013-10-21
Amorphous tantala, titania, and hafnia are important oxides for biomedical implants, optics, and gate insulators. Understanding the effects of oxide doping is crucial to optimizing performance in these applications. However, no molecular dynamics potentials have been created to date that combine these and other oxides in a way that would allow computational analyses of doping-dependent structural and mechanical properties. We report a novel set of computationally efficient two-body potentials, modeling van der Waals and covalent interactions, that reproduce the structural and elastic properties of both pure and doped amorphous oxides. In addition, we demonstrate that the potential accurately produces energy barrier distributions for pure and doped samples. The distributions can be directly compared to experiment and used to calculate physical quantities such as internal friction to understand how doping affects material properties. Future analyses using these potentials will be of great value in determining optimal doping concentrations and material combinations for myriad materials science applications.
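As a generic sketch of what a computationally cheap two-body oxide potential looks like, the code below evaluates a Buckingham-plus-Coulomb pair energy, a form commonly used for oxides; the paper's actual functional form and parameters are not reproduced here, and A, rho, C and the charges are placeholders:

```python
import numpy as np

COULOMB_K = 14.399645   # eV*Angstrom per (elementary charge)^2

def pair_energy(r, A=1000.0, rho=0.3, C=30.0, qi=2.2, qj=-1.1):
    """Buckingham short-range term plus a Coulomb term, in eV."""
    return A * np.exp(-r / rho) - C / r**6 + COULOMB_K * qi * qj / r

r = np.linspace(1.5, 6.0, 10)               # separations, Angstrom
for ri, e in zip(r, pair_energy(r)):
    print(f"r = {ri:4.2f} A   E = {e:8.3f} eV")
```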
NASA Astrophysics Data System (ADS)
Stockton, Amanda M.; Chiesl, Thomas N.; Lowenstein, Tim K.; Amashukeli, Xenia; Grunthaner, Frank; Mathies, Richard A.
2009-11-01
The Mars Organic Analyzer (MOA) has enabled the sensitive detection of amino acid and amine biomarkers in laboratory standards and in a variety of field sample tests. However, the MOA is challenged when samples are extremely acidic and saline or contain polyvalent cations. Here, we have optimized the MOA analysis, sample labeling, and sample dilution buffers to handle such challenging samples more robustly. Higher ionic strength buffer systems with pKa values near pH 9 were developed to provide better buffering capacity and salt tolerance. The addition of ethylenediaminetetraacetic acid (EDTA) ameliorates the negative effects of multivalent cations. The optimized protocol utilizes a 75 mM borate buffer (pH 9.5) for Pacific Blue labeling of amines and amino acids. After labeling, 50 mM (final concentration) EDTA is added to samples containing divalent cations to ameliorate their effects. This optimized protocol was used to successfully analyze amino acids in a saturated brine sample from Saline Valley, California, and a subcritical water extract of a highly acidic sample from the Río Tinto, Spain. This work expands the analytical capabilities of the MOA and increases its sensitivity and robustness for samples from extraterrestrial environments that may exhibit pH and salt extremes as well as metal ions.
Optimism and the experience of pain: benefits of seeing the glass as half full
Goodin, Burel R.; Bulls, Hailey W.
2014-01-01
There is a strong body of literature that lends support to the health-promoting effects of an optimistic personality disposition, observed across various physical and psychological dimensions. In accordance with this evidence base, it has been suggested that optimism may positively influence the course and experience of pain. Although the associations among optimism and pain outcomes have only recently begun to be adequately studied, emerging experimental and clinical research links optimism to lower pain sensitivity and better adjustment to chronic pain. This review highlights recent studies that have examined the effects of optimism on the pain experience using samples of individuals with clinically painful conditions as well as healthy samples in laboratory settings. Furthermore, factors such as catastrophizing, hope, acceptance and coping strategies, which are thought to play a role in how optimism exerts its beneficial effects on pain, are also addressed. PMID:23519832