Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate clinicians, as well as basic research scientists, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
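As a concrete illustration of the sample-time optimization discussed in this review, the sketch below selects D-optimal sampling times for a one-compartment oral-absorption PK model by maximizing the determinant of an approximate Fisher information matrix. The model, parameter values, dose, and candidate time grid are illustrative assumptions, not values taken from the review.

```python
"""Minimal sketch of D-optimal sampling-time selection for a one-compartment
oral-absorption PK model. Illustrative only; parameters, dose, and the
candidate time grid are assumptions."""
from itertools import combinations
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    """Plasma concentration for a one-compartment model with first-order absorption."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def sensitivity(t, theta, h=1e-5):
    """Numerical sensitivities dC/dtheta at times t (rows: times, cols: parameters)."""
    base = conc(t, *theta)
    J = np.empty((t.size, len(theta)))
    for j in range(len(theta)):
        pert = list(theta)
        pert[j] += h
        J[:, j] = (conc(t, *pert) - base) / h
    return J

theta = (1.2, 0.15, 20.0)                        # assumed ka (1/h), ke (1/h), V (L)
candidates = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)
n_samples = 4                                    # blood samples per subject

# brute-force search over all 4-point designs, maximizing det(J'J) (D-optimality)
best = max(combinations(range(candidates.size), n_samples),
           key=lambda idx: np.linalg.det(
               sensitivity(candidates[list(idx)], theta).T
               @ sensitivity(candidates[list(idx)], theta)))
print("D-optimal sampling times (h):", candidates[list(best)])
```

In practice, population optimal-design tools such as PopED or PFIM would additionally account for between-subject variability and residual error models; the sketch above covers only the fixed-effects case.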
Urine sampling and collection system optimization and testing
NASA Technical Reports Server (NTRS)
Fogal, G. L.; Geating, J. A.; Koesterer, M. G.
1975-01-01
A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.
Using geostatistics to evaluate cleanup goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcon, M.F.; Hopkins, L.P.
1995-12-01
Geostatistical analysis is a powerful predictive tool typically used to define spatial variability in environmental data. The information from a geostatistical analysis using kriging, a geostatistical tool, can be taken a step further to optimize sampling location and frequency and help quantify sampling uncertainty in both the remedial investigation and remedial design at a hazardous waste site. Geostatistics were used to quantify sampling uncertainty in attainment of a risk-based cleanup goal and determine the optimal sampling frequency necessary to delineate the horizontal extent of impacted soils at a Gulf Coast waste site.
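The following minimal sketch shows the kind of kriging-variance calculation that can be used to rank candidate sampling locations: the point with the highest ordinary-kriging estimation variance is the most informative next sample. The exponential variogram parameters and the synthetic coordinates are assumptions for illustration.

```python
"""Minimal sketch: rank candidate sampling locations by ordinary-kriging variance.
Variogram parameters and coordinates are assumed, not site data."""
import numpy as np

def variogram(h, nugget=0.1, sill=1.0, range_a=50.0):
    """Exponential variogram model; gamma(0) = 0 by convention."""
    g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / range_a))
    return np.where(h > 0, g, 0.0)

def kriging_variance(obs_xy, cand_xy):
    """Ordinary-kriging estimation variance at each candidate point."""
    n = len(obs_xy)
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d_obs)
    A[n, n] = 0.0
    out = []
    for c in cand_xy:
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(obs_xy - c, axis=1))
        lam = np.linalg.solve(A, b)
        out.append(float(lam @ b))     # sigma^2_OK = sum(lambda_i * gamma_i0) + mu
    return np.array(out)

rng = np.random.default_rng(0)
observed = rng.uniform(0, 100, size=(15, 2))      # existing sample locations (assumed, m)
candidates = rng.uniform(0, 100, size=(200, 2))   # potential new sample points
var = kriging_variance(observed, candidates)
print("best next sample location:", candidates[np.argmax(var)])
```

Repeating the ranking after each accepted sample gives a simple sequential design loop.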
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
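A minimal sketch of the continuous-valued relaxation idea described above: sample counts are treated as real numbers, transmission energy is minimized subject to a misclassification constraint, and the result is rounded back to integers. The per-sensor discriminabilities, energy costs, and the Gaussian error proxy are illustrative assumptions, not the paper's model.

```python
"""Minimal sketch of continuous-valued sample allocation across sensors.
Discriminabilities, energy costs, and the error proxy are assumptions."""
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

d = np.array([0.8, 0.5, 0.3])        # per-sample discriminability of each sensor (assumed)
e = np.array([1.0, 0.6, 0.2])        # per-sample transmission energy (assumed units)
p_err_max = 0.05                     # misclassification requirement

def p_error(n):
    """Gaussian-model proxy for the probability of misclassification."""
    return norm.sf(np.sqrt(np.maximum(d @ n, 0.0)))

res = minimize(lambda n: e @ n, x0=np.full(3, 5.0),
               constraints=[{"type": "ineq", "fun": lambda n: p_err_max - p_error(n)}],
               bounds=[(0, 50)] * 3, method="SLSQP")
alloc = np.round(res.x).astype(int)  # integer allocation recovered by rounding
print("approximately optimal allocation per sensor:", alloc, "P(err) ~", p_error(res.x))
```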
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Family Life and Human Development: Sample Units, K-6. Revised.
ERIC Educational Resources Information Center
Prince George's County Public Schools, Upper Marlboro, MD.
Sample unit outlines, designed for kindergarten through grade six, define the content, activities, and assessment tasks appropriate to specific grade levels. The units have been extracted from the Board-approved curriculum, Health Education: The Curricular Approach to Optimal Health. The instructional guidelines for grade one are: describing a…
Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?
Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D
2018-02-01
Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2, and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm2, P = 0.0008). Increasing chamber geometric complexity was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2. Published on behalf of the European Society of Cardiology
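The in-silico part of such a protocol can be mimicked in a few lines: a synthetic LAT field on a 4 × 4 cm sheet is sampled at increasing densities, linearly re-interpolated, and the interpolation error is tracked until it plateaus. The planar focal-activation model and error metric below are illustrative assumptions, not the study's simulation.

```python
"""Minimal sketch of interpolation error vs. sampling density for a synthetic
LAT map. The focal-activation model and constants are assumptions."""
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))   # 4 x 4 cm grid
lat_true = 10.0 * np.hypot(x - 1.0, y - 1.5)                        # focal source, delay in ms

for density in (0.25, 0.5, 1.0, 2.0, 5.0, 10.0):                    # points/cm^2
    n = int(density * 16)                                           # 16 cm^2 sheet
    pts = rng.uniform(0, 4, size=(n, 2))
    vals = 10.0 * np.hypot(pts[:, 0] - 1.0, pts[:, 1] - 1.5)
    lat_interp = griddata(pts, vals, (x, y), method="linear")
    err = np.nanmean(np.abs(lat_interp - lat_true))                  # mean absolute error (ms)
    print(f"{density:5.2f} points/cm^2 -> MAE {err:5.2f} ms")
```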
Zaher, Zaki Morad Mohd; Zambari, Robayaah; Pheng, Chan Siew; Muruga, Vadivale; Ng, Bernard; Appannah, Geeta; Onn, Lim Teck
2009-01-01
Many studies in Asia have demonstrated that Asian populations may require lower cut-off levels for body mass index (BMI) and waist circumference to define obesity and abdominal obesity, respectively, compared to western populations. Optimal cut-off levels for body mass index and waist circumference were determined to assess the relationship between these two anthropometric indices and cardiovascular risk factors. Receiver operating characteristic analysis was used to determine the optimal cut-off levels. The study sample included 1833 subjects (mean age of 44 ± 14 years) from 93 primary care clinics in Malaysia. Of the subjects, 872 were men and 960 were women. The optimal body mass index cut-off values predicting dyslipidaemia, hypertension, diabetes mellitus, or at least one cardiovascular risk factor varied from 23.5 to 25.5 kg/m2 in men and 24.9 to 27.4 kg/m2 in women. As for waist circumference, the optimal cut-off values varied from 83 to 92 cm in men and from 83 to 88 cm in women. The optimal cut-off values from our study showed that a body mass index of 23.5 kg/m2 in men and 24.9 kg/m2 in women and a waist circumference of 83 cm in men and women may be more suitable for defining the criteria for overweight or obesity among adults in Malaysia. Waist circumference may be a better indicator for the prediction of obesity-related cardiovascular risk factors in men and women compared to BMI. Further investigation using a bigger sample size in Asia needs to be done to confirm our findings.
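Cut-offs of this kind are typically read off the ROC curve, for example by maximizing Youden's J statistic. The sketch below uses synthetic BMI data with assumed distributions rather than the study's sample; it only illustrates the basic computation.

```python
"""Minimal sketch of ROC-based cut-off selection using Youden's J.
The data are synthetic; distributions and prevalence are assumptions."""
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
n = 1833
has_risk_factor = rng.random(n) < 0.4                       # any CV risk factor (synthetic)
bmi = np.where(has_risk_factor,
               rng.normal(27.0, 3.5, n),
               rng.normal(24.0, 3.0, n))                    # kg/m^2 (synthetic)

fpr, tpr, thresholds = roc_curve(has_risk_factor, bmi)
youden_j = tpr - fpr
optimal_cutoff = thresholds[np.argmax(youden_j)]
print(f"optimal BMI cut-off: {optimal_cutoff:.1f} kg/m^2 (J = {youden_j.max():.2f})")
```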
Mao, Yanhui; Roberts, Scott; Pagliaro, Stefano; Csikszentmihalyi, Mihaly; Bonaiuto, Marino
2016-01-01
Eudaimonistic identity theory posits a link between activity and identity, where a self-defining activity promotes the strength of a person’s identity. An activity engaged in with high enjoyment, full involvement, and high concentration can facilitate the subjective experience of flow. In the present paper, we hypothesized in accordance with the theory of psychological selection that beyond the promotion of individual development and complexity at the personal level, the relationship between flow and identity at the social level is also positive through participation in self-defining activities. Three different samples (i.e., American, Chinese, and Spanish) filled in measures for flow and social identity, with reference to four previously self-reported activities, characterized by four different combinations of skills (low vs. high) and challenges (low vs. high). Findings indicated that flow was positively associated with social identity across each of the above samples, regardless of participants’ gender and age. The results have implications for increasing social identity via participation in self-defining group activities that could facilitate flow. PMID:26924995
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to the optimal solutions. We also discuss extensions to handle non-linear equality constraints.
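A minimal hit-and-run sketch in the spirit of this approach (not the authors' code): box constraints give convex bounds on the run length, and an interval-shrinkage step, a simplification of the slice-sampling step described above, handles the non-linear near-optimal constraint. The toy objective, tolerance, and bounds are assumptions.

```python
"""Minimal sketch of hit-and-run sampling of a near-optimal region.
Toy non-convex objective, tolerance, and box constraints are assumptions."""
import numpy as np

rng = np.random.default_rng(3)

def f(p):                                   # toy non-convex objective
    return (p[0] ** 2 - 1.0) ** 2 + p[1] ** 2

f_opt, tol = 0.0, 1.1                       # known optimum and near-optimal tolerance
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
near_optimal = lambda p: f(p) <= f_opt + tol and np.all(p >= lo) and np.all(p <= hi)

x = np.array([1.0, 0.0])                    # feasible starting hit point
alternatives = []
for _ in range(2000):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)                  # random run direction
    # convex bounds on the run length from the box constraints
    with np.errstate(divide="ignore"):
        t_all = np.concatenate([(lo - x) / d, (hi - x) / d])
    t_lo, t_hi = t_all[t_all < 0].max(), t_all[t_all > 0].min()
    while True:                             # shrink the interval until a feasible hit is found
        t = rng.uniform(t_lo, t_hi)
        if near_optimal(x + t * d):
            x = x + t * d
            break
        t_lo, t_hi = (t, t_hi) if t < 0 else (t_lo, t)
    alternatives.append(x.copy())

print(f"{len(alternatives)} near-optimal alternatives generated")
```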
GMOtrack: generator of cost-effective GMO testing strategies.
Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that the plot size used to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
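The Monte Carlo step here amounts to repeatedly drawing subsets of trees and asking how the error of the plot-mean sap flux shrinks with sample size; the size beyond which the error curve flattens is taken as optimal. The sketch below uses synthetic tree-level values rather than the measured 58-tree data set.

```python
"""Minimal sketch of Monte Carlo subsampling to find an 'optimal' sample size.
The synthetic Fd values for a 58-tree plot are assumptions."""
import numpy as np

rng = np.random.default_rng(4)
fd_all = rng.lognormal(mean=2.0, sigma=0.4, size=58)   # synthetic tree-level sap flux
js_true = fd_all.mean()                                # 'true' plot mean JS

for n in (5, 10, 15, 20, 30, 58):
    draws = np.array([rng.choice(fd_all, size=n, replace=False).mean()
                      for _ in range(5000)])
    rel_err = np.percentile(np.abs(draws - js_true) / js_true, 95) * 100
    print(f"n = {n:2d}: 95th-percentile relative error of JS = {rel_err:4.1f}%")
```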
NASA Astrophysics Data System (ADS)
Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.
2014-11-01
IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors; it demodulates the data; and it computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. carrier frequencies of the order of 5 MHz). To optimize the power consumption, we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find out the optimal settings.
Microwave resonances in dielectric samples probed in Corbino geometry: simulation and experiment.
Felger, M Maximilian; Dressel, Martin; Scheffler, Marc
2013-11-01
The Corbino approach, where the sample of interest terminates a coaxial cable, is a well-established method for microwave spectroscopy. If the sample is dielectric and if the probe geometry basically forms a conductive cavity, this combination can sustain well-defined microwave resonances that are detrimental for broadband measurements. Here, we present detailed simulations and measurements to investigate the resonance frequencies as a function of sample and probe size and of sample permittivity. This allows a quantitative optimization to increase the frequency of the lowest-lying resonance.
Development of fire shutters based on numerical optimizations
NASA Astrophysics Data System (ADS)
Novak, Ondrej; Kulhavy, Petr; Martinec, Tomas; Petru, Michal; Srb, Pavel
2018-06-01
This article deals with a prototype concept, a real experiment, and a numerical simulation of a layered industrial fire shutter based on some new insulating composite materials. The real fire shutter has been developed and optimized in the laboratory and subsequently tested in a certified test room. A simulation of the whole concept has been carried out as a non-premixed combustion process in the commercial finite volume software PyroSim. The combustion model, based on a stoichiometrically defined gas mixture and the tested layered samples, showed good agreement with the experimental results, i.e. the internal thermal distribution and the heat release rate through the sample.
NASA Astrophysics Data System (ADS)
Beckmann, Felix
2016-10-01
The Helmholtz-Zentrum Geesthacht, Germany, is operating the user experiments for microtomography at the beamlines P05 and P07 using synchrotron radiation produced in the storage ring PETRA III at DESY, Hamburg, Germany. In recent years, the software pipeline and sample-changing hardware for performing high-throughput experiments were developed. In this talk, the current status of the beamlines will be given. Furthermore, optimisation and automatisation of scanning techniques will be presented. These are required to scan samples which are larger than the field of view defined by the X-ray beam. The integration into an optimized reconstruction pipeline will be shown.
Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J
2018-06-06
Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with similar emission peak wavelength.
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
OpenMSI Arrayed Analysis Tools v2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOWEN, BENJAMIN; RUEBEL, OLIVER; DE ROND, TRISTAN
2017-02-07
Mass spectrometry imaging (MSI) enables high-resolution spatial mapping of biomolecules in samples and is a valuable tool for the analysis of tissues from plants and animals, microbial interactions, high-throughput screening, drug metabolism, and a host of other applications. This is accomplished by desorbing molecules from the surface at spatially defined locations, using a laser or ion beam. These ions are analyzed by a mass spectrometer and collected into an MSI 'image', a dataset containing unique mass spectra from the sampled spatial locations. MSI is used in a diverse and increasing number of biological applications. The OpenMSI Arrayed Analysis Tool (OMAAT) is a new software method that addresses the challenges of analyzing spatially defined samples in large MSI datasets by providing support for automatic sample position optimization and ion selection.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
New multirate sampled-data control law structure and synthesis algorithm
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng
1992-01-01
A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at any node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
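A minimal sketch of spatial simulated annealing under the MMSD criterion: sample points are perturbed one at a time and accepted with a Metropolis rule while the temperature cools geometrically. The field geometry, number of points, step size, and cooling schedule are assumptions, and the MSANOS software is not reproduced here.

```python
"""Minimal sketch of spatial simulated annealing minimizing the MMSD criterion.
Field size, point count, and cooling schedule are assumptions."""
import numpy as np

rng = np.random.default_rng(5)
field = np.array([100.0, 100.0])                     # rectangular field (m), assumed
eval_grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                                 np.linspace(0, 100, 50)), axis=-1).reshape(-1, 2)

def mmsd(pts):
    """Mean, over the evaluation grid, of the distance to the nearest sample point."""
    d = np.linalg.norm(eval_grid[:, None, :] - pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

pts = rng.uniform(0, 100, size=(20, 2))              # initial random scheme
cost, temp = mmsd(pts), 5.0
for _ in range(5000):
    cand = pts.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(scale=5.0, size=2), 0, field)
    c = mmsd(cand)
    if c < cost or rng.random() < np.exp((cost - c) / temp):   # Metropolis acceptance
        pts, cost = cand, c
    temp *= 0.999                                              # geometric cooling
print(f"final MMSD: {cost:.2f} m")
```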
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it; Alfonso, L.
2016-06-08
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
Entropic Comparison of Atomic-Resolution Electron Tomography of Crystals and Amorphous Materials.
Collins, S M; Leary, R K; Midgley, P A; Tovey, R; Benning, M; Schönlieb, C-B; Rez, P; Treacy, M M J
2017-10-20
Electron tomography bears promise for widespread determination of the three-dimensional arrangement of atoms in solids. However, it remains unclear whether methods successful for crystals are optimal for amorphous solids. Here, we explore the relative difficulty encountered in atomic-resolution tomography of crystalline and amorphous nanoparticles. We define an informational entropy to reveal the inherent importance of low-entropy zone-axis projections in the reconstruction of crystals. In turn, we propose considerations for optimal sampling for tomography of ordered and disordered materials.
Least-mean-square spatial filter for IR sensors.
Takken, E H; Friedman, D; Milton, A F; Nitzberg, R
1979-12-15
A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
ACS sampling system: design, implementation, and performance evaluation
NASA Astrophysics Data System (ADS)
Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca
2004-09-01
By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower, user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues, we present the performance of the sampling system evaluated on two different platforms: on a VME based system using the VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with a free and open operating system.
Fast imaging of live organisms with sculpted light sheets
NASA Astrophysics Data System (ADS)
Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.
2015-04-01
Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging capability of fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible and simple-to-implement method to optimize the illumination light-sheet to the requirement at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently but accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by a drop in viral load below detection levels) in the case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
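The Latin hypercube sampling step described above can be reproduced with scipy's quasi-Monte Carlo module: unit-cube samples are drawn and scaled to plausible parameter ranges. The parameter names and ranges below are assumptions for illustration, not the values used in the paper.

```python
"""Minimal sketch of Latin hypercube sampling of virtual-patient parameters.
Parameter ranges are assumed, not taken from the paper."""
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=6)
unit_samples = sampler.random(n=1000)                  # 1000 virtual patients in [0, 1)^3

# assumed ranges: infected-hepatocyte death rate delta (1/day),
# viral production rate p (virions/cell/day), virion clearance rate c (1/day)
lower, upper = [0.01, 1.0, 1.0], [0.5, 50.0, 10.0]
params = qmc.scale(unit_samples, lower, upper)

delta = params[:, 0]                                   # e.g. correlate delta with response time
print("median infected-hepatocyte death rate:", np.median(delta))
```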
Defining Tiger Parenting in Chinese Americans.
Kim, Su Yeong
2013-09-01
"Tiger" parenting, as described by Amy Chua [2011], has instigated scholarly discourse on this phenomenon and its possible effects on families. Our eight-year longitudinal study, published in the Asian American Journal of Psychology [Kim, Wang, Orozco-Lapray, Shen, & Murtuza, 2013b], demonstrates that tiger parenting is not a common parenting profile in a sample of 444 Chinese American families. Tiger parenting also does not relate to superior academic performance in children. In fact, the best developmental outcomes were found among children of supportive parents. We examine the complexities around defining tiger parenting by reviewing classical literature on parenting styles and scholarship on Asian American parenting, along with Amy Chua's own description of her parenting method, to develop, define, and categorize variability in parenting in a sample of Chinese American families. We also provide evidence that supportive parenting is important for the optimal development of Chinese American adolescents.
Age differences in optimism bias are mediated by reliance on intuition and religiosity.
Klaczynski, Paul A
2017-11-01
The relationships among age, optimism bias, religiosity, creationist beliefs, and reliance on intuition were examined in a sample of 211 high school students (mean age = 16.54 years). Optimism bias was defined as the difference between predictions for positive and negative life events (e.g., divorce) for the self and age peers. Results indicated that older adolescents displayed less optimism bias, were less religious, believed less in creationism, and relied on intuition less than younger adolescents. Furthermore, the association between age and optimism bias was mediated by religiosity and reliance on intuition but not by creationist beliefs. These findings are considered from a dual-process theoretic perspective that emphasizes age increases in metacognitive abilities and epistemological beliefs and age declines in impulsive judgments. Research directed toward examining alternative explanations of the association among religiosity, age, and optimism bias is recommended. Copyright © 2017 Elsevier Inc. All rights reserved.
Saussele, Susanne; Hehlmann, Rüdiger; Fabarius, Alice; Jeromin, Sabine; Proetel, Ulrike; Rinaldetti, Sebastien; Kohlbrenner, Katharina; Einsele, Hermann; Falge, Christiane; Kanz, Lothar; Neubauer, Andreas; Kneba, Michael; Stegelmann, Frank; Pfreundschuh, Michael; Waller, Cornelius F; Oppliger Leibundgut, Elisabeth; Heim, Dominik; Krause, Stefan W; Hofmann, Wolf-Karsten; Hasford, Joerg; Pfirrmann, Markus; Müller, Martin C; Hochhaus, Andreas; Lauseker, Michael
2018-05-01
Major molecular remission (MMR) is an important therapy goal in chronic myeloid leukemia (CML). So far, MMR is not a failure criterion according to the ELN management recommendations, leading to uncertainty about when to change therapy in CML patients not reaching MMR after 12 months. At monthly landmarks, hazard ratios (HR) were estimated for different molecular remission statuses for patients registered in CML study IV, who were divided into a learning and a validation sample. The minimum HR for MMR was found at 2.5 years with 0.28 (compared to patients without remission). In the validation sample, a significant advantage for progression-free survival (PFS) for patients in MMR could be detected (p-value 0.007). The optimal time to predict PFS in patients with MMR could be validated in an independent sample at 2.5 years. With our model we provide a suggestion for when lack of MMR should be defined as therapy failure and thus a treatment change should be considered. The optimal response time for 1% BCR-ABL at about 12-15 months was confirmed, and for deep molecular remission no specific time point was detected. Nevertheless, it was demonstrated that the earlier the MMR is achieved, the higher is the chance to attain a deep molecular response later.
A predictive control framework for optimal energy extraction of wind farms
NASA Astrophysics Data System (ADS)
Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.
2016-09-01
This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost effective way and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time providing the feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two turbine test case through simulations.
Optimized exploration resource evaluation using the MDT tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zainun, K.; Trice, M.L.
1995-10-01
This paper discusses exploration cost reduction and improved resource delineation benefits that were realized by use of the MDT (Modular Formation Dynamics Tester) tool to evaluate exploration prospects in the Malay Basin of the South China Sea. Frequently, open hole logs do not clearly define fluid content due to low salinity of the connate water and the effect of shale laminae or bioturbation in the silty, shaley sandstones. Therefore, extensive pressure measurements and fluid sampling are required to define fluid type and contacts. This paper briefly describes the features of the MDT tool which were utilized to reduce rig time usage while providing more representative fluid samples and illustrates usage of these features with field examples. The tool has been used on several exploration wells, and a comparison of MDT pressures and samples to results obtained with earlier vintage tools and production tests is also discussed.
Design Optimization of a Centrifugal Fan with Splitter Blades
NASA Astrophysics Data System (ADS)
Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong
2015-05-01
Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an optical-electronic system (OES) for any purpose, determining its most important capability, is its detection threshold; the required functional quality of the device or system is achieved on the basis of this property. Therefore, the initial criteria and optimization methods must be subordinated to the goal of better detectability. The task generally reduces to the problem of optimal selection of the expected (predetermined) signals under the predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S
2016-03-01
The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. © 2016 The Author(s).
Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra
2018-01-15
The objective of the present work was to establish optimal biological and physicochemical parameters in order to simultaneously remove lindane and Cr(VI) at high and/or low pollutant concentrations from the soil by an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. Also, the final aim was to treat real soils contaminated with Cr(VI) and/or lindane from the Northwest of Argentina employing the optimal biological and physicochemical conditions. In this sense, after determining the optimal inoculum concentration (2 g kg-1), an experimental design model with four factors (temperature, moisture, and initial concentrations of Cr(VI) and lindane) was employed for predicting the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature was different for each situation: for low initial concentrations of both pollutants, the optimal temperature was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, the optimal temperature was 30°C; and for high initial concentrations of Cr(VI), the optimal temperature was 35°C. In order to confirm the model adequacy and the validity of the optimization procedure, experiments were performed in six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples, working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya
2016-11-01
This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.
Gaussian mass optimization for kernel PCA parameters
NASA Astrophysics Data System (ADS)
Liu, Yong; Wang, Zulin
2011-10-01
This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization method in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, while the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this observation gives birth to the Gaussian mass. The Gaussian mass defined in this paper has the property of invariance under rotation and translation and is capable of depicting edge, topology and shape information. Simulation results show that the Gaussian mass provides a promising heuristic optimization boost for kernel methods. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several other promising directions in which the Gaussian mass might help are also proposed at the end of the paper.
Joseph, Priya; Calderón, Maritza M.; Gilman, Robert H.; Quispe, Monica L.; Cok, Jaime; Ticona, Eduardo; Chavez, Victor; Jimenez, Juan A.; Chang, Maria C.; Lopez, Martín J.; Evans, Carlton A.
2002-01-01
Toxoplasma gondii is a common life-threatening opportunistic infection. We used experimental murine T. gondii infection to optimize the PCR for diagnostic use, define its sensitivity, and characterize the time course and tissue distribution of experimental toxoplasmosis. PCR conditions were adjusted until the assay reliably detected quantities of DNA derived from less than a single parasite. Forty-two mice were inoculated intraperitoneally with T. gondii tachyzoites and sacrificed from 6 to 72 h later. Examination of tissues with PCR and histology revealed progression of infection from blood to lung, heart, liver, and brain, with PCR consistently detecting parasites earlier than microscopy and with no false-positive results. We then evaluated the diagnostic value of this PCR assay in human patients. We studied cerebrospinal fluid and serum samples from 12 patients with AIDS and confirmed toxoplasmic encephalitis (defined as positive mouse inoculation and/or all of the Centers for Disease Control clinical diagnostic criteria), 12 human immunodeficiency virus-infected patients with suspected cerebral toxoplasmosis who had neither CDC diagnostic criteria nor positive mouse inoculation, 26 human immunodeficiency virus-infected patients with other opportunistic infections and no signs of cerebral toxoplasmosis, and 18 immunocompetent patients with neurocysticercosis. Eleven of the 12 patients with confirmed toxoplasmosis had positive PCR results in either blood or cerebrospinal fluid samples (6 of 9 blood samples and 8 of 12 cerebrospinal fluid samples). All samples from control patients were negative. This study demonstrates the high sensitivity, specificity, and clinical utility of PCR in the diagnosis of toxoplasmic encephalitis in a resource-poor setting. PMID:12454142
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing DNAPL sources in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search strategy, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume estimate is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, the following specific modifications were made. Hydraulic conductivity (K) random fields are generated after fitting the measured K data to a variogram model. The locations of potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and the wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is estimated, which will later be optimized by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support by the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.
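As an illustration of the plume-update step, the following is a generic Kalman measurement update for a gridded concentration field; the covariances, grid size, and values are toy assumptions and not the study's implementation.

```python
import numpy as np

def kalman_update(c_prior, P_prior, sample_idx, z, r):
    """Update a gridded plume-concentration estimate with one point sample.
    c_prior : prior mean concentrations (n,)
    P_prior : prior covariance of the concentration field (n, n)
    sample_idx : index of the cell where the sample was taken
    z, r   : measured concentration and measurement-error variance
    """
    n = c_prior.size
    H = np.zeros((1, n))
    H[0, sample_idx] = 1.0                      # observe a single cell
    S = H @ P_prior @ H.T + r                   # innovation variance (1x1)
    K = P_prior @ H.T / S                       # Kalman gain (n x 1)
    c_post = c_prior + (K * (z - H @ c_prior)).ravel()
    P_post = (np.eye(n) - K @ H) @ P_prior
    return c_post, P_post

# Toy 3-cell field with spatially correlated prior uncertainty (illustrative values).
c0 = np.array([5.0, 3.0, 1.0])
P0 = np.array([[4.0, 2.0, 1.0],
               [2.0, 4.0, 2.0],
               [1.0, 2.0, 4.0]])
c1, P1 = kalman_update(c0, P0, sample_idx=1, z=6.0, r=0.25)
print(c1, np.diag(P1))  # posterior mean and reduced variances
```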
Insurance principles and the design of prospective payment systems.
Ellis, R P; McGuire, T G
1988-09-01
This paper applies insurance principles to the issues of optimal outlier payments and designation of peer groups in Medicare's case-based prospective payment system for hospital care. Arrow's principle that full insurance after a deductible is optimal implies that, to minimize hospital risk, outlier payments should be based on hospital average loss per case rather than, as at present, on individual case-level losses. The principle of experience rating implies defining more homogeneous peer groups for the purpose of calculating average cost. The empirical significance of these results is examined using a sample of 470,568 discharges from 469 hospitals.
Detecting recurrence domains of dynamical systems by symbolic dynamics.
beim Graben, Peter; Hutt, Axel
2013-04-12
We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
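A minimal sketch of the underlying recurrence matrix follows; the ball radius eps is fixed by hand here, whereas the paper selects it automatically via the maximum-entropy principle.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot R[i, j] = 1 if ||x_i - x_j|| < eps.
    x : array of shape (T,) or (T, d) of (embedded) phase-space points."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:               # a 1-D series was passed
        x = x.T
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

# Toy signal alternating between two regimes; recurrence domains appear as
# checkerboard-like blocks of ones in the matrix.
t = np.arange(400)
x = np.where((t // 100) % 2 == 0, np.sin(0.2 * t), 2.0 + np.sin(0.2 * t))
R = recurrence_matrix(x, eps=0.5)
print(R.shape, R.mean())              # recurrence rate
```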
Design of partially supervised classifiers for multispectral image data
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David
1993-01-01
A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible, requirement for representative prior statistical characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement for prior statistical information without sacrificing significant classification capability. The first is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, partially supervised classification is treated as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
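A toy version of the first, significance-testing idea is sketched below, assuming a Gaussian model of the single known class and a user-set significance level; the paper instead estimates the optimal acceptance probability directly from the data.

```python
import numpy as np
from scipy.stats import chi2

class OneClassGaussian:
    """Accept a sample as the known class if its Mahalanobis distance falls
    inside a chi-square acceptance region (alpha is a placeholder for the
    data-driven acceptance probability used in the paper)."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha

    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        self.thresh = chi2.ppf(1.0 - self.alpha, df=X.shape[1])
        return self

    def predict(self, X):
        d = X - self.mu
        m2 = np.einsum("ij,jk,ik->i", d, self.cov_inv, d)  # squared Mahalanobis
        return (m2 <= self.thresh).astype(int)             # 1 = known class

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(200, 4))                # known class only
test = np.vstack([rng.normal(0.0, 1.0, (50, 4)),           # same class
                  rng.normal(4.0, 1.0, (50, 4))])          # "other" classes
clf = OneClassGaussian(alpha=0.05).fit(train)
print(clf.predict(test).mean())                            # about half accepted
```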
Optimized, Budget-constrained Monitoring Well Placement Using DREAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yonkofski, Catherine M. R.; Davidson, Casie L.; Rodriguez, Luke R.
Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with their own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation. Applying this method, DREAM identified the most cost-effective ensemble with 14 monitoring locations. Here, while this preliminary study applied relatively simplistic cost and technology assumptions, it provides an exciting proof-of-concept for the application of DREAM to questions of cost-optimized MVA system design that are informed not only by site-specific costs and technology options, but also by reservoir simulation results developed during site characterization and operation.
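The core objective, minimizing the time to first detection averaged across leak scenarios under a well budget, can be sketched with a simple greedy placement over a precomputed detection-time matrix; the random data below are illustrative, and DREAM's actual search also iterates over sensor types and uses a different optimization strategy.

```python
import numpy as np

def greedy_well_placement(detect_time, budget):
    """Pick monitoring locations that minimize the mean time to first leak
    detection across scenarios, one well at a time, until the budget is spent.
    detect_time : array (n_scenarios, n_candidate_locations); entry [s, w] is
                  the time at which a sensor at location w would detect leak s
                  (np.inf if never). Stand-in for reservoir-simulation output."""
    n_s, n_w = detect_time.shape
    chosen, best_so_far = [], np.full(n_s, np.inf)
    for _ in range(budget):
        scores = [np.minimum(best_so_far, detect_time[:, w]).mean()
                  if w not in chosen else np.inf for w in range(n_w)]
        w_star = int(np.argmin(scores))
        chosen.append(w_star)
        best_so_far = np.minimum(best_so_far, detect_time[:, w_star])
    return chosen, best_so_far.mean()

rng = np.random.default_rng(2)
times = rng.uniform(1.0, 50.0, size=(100, 30))   # 100 scenarios, 30 candidate wells
wells, mean_ttd = greedy_well_placement(times, budget=5)
print(wells, round(mean_ttd, 2))
```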
Souza, C A; Oliveira, T C; Crovella, S; Santos, S M; Rabêlo, K C N; Soriano, E P; Carvalho, M V D; Junior, A F Caldas; Porto, G G; Campello, R I C; Antunes, A A; Queiroz, R A; Souza, S M
2017-04-28
The use of Y chromosome haplotypes, important for the detection of sexual crimes in forensics, has gained prominence with the use of databases that incorporate these genetic profiles in their systems. Here, we optimized and validated an amplification protocol for Y chromosome profile retrieval in reference samples using less material than recommended in commercial kits. FTA® cards (Flinders Technology Associates) were used as a support for the oral cells of male individuals, which were amplified directly using the SwabSolution reagent (Promega). First, we optimized and validated the process to define the volume and cycling conditions. Three reference samples, with nineteen 1.2 mm-diameter punched discs per sample, were used. Amplification of one or two discs per sample with the PowerPlex® Y23 kit (Promega) was performed using 25, 26, and 27 thermal cycles. The control per sample used 20%, 32%, and 100% reagent volumes, one disc, and 26 cycles. Thereafter, all samples (N = 270) were amplified using 27 cycles, one disc, and 32% reagents (optimized conditions). Data were analyzed using a study of equilibrium values between fluorophore colors. In the samples analyzed with 20% volume, an imbalance was observed in peak heights, both within and between dyes. In samples amplified with 32% reagents, the values obtained for the intra-color and inter-color balance calculations used to verify the quality of the analyzed peaks were similar to those of samples amplified with 100% of the recommended volume. The quality of the profiles obtained with 32% reagents was suitable for insertion into databases.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, and user-defined parameters such as frame rate and frame number, as well as the image-processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown, and the results for the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ampomah, William; Balch, Robert; Will, Robert
This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% of CO2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
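The uncertainty-quantification step combines Latin Hypercube sampling with Monte Carlo simulation; a minimal sketch of generating such a design for a few uncertain variables follows, with purely hypothetical variable ranges.

```python
import numpy as np
from scipy.stats import qmc

# Uncertain variables (hypothetical ranges for illustration):
# bottom-hole injection pressure [psi], WAG cycle length [months], Kv/Kh [-]
l_bounds = [3000.0, 1.0, 0.01]
u_bounds = [5000.0, 12.0, 0.50]

sampler = qmc.LatinHypercube(d=3, seed=7)
unit_samples = sampler.random(n=50)                 # 50 points in [0, 1)^3
designs = qmc.scale(unit_samples, l_bounds, u_bounds)

# Each row is one reservoir-simulation case; the objectives (oil recovery,
# CO2 stored) would be evaluated by the flow model and fed to the optimizer.
print(designs[:3])
```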
Analytical Method for Determining Tetrazene in Water.
1987-12-01
...decanesulfonic acid sodium salt. The mobile phase pH was adjusted to 3 with glacial acetic acid. The modified mobile phase was optimal for the separation of... modified with sodium tartrate, gave a well-defined reduction wave at the dropping mercury electrode. The height of the reduction wave was proportional to... antimony trisulphide, nitrocellulose, PETN, powdered aluminum and calcium silicide. The primer samples were sequentially extracted, first with...
NASA Astrophysics Data System (ADS)
Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.
2016-12-01
Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to apply a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
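Since the framework is written in Python, a minimal sketch of the kind of two-stage workflow mentioned, Monte Carlo sampling within parameter ranges followed by downhill simplex (Nelder-Mead) refinement, is shown below; the objective function is a stand-in, not an actual PRMS run, and the parameter ranges are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Stand-in for a PRMS run scored against observations (e.g., 1 - NSE of
    simulated streamflow); replace with a call to the actual model wrapper."""
    return float(np.sum((params - np.array([0.3, 2.0, 0.7])) ** 2))

# Hypothetical ranges for three sensitive parameters.
lo = np.array([0.0, 0.5, 0.0])
hi = np.array([1.0, 5.0, 1.0])

# Step 1: Monte Carlo random sampling within the ranges.
rng = np.random.default_rng(0)
candidates = lo + rng.random((500, 3)) * (hi - lo)
scores = np.array([objective(p) for p in candidates])
x0 = candidates[scores.argmin()]

# Step 2: local refinement with the downhill simplex (Nelder-Mead) method.
result = minimize(objective, x0, method="Nelder-Mead")
print(x0, result.x, result.fun)
```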
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is a supervised classification technique concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this problem, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real-data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR performs better than the classical LDR and is comparable with the existing robust LDR.
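The discriminant rule itself is unchanged whether classical or robust estimators are plugged in; a minimal sketch of the linear rule with plug-in location and pooled-scatter estimates follows (classical estimates here, which MVV estimates would simply replace).

```python
import numpy as np

def linear_discriminant_rule(x, loc1, loc2, scatter, prior1=0.5, prior2=0.5):
    """Allocate x to population 1 if the linear score exceeds the threshold.
    loc1, loc2 : location estimates (classical means or robust locations)
    scatter    : pooled scatter estimate (classical or robust)."""
    w = np.linalg.solve(scatter, loc1 - loc2)
    threshold = 0.5 * w @ (loc1 + loc2) - np.log(prior1 / prior2)
    return 1 if w @ x > threshold else 2

rng = np.random.default_rng(3)
X1 = rng.multivariate_normal([0, 0], np.eye(2), 100)
X2 = rng.multivariate_normal([2, 2], np.eye(2), 100)
pooled = ((len(X1) - 1) * np.cov(X1, rowvar=False)
          + (len(X2) - 1) * np.cov(X2, rowvar=False)) / (len(X1) + len(X2) - 2)
labels = [linear_discriminant_rule(x, X1.mean(0), X2.mean(0), pooled)
          for x in np.vstack([X1, X2])]
err = np.mean(np.array(labels) != np.repeat([1, 2], 100))
print(f"misclassification rate: {err:.3f}")
```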
TRO-2D - A code for rational transonic aerodynamic optimization
NASA Technical Reports Server (NTRS)
Davis, W. H., Jr.
1985-01-01
Features and sample applications of the transonic rational optimization (TRO-2D) code are outlined. TRO-2D includes the airfoil analysis code FLO-36, the CONMIN optimization code and a rational approach to defining aero-function shapes for geometry modification. The program is part of an effort to develop an aerodynamically smart optimizer that will simplify and shorten the design process. The user has a selection of objectives, including drag minimization with associated minimum-lift, moment, and pressure-distribution requirements, a choice among 14 resident aero-function shapes, and options on aerodynamic and geometric constraints. Design variables such as the angle of attack, leading-edge radius and camber, shock strength and movement, supersonic pressure plateau control, etc., are discussed. Results of calculations for a transonic airfoil with reduced leading-edge camber and for a natural-laminar-flow airfoil are provided, showing that only four design variables need be specified to obtain satisfactory results.
Optimal experience among teachers: new insights into the work paradox.
Bassi, Marta; Delle Fave, Antonella
2012-01-01
Several studies have highlighted that individuals perceive work as an opportunity for flow or optimal experience, but not as desirable and pleasant. This finding was defined as the work paradox. The present study addressed this issue among teachers from the perspective of self-determination theory, investigating work-related intrinsic and extrinsic motivation, as well as autonomous and controlled behavior regulation. In Study 1, 14 teachers were longitudinally monitored with the Experience Sampling Method for one work week. In Study 2, 184 teachers were administered the Flow Questionnaire and the Work Preference Inventory, investigating, respectively, opportunities for optimal experience and motivational orientations at work. Results showed that work-related optimal experiences were associated with both autonomous regulation and controlled regulation. Moreover, teachers reported both intrinsic and extrinsic motivation at work, with a prevailing intrinsic orientation. Findings provide novel insights on the work paradox, and suggestions for the promotion of teachers' well-being.
NASA Technical Reports Server (NTRS)
Sensmeier, Mark D.; Samareh, Jamshid A.
2005-01-01
An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.
[Transfusion supply optimization in multiple-discipline surgical hospital].
Solov'eva, I N; Trekova, N A; Krapivkin, I A
2016-01-01
To define the optimal approach to supplying a hospital with blood components and to reduce donor blood expenditure through the application of blood-preserving technologies. Donor blood component expenditure, the volume of hemotransfusions, and their proportions for the period 2012-2014 were analyzed. The numbers of recipients of packed red cells, fresh-frozen plasma, and packed platelets decreased by 18.5%, 25%, and 80%, respectively. The need for donor plasma decreased by 35%. Autologous plasma accounted for 76% of the overall plasma volume used in cardiac surgery. Preoperative plasma sampling was introduced in patients with aortic aneurysm. The number of cardiac interventions performed without donor blood increased by 7-31%, depending on their complexity.
Nieminen, Teemu; Lähteenmäki, Pasi; Tan, Zhenbing; Cox, Daniel; Hakonen, Pertti J
2016-11-01
We present a microwave correlation measurement system based on two low-cost USB-connected software-defined radio dongles modified to operate as coherent receivers by using a common local oscillator. Existing software is used to obtain I/Q samples from both dongles simultaneously at a software-tunable frequency. To achieve low noise, we introduce a simple solution for cryogenic amplification at 600-900 MHz based on a single discrete HEMT with 21 dB gain and a 7 K noise temperature. In addition, we discuss quantization effects in a digital correlation measurement and the determination of the optimal integration time by applying Allan deviation analysis.
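A minimal sketch of the two post-processing steps mentioned, forming the complex cross-correlation of the two coherent I/Q streams and computing the Allan deviation of the correlation estimate to guide the choice of integration time, is given below on simulated data; the noise levels, sample rate, and averaging times are placeholders.

```python
import numpy as np

def cross_correlation(iq_a, iq_b):
    """Zero-lag complex cross-correlation of two coherent I/Q streams."""
    return np.mean(iq_a * np.conj(iq_b))

def allan_deviation(x, taus, fs):
    """Allan deviation of a real-valued estimate stream x sampled at rate fs,
    using consecutive (non-overlapping) averages of length tau seconds."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# Simulated correlated noise seen by both receivers plus independent noise.
rng = np.random.default_rng(0)
n = 200_000
common = (rng.normal(size=n) + 1j * rng.normal(size=n)) * 0.1
a = common + rng.normal(size=n) + 1j * rng.normal(size=n)
b = common + rng.normal(size=n) + 1j * rng.normal(size=n)

corr_stream = np.real(a * np.conj(b))           # per-sample correlation estimate
taus = [1e-3, 1e-2, 1e-1]
print(cross_correlation(a, b), allan_deviation(corr_stream, taus, fs=1e6))
```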
Growth, Characterization and Applications of Beta-Barium Borate and Related Crystals
1993-10-31
Crystal symmetry determines the form of the second-order polarization tensor. The second-order polarizability tensor is defined by the piezoelectric... cold finger. A temperature oscillation technique was used to limit the number of nuclei formed. These experiments typically yielded thin crystal... statistically sampled to determine the optimal seeding orientation. It was reasoned that the large crystal plates were formed from nuclei which had a favorable...
Castro Grijalba, Alexander; Martinis, Estefanía M; Wuilloud, Rodolfo G
2017-03-15
A highly sensitive vortex-assisted liquid-liquid microextraction (VA-LLME) method was developed for inorganic Se [Se(IV) and Se(VI)] speciation analysis in Allium and Brassica vegetables. The phosphonium ionic liquid (IL) trihexyl(tetradecyl)phosphonium decanoate was applied for the extraction of the Se(IV)-ammonium pyrrolidine dithiocarbamate (APDC) complex, followed by Se determination with electrothermal atomic absorption spectrometry. A complete optimization of the graphite furnace temperature program was developed for accurate determination of Se in the IL-enriched extracts, and multivariate statistical optimization was performed to define the conditions for the highest extraction efficiency. Significant factors of the IL-VA-LLME method were sample volume, extraction pH, extraction time and APDC concentration. High extraction efficiency (90%), a 100-fold preconcentration factor and a detection limit of 5.0 ng/L were achieved. The high sensitivity obtained with preconcentration and the non-chromatographic separation of inorganic Se species in complex matrix samples such as garlic, onion, leek, broccoli and cauliflower are the main advantages of IL-VA-LLME. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sirichai, Somsak; Khanatharana, Proespichaya
2008-09-15
Capillary electrophoresis (CE) with UV detection for the simultaneous and rapid analysis of clenbuterol, salbutamol, procaterol, and fenoterol is described and validated. Optimized conditions were found to be a 10 mmol l(-1) borate buffer (pH 10.0), a separation voltage of 19 kV, and a separation temperature of 32 degrees C. Detection was set at 205 nm. Under the optimized conditions, analyses of the four analytes in pharmaceutical and human urine samples were carried out in approximately 1 min. Interference from the sample matrix was not observed. The LODs (limits of detection), defined at an S/N of 3:1, were found to be between 0.5 and 2.0 mg l(-1) for the analytes. The linearity of the detector response was within the range from 2.0 to 30 mg l(-1), with correlation coefficients >0.996.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaikuad, Apirat, E-mail: apirat.chaikuad@sgc.ox.ac.uk; Knapp, Stefan; Johann Wolfgang Goethe-University, Building N240 Room 3.03, Max-von-Laue-Strasse 9, 60438 Frankfurt am Main
An alternative strategy for PEG sampling is suggested through the use of four newly defined PEG smears to enhance chemical space in reduced screens with a benefit towards protein crystallization. The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant.
Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the most suitable design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of predefined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design-of-experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making key decisions between different performance metrics and cost metrics early in the design lifecycle.
Aspects of the "Design Space" in high pressure liquid chromatography method development.
Molnár, I; Rieger, H-J; Monks, K E
2010-05-07
The present paper describes a multifactorial optimization of four critical HPLC method parameters, i.e. gradient time (t(G)), temperature (T), pH and ternary composition (B(1):B(2)), based on 36 experiments. The effect of these experimental variables on critical resolution and selectivity was studied by systematically varying all four factors simultaneously. The basic element is a gradient time-temperature (t(G)-T) plane, which is repeated at three different pH values of eluent A and at three different ternary compositions of eluent B between methanol and acetonitrile. The so-defined volume enables the investigation of the critical resolution for a part of the Design Space of a given sample. Further improvement of the analysis time, with conservation of the previously optimized selectivity, was possible by reducing the gradient time and increasing the flow rate. Multidimensional robust regions were successfully defined and graphically depicted. Copyright (c) 2010 Elsevier B.V. All rights reserved.
A modified and cost-effective method for hair cortisol analysis.
Xiang, Lianbin; Sunesara, Imran; Rehm, Kristina E; Marshall, Gailen D
2016-01-01
Hair cortisol may hold potential as a biomarker for assessment of chronic psychological stress. We report a modified and cost-effective method to prepare hair samples for cortisol assay. Hair samples were ground using an inexpensive ball grinder - ULTRA-TURRAX tube drive. Cortisol was extracted from the powder under various defined conditions. The data showed that the optimal conditions for this method include cortisol extraction at room temperature and evaporation using a stream of room air. These findings should allow more widespread research using economical technology to validate the utility of hair cortisol as a biomarker for assessing chronic stress status.
Ayres, Cynthia G; Mahat, Ganga
2012-07-01
This study developed and tested a theory to better understand positive health practices (PHP) among Asian Americans aged 18 to 21 years. It tested theoretical relationships postulated between PHP and (a) social support (SS), (b) optimism, and (c) acculturation, and between SS and optimism and acculturation. Optimism and acculturation were also tested as possible mediators in the relationship between SS and PHP. A correlational study design was used. A convenience sample of 163 Asian college students in an urban setting completed four questionnaires assessing SS, PHP, optimism, and acculturation and one demographic questionnaire. There were statistically significant positive relationships between SS and optimism with PHP, between acculturation and PHP, and between optimism and SS. Optimism mediated the relationship between SS and PHP, whereas acculturation did not. Findings extend knowledge regarding these relationships to a defined population of Asian Americans aged 18 to 21 years. Findings contribute to a more comprehensive knowledge base regarding health practices among Asian Americans. The theoretical and empirical findings of this study provide the direction for future research as well. Further studies need to be conducted to identify and test other mediators in order to better understand the relationship between these two variables.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
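A minimal sketch of how such a region might be defined from logged operating data follows; the speed/load variables, bin count, and 80% coverage rule are hypothetical illustrations, not details specified by the patent.

```python
import numpy as np

def define_region(speed, load, coverage=0.8, bins=20):
    """Build a 2-D histogram of logged operating points (engine speed, load)
    and keep the most frequently visited bins until `coverage` of all samples
    is covered. Returns histogram edges and a boolean mask of in-region bins."""
    hist, s_edges, l_edges = np.histogram2d(speed, load, bins=bins)
    order = np.argsort(hist, axis=None)[::-1]           # bins, most common first
    cum = np.cumsum(hist.ravel()[order])
    keep = order[: np.searchsorted(cum, coverage * hist.sum()) + 1]
    mask = np.zeros(hist.size, dtype=bool)
    mask[keep] = True
    return s_edges, l_edges, mask.reshape(hist.shape)

def in_region(s, l, s_edges, l_edges, mask):
    """True if the current operating point falls in the defined region,
    i.e. when the optimization routine would be allowed to run."""
    i = np.clip(np.searchsorted(s_edges, s) - 1, 0, mask.shape[0] - 1)
    j = np.clip(np.searchsorted(l_edges, l) - 1, 0, mask.shape[1] - 1)
    return bool(mask[i, j])

rng = np.random.default_rng(0)
speed = rng.normal(2000, 300, 10_000)      # hypothetical logged rpm
load = rng.normal(0.5, 0.1, 10_000)        # hypothetical logged load fraction
s_e, l_e, region = define_region(speed, load)
print(in_region(2050, 0.52, s_e, l_e, region), in_region(4000, 0.95, s_e, l_e, region))
```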
Monzani, Dario; Steca, Patrizia; Greco, Andrea
2014-02-01
Dispositional optimism is an individual difference promoting psychosocial adjustment and well-being during adolescence. Dispositional optimism was originally defined as a one-dimensional construct; however, empirical evidence suggests two correlated factors in the Life Orientation Test - Revised (LOT-R). The main aim of the study was to evaluate the dimensionality of the LOT-R. This study is the first attempt to identify the best factor structure, comparing congeneric, two correlated-factor, and two orthogonal-factor models in a sample of adolescents. Concurrent validity was also assessed. The results demonstrated the superior fit of the two orthogonal-factor model thus reconciling the one-dimensional definition of dispositional optimism with the bi-dimensionality of the LOT-R. Moreover, the results of correlational analyses proved the concurrent validity of this self-report measure: optimism is moderately related to indices of psychosocial adjustment and well-being. Thus, the LOT-R is a useful, valid, and reliable self-report measure to properly assess optimism in adolescence. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Latent classes of resilience and psychological response among only-child loss parents in China.
Wang, An-Ni; Zhang, Wen; Zhang, Jing-Ping; Huang, Fei-Fei; Ye, Man; Yao, Shu-Yu; Luo, Yuan-Hui; Li, Zhi-Hua; Zhang, Jie; Su, Pan
2017-10-01
Only-child loss parents in China have recently gained extensive attention as a newly defined social group. Resilience could be a probable way out of their psychological dilemma. Using a sample of 185 only-child loss people, this study employed latent class analysis (a) to explore whether different classes of resilience could be identified, (b) to determine the socio-demographic characteristics of each class, and (c) to compare the depression and subjective well-being of each class. The results supported a three-class solution, defined as a 'high tenacity-strength but moderate optimism class', a 'moderate resilience but low self-efficacy class' and a 'low tenacity but moderate adaption-dependence class'. Parents with low income, with medical insurance of a low reimbursement type, and without endowment insurance accounted for larger proportions of the latter two classes. The latter two classes also had significantly higher depression scores and lower subjective well-being scores than the high tenacity-strength but moderate optimism class. Future work should attend to socio-economically vulnerable bereaved parents, and a flexible economic assistance policy is needed. To develop targeted resilience interventions, the emphasis for the high tenacity-strength but moderate optimism class should be optimism, for the moderate resilience but low self-efficacy class self-efficacy, and for the low tenacity but moderate adaption-dependence class tenacity. Copyright © 2016 John Wiley & Sons, Ltd.
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G
2012-01-01
In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), and DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without (type D) purification of DNA by boiling. The optimal efficiency of 2.016 was obtained with Roche LightCycler® 4.1 software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because defined populations were used for the construction of standard curves. Overall, the Fieller confidence interval algorithm may be improved for replicates having a low standard deviation in cycle threshold values, such as those found for type B and C standards. The stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A and C standards than for type B standards. However, FACS-generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency bound. To reduce upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within the upper bound. Some traditional sampling-based approaches can control upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values, and a multiple uniform sampling algorithm provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach.
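The idea of weighting rare (more informative) sensed values more heavily can be sketched as follows; this is a toy inverse-frequency weighting with a hard upload budget, not the ASIC algorithms themselves, and the heart-rate data are simulated.

```python
import numpy as np

def sampling_probabilities(values, bins=10, floor=1e-3):
    """Assign each sensed sample a keep-probability inversely proportional to
    how common its value is, so kept samples cover the value range roughly
    uniformly."""
    hist, edges = np.histogram(values, bins=bins)
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    p = 1.0 / np.maximum(hist[idx], 1)          # rare values -> higher probability
    return np.maximum(p / p.max(), floor)

def downsample(values, upload_budget, rng):
    """Keep at most `upload_budget` samples, biased toward informative ones."""
    p = sampling_probabilities(values)
    keep = rng.random(len(values)) < p
    kept = np.flatnonzero(keep)
    if len(kept) > upload_budget:               # enforce the upper upload frequency
        kept = rng.choice(kept, size=upload_budget, replace=False)
    return values[np.sort(kept)]

rng = np.random.default_rng(0)
heart_rate = rng.normal(75, 5, 5_000)           # mostly routine readings
heart_rate[::200] += rng.normal(40, 5, 25)      # occasional informative spikes
print(len(downsample(heart_rate, upload_budget=300, rng=rng)))
```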
Media and growth conditions for induction of secondary metabolite production.
Frisvad, Jens C
2012-01-01
Growth media and incubation conditions have a very strong influence on secondary metabolite production. There is no consensus on which media are optimal for metabolite production, but a series of useful and effective media and incubation conditions are listed here. Chemically well-defined media are suited for biochemical studies, but in order to get chemical diversity expressed in filamentous fungi, sources rich in amino acids, vitamins, and trace metals, such as yeast extract and oatmeal, have to be added. A battery of solid agar media is recommended for the exploration of chemical diversity, as agar plug samples are easily analyzed to obtain an optimal representation of the qualitative secondary metabolome. Standard incubation for a week at 25°C in darkness is recommended, but optimal conditions have to be modified depending on the ecology and physiology of different filamentous fungi.
[Dispositional optimism, pessimism and realism in technological potential entrepreneurs].
López Puga, Jorge; García García, Juan
2011-11-01
Optimism has been classically considered a key trait in entrepreneurs' personality but it has been studied from a psychological point of view only in recent years. The main aim of this research is to study the relationship between dispositional optimism, pessimism and realism as a function of the tendency to create technology-based businesses. A sample of undergraduate students (n= 205) filled in an electronic questionnaire containing the Life Orientation Test-Revised after they were classified as potential technological entrepreneurs, potential general entrepreneurs and non-potential entrepreneurs. Our results show that technology-based entrepreneurs are more optimistic than non-potential entrepreneurs, whereas there were no statistical differences in pessimism and realism. The results are interpreted theoretically to define the potential entrepreneur and, from an applied perspective, to design training programmes to support future technological entrepreneurs.
Stochastic control and the second law of thermodynamics
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Willems, J. C.
1979-01-01
The second law of thermodynamics is studied from the point of view of stochastic control theory. We find that the feedback control laws which are of interest are those which depend only on average values, and not on sample path behavior. We are led to a criterion which, when satisfied, permits one to assign a temperature to a stochastic system in such a way as to have Carnot cycles be the optimal trajectories of optimal control problems. Entropy is also defined, and we are able to prove an equipartition of energy theorem using this definition of temperature. Our formulation allows one to treat irreversibility in a quite natural and completely precise way.
Stabilization for sampled-data neural-network-based control systems.
Zhu, Xun-Lin; Wang, Youyi
2011-02-01
This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.
Sauter, Jennifer L; Grogg, Karen L; Vrana, Julie A; Law, Mark E; Halvorson, Jennifer L; Henry, Michael R
2016-02-01
The objective of the current study was to establish a process for validating immunohistochemistry (IHC) protocols for use on the Cellient cell block (CCB) system. Thirty antibodies were initially tested on CCBs using IHC protocols previously validated on formalin-fixed, paraffin-embedded tissue (FFPE). Cytology samples were split to generate thrombin cell blocks (TCB) and CCBs. IHC was performed in parallel. Antibody immunoreactivity was scored, and concordance or discordance in immunoreactivity between the TCBs and CCBs for each sample was determined. Criteria for validation of an antibody were defined as concordant staining in expected positive and negative cells, in at least 5 samples each, and concordance in at least 90% of the samples total. Antibodies that failed initial validation were retested after alterations in IHC conditions. Thirteen of the 30 antibodies (43%) did not meet initial validation criteria. Of those, 8 antibodies (calretinin, clusters of differentiation [CD] 3, CD20, CDX2, cytokeratin 20, estrogen receptor, MOC-31, and p16) were optimized for CCBs and subsequently validated. Despite several alterations in conditions, 3 antibodies (Ber-EP4, D2-40, and paired box gene 8 [PAX8]) were not successfully validated. Nearly one-half of the antibodies tested in the current study failed initial validation using IHC conditions that were established in the study laboratory for FFPE material. Although some antibodies subsequently met validation criteria after optimization of conditions, a few continued to demonstrate inadequate immunoreactivity. These findings emphasize the importance of validating IHC protocols for methanol-fixed tissue before clinical use and suggest that optimization for alcohol fixation may be needed to obtain adequate immunoreactivity on CCBs. © 2016 American Cancer Society.
Optimal Scaling of Digital Transcriptomes
Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy
2013-01-01
Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) low Spearman correlation between normalized expression profiles of gene pairs. We also define four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compare the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126
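The first metric, counting "uniform" genes whose normalized expression has a low coefficient of variation, can be sketched directly; the simulated counts and the CV threshold below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def uniform_gene_count(counts, scale_factors, cv_threshold=0.25):
    """Count "uniform" genes: genes whose normalized expression has a
    coefficient of variation below cv_threshold across samples.
    counts        : raw transcript counts, shape (n_genes, n_samples)
    scale_factors : one positive scaling factor per sample"""
    norm = counts / np.asarray(scale_factors)[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = norm.std(axis=1, ddof=1) / norm.mean(axis=1)
    return int(np.sum(cv < cv_threshold))       # NaN entries compare as False

rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 2.0, size=6)                      # per-sample sequencing depth
base = rng.lognormal(3.0, 1.0, size=(2000, 1))
counts = rng.poisson(base * depth[None, :])                # depth-dependent counts

total_scaling = counts.sum(axis=0) / counts.sum(axis=0).mean()   # "fixed total" method
depth_scaling = depth / depth.mean()                             # idealized normalization
print(uniform_gene_count(counts, total_scaling),
      uniform_gene_count(counts, depth_scaling))
```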
Saha, Krishanu; Mei, Ying; Reisterer, Colin M; Pyzocha, Neena Kenton; Yang, Jing; Muffat, Julien; Davies, Martyn C; Alexander, Morgan R; Langer, Robert; Anderson, Daniel G; Jaenisch, Rudolf
2011-11-15
The current gold standard for the culture of human pluripotent stem cells requires the use of a feeder layer of cells. Here, we develop a spatially defined culture system based on UV/ozone radiation modification of typical cell culture plastics to define a favorable surface environment for human pluripotent stem cell culture. Chemical and geometrical optimization of the surfaces enables control of early cell aggregation from fully dissociated cells, as predicted from a numerical model of cell migration, and results in significant increases in cell growth of undifferentiated cells. These chemically defined xeno-free substrates generate more than three times the number of cells than feeder-containing substrates per surface area. Further, reprogramming and typical gene-targeting protocols can be readily performed on these engineered surfaces. These substrates provide an attractive cell culture platform for the production of clinically relevant factor-free reprogrammed cells from patient tissue samples and facilitate the definition of standardized scale-up friendly methods for disease modeling and cell therapeutic applications.
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but the definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters which describe the physical, chemical and biological processes in the closed methanogenic landfill and to make the monitoring process less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research is a precisely defined minimal set of parameters which should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.
On Adding Structure to Unstructured Overlay Networks
NASA Astrophysics Data System (ADS)
Leitão, João; Carvalho, Nuno A.; Pereira, José; Oliveira, Rui; Rodrigues, Luís
Unstructured peer-to-peer overlay networks are very resilient to churn and topology changes, while requiring little maintenance cost. Therefore, they provide an infrastructure for building highly scalable, large-scale services in dynamic networks. Typically, the overlay topology is defined by a peer sampling service that aims at maintaining, in each process, a random partial view of the peers in the system. The resulting random unstructured topology is suboptimal when a specific performance metric is considered. On the other hand, structured approaches (for instance, a spanning tree) may optimize a given target performance metric but are highly fragile. In fact, the cost of maintaining structures with strong constraints may easily become prohibitive in highly dynamic networks. This chapter discusses different techniques that aim at combining the advantages of unstructured and structured networks. Namely, we focus on two distinct approaches, one based on optimizing the overlay and another based on optimizing the gossip mechanism itself.
Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D
2013-03-01
For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, numerous strategies are being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search algorithm. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of the number of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
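As a rough illustration of the grid-search idea, the sketch below exhaustively tests combinations of two hypothetical filter thresholds (a score cut-off and a mass-error cut-off) and keeps the combination that maximizes identifications at a fixed decoy-estimated false discovery rate. The parameter names and the FDR criterion are assumptions for the example, not details of the STEPS implementation.

```python
import itertools

def grid_search_filters(psms, score_cuts, ppm_cuts, max_fdr=0.01):
    """Exhaustively test filter combinations and keep the one that yields
    the most passing target identifications while staying under max_fdr.

    psms : list of dicts with keys 'score', 'ppm_error', 'is_decoy'
    """
    best = (0, None)
    for score_cut, ppm_cut in itertools.product(score_cuts, ppm_cuts):
        passing = [p for p in psms
                   if p['score'] >= score_cut and abs(p['ppm_error']) <= ppm_cut]
        if not passing:
            continue
        decoys = sum(p['is_decoy'] for p in passing)
        fdr = decoys / len(passing)
        n_targets = len(passing) - decoys
        if fdr <= max_fdr and n_targets > best[0]:
            best = (n_targets, {'score': score_cut, 'ppm': ppm_cut, 'fdr': fdr})
    return best

# usage: grid_search_filters(psm_list, score_cuts=[10, 20, 30], ppm_cuts=[5, 10, 20])
```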
NASA Technical Reports Server (NTRS)
Taylor, R. B.; Zwicke, P. E.; Gold, P.; Miao, W.
1980-01-01
An analytical study was conducted to define the basic configuration of an active control system for helicopter vibration and gust response alleviation. The study culminated in a control system design with two separate systems: a narrow band loop for vibration reduction and a wider band loop for gust response alleviation. The narrow band vibration loop utilizes the standard swashplate control configuration. The controller for the vibration loop is based on adaptive optimal control theory and is designed to adapt to any flight condition, including maneuvers and transients. The prime characteristic of the vibration control system is its real-time capability. The gust alleviation control system studied consists of optimal sampled-data feedback gains together with an optimal one-step-ahead prediction. The prediction permits the estimation of the gust disturbance, which can then be used to minimize the gust effects on the helicopter.
Design of Biomedical Robots for Phenotype Prediction Problems.
deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan Luis; Sonis, Stephen T
2016-08-01
Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem.
Kennedy, Kristen M.; Erickson, Kirk I.; Rodrigue, Karen M.; Voss, Michelle W.; Colcombe, Stan J.; Kramer, Arthur F.; Acker, James D.; Raz, Naftali
2009-01-01
Regional manual volumetry is the gold standard of in vivo neuroanatomy, but it is labor-intensive, can be imperfectly reliable, and allows measurement of only a limited number of regions. Voxel-based morphometry (VBM) has perfect repeatability and assesses local structure across the whole brain. However, its anatomic validity is unclear, and with its increasing popularity, a systematic comparison of VBM to manual volumetry is necessary. The few existing comparison studies are limited by small samples, qualitative comparisons, and limited selection and modest reliability of manual measures. Our goal was to overcome those limitations by quantitatively comparing optimized VBM findings with highly reliable multiple regional measures in a large sample (N = 200) across a wide age span (18–81 years). We report a complex pattern of similarities and differences. Peak values of VBM volume estimates (modulated density) produced stronger age differences and a different spatial distribution from manual measures. However, when we aggregated VBM-derived information across voxels contained in specific anatomically defined regions (masks), the patterns of age differences became more similar, although important discrepancies emerged. Notably, VBM revealed stronger age differences in the regions bordering CSF and white matter areas prone to leukoaraiosis, and VBM was more likely to report nonlinearities in age-volume relationships. In the white matter regions, manual measures showed stronger negative associations with age than the corresponding VBM-based masks. We conclude that VBM provides realistic estimates of age differences in regional gray matter only when applied to anatomically defined regions, but overestimates effects when individual peaks are interpreted. It may be beneficial to use VBM as a first-pass strategy, followed by manual measurement of anatomically defined regions. PMID:18276037
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation that computes the variance error in discharge measurements for different sampling times under the assumption of uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
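Under that uncorrelated-samples assumption, the random error of the time-averaged discharge shrinks with the number of samples collected during the exposure time. The sketch below is a textbook-style illustration of this scaling, not the paper's exact formulation; the variable names and numbers are hypothetical.

```python
import numpy as np

def discharge_variance(sigma_q, exposure_time, sample_interval):
    """Variance of a time-averaged discharge estimate when successive
    transect samples are treated as uncorrelated.

    sigma_q         : standard deviation of instantaneous discharge samples
    exposure_time   : total sampling (exposure) time, seconds
    sample_interval : time between successive samples, seconds
    """
    n_samples = max(1, int(exposure_time / sample_interval))
    return sigma_q ** 2 / n_samples

# longer exposure times shrink the random error of the mean discharge
for t in (60, 300, 900):
    print(t, discharge_variance(sigma_q=15.0, exposure_time=t, sample_interval=1.0))
```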
An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements
NASA Astrophysics Data System (ADS)
Kang, D.
2015-12-01
In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean computed from a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted summation of the error variance at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
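The core of this approach is a weighted least-squares fit of a similarity profile to all available measurement levels. The sketch below fits a neutral-stability log-law wind profile, used here as a simplified stand-in for the full MOST profile (which also involves stability corrections), by minimizing a cost function weighted by the per-level sample variances; the profile form, parameter bounds, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

KAPPA = 0.4  # von Karman constant

def neutral_log_profile(z, u_star, z0):
    """Neutral-stability log-law wind profile (simplified MOST stand-in)."""
    return (u_star / KAPPA) * np.log(z / z0)

def fit_surface_layer(z_obs, u_obs, u_var):
    """Minimize a weighted sum of squared errors over all measurement
    levels; weights are inverse sample variances at each altitude."""
    def cost(params):
        u_star, z0 = params
        resid = u_obs - neutral_log_profile(z_obs, u_star, z0)
        return np.sum(resid ** 2 / u_var)
    result = minimize(cost, x0=[0.3, 0.01], bounds=[(1e-3, 2.0), (1e-5, 1.0)])
    return result.x  # (friction velocity, roughness length)

# synthetic multi-level measurements in the lowest 50 m
z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
u = neutral_log_profile(z, 0.35, 0.02) + np.random.default_rng(1).normal(0, 0.2, z.size)
print(fit_surface_layer(z, u, u_var=np.full(z.size, 0.04)))
```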
Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks
Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.
2011-01-01
Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
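The iterative selection step can be mimicked, in a much simplified form, by a greedy max-min rule that repeatedly picks the candidate site most dissimilar (in standardized environmental space) from those already chosen. This stand-in ignores the MaxEnt modeling itself; the array layout and function name are assumptions.

```python
import numpy as np

def select_dissimilar_sites(env, n_sites=8):
    """Greedy stand-in for iterative dissimilar-site selection: repeatedly
    pick the candidate farthest (max-min distance) from the sites already
    chosen, in standardized environmental space.

    env : (candidates x factors) array, e.g. temperature, precipitation,
          elevation, and vegetation code for each 20 km x 20 km cell
    """
    z = (env - env.mean(axis=0)) / env.std(axis=0)        # standardize factors
    chosen = [int(np.argmax(np.linalg.norm(z - z.mean(axis=0), axis=1)))]
    while len(chosen) < n_sites:
        d = np.linalg.norm(z[:, None, :] - z[chosen][None, :, :], axis=2)
        chosen.append(int(np.argmax(d.min(axis=1))))       # farthest from nearest chosen site
    return chosen
```

Each iteration therefore adds the site that fills the largest remaining gap in the environmental envelope.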
Assessment of physicochemical and antioxidant characteristics of Quercus pyrenaica honeydew honeys.
Shantal Rodríguez Flores, M; Escuredo, Olga; Carmen Seijo, M
2015-01-01
Consumers are exhibiting increasing interest in honeydew honey, principally due to its functional properties. Some plants can be sources of honeydew honey, but in north-western Spain, this honey type only comes from Quercus pyrenaica. In the present study, the melissopalynological and physicochemical characteristics and the antioxidant properties of 32 honeydew honey samples are described. Q. pyrenaica honeydew honey was defined by its colour, high pH, phenols and flavonoids. Multivariate statistical techniques were used to analyse the influence of the production year on the honey's physicochemical parameters and polyphenol content. Differences among the honey samples were found, showing that weather affected the physicochemical composition of the honey samples. Optimal conditions for oak growth favoured the production of honeydew honey. Copyright © 2014 Elsevier Ltd. All rights reserved.
Adegboye, Amanda R A; Anderssen, Sigmund A; Froberg, Karsten; Sardinha, Luis B; Heitmann, Berit L; Steene-Johannessen, Jostein; Kolle, Elin; Andersen, Lars B
2011-07-01
To define the optimal cut-off for low aerobic fitness and to evaluate its accuracy to predict clustering of risk factors for cardiovascular disease in children and adolescents. Study of diagnostic accuracy using a cross-sectional database. European Youth Heart Study including Denmark, Portugal, Estonia and Norway. 4500 schoolchildren aged 9 or 15 years. Aerobic fitness was expressed as peak oxygen consumption relative to bodyweight (mlO(2)/min/kg). Risk factors included in the composite risk score (mean of z-scores) were systolic blood pressure, triglyceride, total cholesterol/HDL-cholesterol ratio, insulin resistance and sum of four skinfolds. 14.5% of the sample, with a risk score above one SD, were defined as being at risk. Receiver operating characteristic analysis was used to define the optimal cut-off for sex and age-specific distribution. In girls, the optimal cut-offs for identifying individuals at risk were: 37.4 mlO(2)/min/kg (9-year-old) and 33.0 mlO(2)/min/kg (15-year-old). In boys, the optimal cut-offs were 43.6 mlO(2)/min/kg (9-year-old) and 46.0 mlO(2)/min/kg (15-year-old). Specificity (range 79.3-86.4%) was markedly higher than sensitivity (range 29.7-55.6%) for all cut-offs. Positive predictive values ranged from 19% to 41% and negative predictive values ranged from 88% to 90%. The diagnostic accuracy for identifying children at risk, measured by the area under the curve (AUC), was significantly higher than what would be expected by chance (AUC >0.5) for all cut-offs. Aerobic fitness is easy to measure, and is an accurate tool for screening children with clustering of cardiovascular risk factors. Promoting physical activity in children with aerobic fitness level lower than the suggested cut-points might improve their health.
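The central computation here is a receiver operating characteristic analysis that selects the fitness cut-off giving the best balance of sensitivity and specificity. The sketch below does this with the Youden index on synthetic data; the real analysis was run separately for each sex and age group, and the numbers below are made up for illustration.

```python
import numpy as np

def optimal_cutoff(fitness, at_risk):
    """Pick the fitness cut-off that best balances sensitivity and
    specificity (maximum Youden index, sensitivity + specificity - 1).
    Lower fitness is treated as the 'positive' (at-risk) test result.

    fitness : peak VO2 values (mlO2/min/kg), one per child
    at_risk : boolean array, True if the composite risk score exceeds +1 SD
    """
    best = (-1.0, None)
    for cut in np.unique(fitness):
        test_pos = fitness <= cut
        sens = np.mean(test_pos[at_risk])       # true positives / all at-risk
        spec = np.mean(~test_pos[~at_risk])     # true negatives / all not-at-risk
        youden = sens + spec - 1
        if youden > best[0]:
            best = (youden, float(cut))
    return best  # (Youden index, cut-off)

rng = np.random.default_rng(0)
at_risk = rng.random(500) < 0.15
fitness = np.where(at_risk, rng.normal(36, 5, 500), rng.normal(44, 5, 500))
print(optimal_cutoff(fitness, at_risk))
```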
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
Structural topology optimization with fuzzy constraint
NASA Astrophysics Data System (ADS)
Rosko, Peter
2011-12-01
The paper deals with structural topology optimization under a fuzzy constraint. The optimal topology of the structure is defined as a material distribution problem. The objective is the weight of the structure. Multifrequency dynamic loading is considered. The optimal topology design of the structure has to eliminate the danger of resonance vibration. The uncertainty of the loading is defined with the help of fuzzy loading. A special fuzzy constraint is created from the exciting frequencies. The presented study is applicable in engineering and civil engineering. An example demonstrates the presented theory.
NASA Technical Reports Server (NTRS)
Borsody, J.
1976-01-01
Mathematical equations are derived by using the Maximum Principle to obtain the maximum payload capability of a reusable tug for planetary missions. The mathematical formulation includes correction for nodal precession of the space shuttle orbit. The tug performs this nodal correction in returning to this precessed orbit. The sample case analyzed represents an inner planet mission as defined by the declination (fixed) and right ascension of the outgoing asymptote and the mission energy. Payload capability is derived for a typical cryogenic tug and the sample case with and without perigee propulsion. Optimal trajectory profiles and some important orbital elements are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinlan, D.; Yi, Q.; Buduc, R.
2005-02-17
ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is a part of current research on telescoping languages, which provides optimizations of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques, common in well defined languages, to the optimization of scientific applications using well defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of applications codes. We currently support full C and C++ (including template instantiation etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full scale DOE applications codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full scale applications.
Zanchetti Meneghini, Leonardo; Rübensam, Gabriel; Claudino Bica, Vinicius; Ceccon, Amanda; Barreto, Fabiano; Flores Ferrão, Marco; Bergold, Ana Maria
2014-01-01
A simple and inexpensive method based on solvent extraction followed by low-temperature clean-up was applied for the determination of seven pyrethroid residues in bovine raw milk using gas chromatography coupled to tandem mass spectrometry (GC-MS/MS) and gas chromatography with electron-capture detection (GC-ECD). The sample extraction procedure was established through the evaluation of seven different extraction protocols, assessed in terms of analyte recovery and clean-up efficiency. Sample preparation optimization was based on a Doehlert design using fifteen runs with three different variables. Response surface methodologies and polynomial analysis were used to define the best extraction conditions. Method validation was carried out based on the SANCO guide parameters and assessed by multivariate analysis. Method performance was considered satisfactory since mean recoveries were between 87% and 101% for three distinct concentrations. Accuracy and precision deviations were within ±20%, and there were no significant differences (p < 0.05) between results obtained by the GC-ECD and GC-MS/MS techniques. The method has been applied to routine analysis for the determination of pyrethroid residues in bovine raw milk in the Brazilian National Residue Control Plan since 2013, in which a total of 50 samples were analyzed. PMID:25380457
von Glischinski, M; Willutzki, U; Stangier, U; Hiller, W; Hoyer, J; Leibing, E; Leichsenring, F; Hirschfeld, G
2018-02-11
The Liebowitz Social Anxiety Scale (LSAS) is the most frequently used instrument to assess social anxiety disorder (SAD) in clinical research and practice. Both a self-reported (LSAS-SR) and a clinician-administered (LSAS-CA) version are available. The aim of the present study was to define optimal cut-off (OC) scores for remission and response to treatment for the LSAS in a German sample. Data of N = 311 patients with SAD were used who had completed psychotherapeutic treatment within a multicentre randomized controlled trial. Diagnosis of SAD and reduction in symptom severity according to the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, 4th edition, served as gold standard. OCs yielding the best balance between sensitivity and specificity were determined using receiver operating characteristics. The variability of the resulting OCs was estimated by nonparametric bootstrapping. Using diagnosis of SAD (present vs. absent) as a criterion, results for remission indicated cut-off values of 35 for the LSAS-SR and 30 for the LSAS-CA, with acceptable sensitivity (LSAS-SR: .83, LSAS-CA: .88) and specificity (LSAS-SR: .82, LSAS-CA: .87). For detection of response to treatment, assessed by a 1-point reduction in the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, 4th edition, rating, a reduction of 28% for the LSAS-SR and 29% for the LSAS-CA yielded the best balance between sensitivity (LSAS-SR: .75, LSAS-CA: .83) and specificity (LSAS-SR: .76, LSAS-CA: .80). To our knowledge, we are the first to define cut points for the LSAS in a German sample. Overall, the cut points for remission and response corroborate previously reported cut points, now building on a broader data basis. Copyright © 2018 John Wiley & Sons, Ltd.
Multidisciplinary design optimization: An emerging new engineering discipline
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1993-01-01
This paper defines the Multidisciplinary Design Optimization (MDO) as a new field of research endeavor and as an aid in the design of engineering systems. It examines the MDO conceptual components in relation to each other and defines their functions.
An optimization model to agroindustrial sector in antioquia (Colombia, South America)
NASA Astrophysics Data System (ADS)
Fernandez, J.
2015-06-01
This paper develops a proposal for a general optimization model of the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models have been solved using the ProModel simulation tool and the GAMS optimization software. The model defines the operations that constitute the production and marketing of the sector, with statistically validated data taken directly from each operation through field work; from these, the discrete simulation model of the operations and the linear optimization model of the entire industry chain are formulated. The model is solved with the tools described above, and the results are validated in a case study.
Chaikuad, Apirat; Knapp, Stefan; von Delft, Frank
2015-01-01
The quest for an optimal limited set of effective crystallization conditions remains a challenge in macromolecular crystallography, an issue that is complicated by the large number of chemicals which have been deemed to be suitable for promoting crystal growth. The lack of rational approaches towards the selection of successful chemical space and representative combinations has led to significant overlapping conditions, which are currently present in a multitude of commercially available crystallization screens. Here, an alternative approach to the sampling of widely used PEG precipitants is suggested through the use of PEG smears, which are mixtures of different PEGs with a requirement of either neutral or cooperatively positive effects of each component on crystal growth. Four newly defined smears were classified by molecular-weight groups and enabled the preservation of specific properties related to different polymer sizes. These smears not only allowed a wide coverage of properties of these polymers, but also reduced PEG variables, enabling greater sampling of other parameters such as buffers and additives. The efficiency of the smear-based screens was evaluated on more than 220 diverse recombinant human proteins, which overall revealed a good initial crystallization success rate of nearly 50%. In addition, in several cases successful crystallizations were only obtained using PEG smears, while various commercial screens failed to yield crystals. The defined smears therefore offer an alternative approach towards PEG sampling, which will benefit the design of crystallization screens sampling a wide chemical space of this key precipitant. PMID:26249344
European consensus conference for external quality assessment in molecular pathology.
van Krieken, J H; Siebers, A G; Normanno, N
2013-08-01
Molecular testing of tumor samples to guide treatment decisions is of increasing importance. Several drugs have been approved for treatment of molecularly defined subgroups of patients, and the number of agents requiring companion diagnostics for their prescription is expected to rapidly increase. The results of such testing directly influence the management of individual patients, with both false-negative and false-positive results being harmful for patients. In this respect, external quality assurance (EQA) programs are essential to guarantee optimal quality of testing. There are several EQA schemes available in Europe, but they vary in scope, size and execution. During a conference held in early 2012, medical oncologists, pathologists, geneticists, molecular biologists, EQA providers and representatives from pharmaceutical industries developed a guideline to harmonize the standards applied by EQA schemes in molecular pathology. The guideline comprises recommendations on the organization of an EQA scheme, defining the criteria for reference laboratories, requirements for EQA test samples and the number of samples that are needed for an EQA scheme. Furthermore, a scoring system is proposed and consequences of poor performance are formulated. Lastly, the contents of an EQA report, communication of the EQA results, EQA databases and participant manual are given.
Computational Tools and Algorithms for Designing Customized Synthetic Genes
Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris
2014-01-01
Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
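The "singular objective" of codon usage optimization mentioned above can be illustrated in a few lines: for each residue, pick the most frequent codon that does not create a forbidden restriction site. The codon frequencies, the forbidden-site list, and the function name below are hypothetical and only sketch the idea; real tools also score secondary structure, GC content, and other objectives.

```python
# Illustrative codon-usage table fragment (frequencies are hypothetical,
# not taken from any specific organism).
CODON_TABLE = {
    'M': {'ATG': 1.00},
    'K': {'AAA': 0.74, 'AAG': 0.26},
    'F': {'TTT': 0.58, 'TTC': 0.42},
    'L': {'CTG': 0.50, 'TTA': 0.14, 'TTG': 0.13, 'CTT': 0.12, 'CTC': 0.10, 'CTA': 0.01},
}

def codon_optimize(peptide, forbidden_sites=()):
    """First-generation-style single-objective design: choose the most
    frequent codon for each residue, falling back to the next-best codon
    if the choice would create a forbidden restriction site."""
    dna = ''
    for aa in peptide:
        for codon, _freq in sorted(CODON_TABLE[aa].items(),
                                   key=lambda kv: kv[1], reverse=True):
            candidate = dna + codon
            if not any(site in candidate for site in forbidden_sites):
                dna = candidate
                break
        else:
            raise ValueError(f'No codon for {aa} avoids the forbidden sites')
    return dna

print(codon_optimize('MKFL', forbidden_sites=('AAGCT',)))
```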
Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error
Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong
2013-01-01
A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performance in different color channels is compared, and the most suitable channel is selected. To further improve the accuracy, a combination approach is presented that uses both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its efficiency for wildfire segmentation. PMID:23878526
Lucernoni, Federico; Rizzotto, Matteo; Tapparo, Federica; Capelli, Laura; Sironi, Selena; Busini, Valentina
2016-11-01
The work focuses on the principles for the design of a specific static hood and on the definition of an optimal sampling procedure for the assessment of landfill gas (LFG) surface emissions. This is carried out by means of computational fluid dynamics (CFD) simulations to investigate the fluid dynamics conditions of the hood. The study proves that understanding the fluid dynamic conditions is fundamental in order to understand the sampling results and correctly interpret the measured concentration values by relating them to a suitable LFG emission model, and therefore to estimate emission rates. For this reason, CFD is a useful tool for the design and evaluation of sampling systems, among others, to verify the fundamental hypotheses on which the mass balance for the sampling hood is defined. The procedure here discussed, which is specific for the case of the investigated landfill, can be generalized to be applied also to different scenarios, where hood sampling is involved. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.
Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J
2016-08-01
The CellaVision Advanced Red Blood Cell (RBC) Software Application is a new software for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is an easy-to-use software that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing work load, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
Optimizing read-out of the NECTAr front-end electronics
NASA Astrophysics Data System (ADS)
Vorobiov, S.; Feinstein, F.; Bolmont, J.; Corona, P.; Delagnes, E.; Falvard, A.; Gascón, D.; Glicenstein, J.-F.; Naumann, C. L.; Nayman, P.; Ribo, M.; Sanuy, A.; Tavernet, J.-P.; Toussenel, F.; Vincent, P.
2012-12-01
We describe the optimization of the read-out specifications of the NECTAr front-end electronics for the Cherenkov Telescope Array (CTA). The NECTAr project aims at building and testing a demonstrator module of a new front-end electronics design, which takes advantage of the know-how acquired while building the cameras of the CAT, H.E.S.S.-I and H.E.S.S.-II experiments. The goal of the optimization work is to define the specifications of the digitizing electronics of a CTA camera, in particular the integration time window, sampling rate, and analog bandwidth, using physics simulations. For this work we employed real photomultiplier pulses, sampled every 100 ps with a 600 MHz bandwidth oscilloscope. The individual pulses are drawn randomly at the times at which the photo-electrons, originating from atmospheric showers, arrive at the focal planes of imaging atmospheric Cherenkov telescopes. The timing information is extracted from the existing CTA simulations on the GRID and organized in a local database, together with all the relevant physical parameters (energy, primary particle type, zenith angle, distance from the shower axis, pixel offset from the optical axis, night-sky background level, etc.) and detector configurations (telescope types, camera/mirror configurations, etc.). While investigating the parameter space, an optimal pixel charge integration time window, which minimizes the relative error in the measured charge, has been determined. This will make it possible to gain sensitivity and to lower the energy threshold of the CTA telescopes. We present the results of our optimizations and first measurements obtained using the NECTAr demonstrator module.
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Sensmeier, mark D.; Stewart, Bret A.
2006-01-01
Algorithms for rapid generation of moderate-fidelity structural finite element models of air vehicle structures have been developed to allow more accurate weight estimation earlier in the vehicle design process. Application of these algorithms should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. Recent enhancements to this approach include the porting of the algorithms to the platform-independent programming language Python and modifications to specifically consider morphing aircraft-type configurations. Two sample cases which illustrate these recent developments are presented.
Chemical disorder influence on magnetic state of optimally-doped La0.7Ca0.3MnO3
NASA Astrophysics Data System (ADS)
Rozenberg, E.; Auslender, M.; Shames, A. I.; Jung, G.; Felner, I.; Tsindlekht, M. I.; Mogilyansky, D.; Sominski, E.; Gedanken, A.; Mukovskii, Ya. M.; Gorodetsky, G.
2011-10-01
X-band electron magnetic resonance and dc/ac magnetic measurements have been employed to study the effects of chemical disorder on magnetic ordering in bulk and nanometer-sized single crystals and bulk ceramics of optimally-doped La0.7Ca0.3MnO3 manganite. The magnetic ground state of the bulk samples appeared to be ferromagnetic, with a lower Curie temperature and higher magnetic homogeneity in the vicinity of the ferromagnetic-paramagnetic phase transition in the crystal, as compared with those characteristics in the ceramics. The influence of technologically driven "macroscopic" fluctuations of the Ca-dopant level in the crystal and "mesoscopic" disorder within grain boundary regions in the ceramics was proposed to be responsible for these effects. Surface spin disorder, together with pronounced inter-particle interactions within the agglomerated nano-sample, results in a well-defined core/shell spin configuration in La0.7Ca0.3MnO3 nano-crystals. The analysis of the electron paramagnetic resonance data clarified the reasons for the observed difference in the magnetic order. Lattice effects dominate the first-order nature of the magnetic phase transition in the bulk samples. However, mesoscale chemical disorder seems to be responsible for the appearance of small ferromagnetic polarons in the paramagnetic state of the bulk ceramics. The experimental results and their analysis indicate that chemical/magnetic disorder has a strong impact on the magnetic state even in the case of mostly stable, optimally hole-doped manganites.
Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.
1979-01-01
Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h−1 for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S. M.; Kim, K. Y.
The printed circuit heat exchanger (PCHE) has recently been considered as a recuperator for the high temperature gas cooled reactor. In this work, the zigzag channels of a PCHE have been optimized by using three-dimensional Reynolds-Averaged Navier-Stokes (RANS) analysis and a response surface approximation (RSA) modeling technique to enhance thermal-hydraulic performance. The shear stress transport turbulence model is used as the turbulence closure. The objective function is defined as a linear combination of functions related to the heat transfer and the friction loss of the PCHE, respectively. Three geometric design variables, viz., the ratio of the radius of the fillet to the hydraulic diameter of the channels, the ratio of the wavelength to the hydraulic diameter of the channels, and the ratio of the wave height to the hydraulic diameter of the channels, are used for the optimization. Design points are selected through Latin-hypercube sampling. The optimal design is determined through the RSA model, which uses RANS-derived calculations at the design points. The results show that the optimum shape considerably enhances the thermal-hydraulic performance compared with a reference shape. (authors)
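The surrogate-based workflow described here (sample the design space with a Latin hypercube, fit a response surface to the expensive evaluations, then optimize on the surrogate) can be sketched generically. In the example below a cheap analytic function stands in for the RANS-derived objective, and the sampler, feature set, and variable ranges are assumptions rather than the authors' setup.

```python
import numpy as np

def latin_hypercube(n_points, n_vars, rng):
    """Simple Latin-hypercube sample on the unit cube: one point per
    stratum in every dimension, with strata shuffled independently."""
    u = (np.arange(n_points)[:, None] + rng.random((n_points, n_vars))) / n_points
    for j in range(n_vars):
        u[:, j] = u[rng.permutation(n_points), j]
    return u

def fit_quadratic_rsa(x, y):
    """Fit a full quadratic response surface (1, x, squares, cross terms)
    by least squares and return a predictor function."""
    def features(x):
        n = x.shape[1]
        cols = [np.ones(len(x))]
        cols += [x[:, i] for i in range(n)]
        cols += [x[:, i] * x[:, j] for i in range(n) for j in range(i, n)]
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(features(x), y, rcond=None)
    return lambda x_new: features(np.atleast_2d(x_new)) @ beta

# stand-in objective: weighted combination of 'heat transfer' and 'friction' terms
rng = np.random.default_rng(0)
x = latin_hypercube(20, 3, rng)                      # three geometric ratios in [0, 1]
y = 1.5 * x[:, 0] - 2.0 * x[:, 1] ** 2 + 0.8 * x[:, 0] * x[:, 2]
surrogate = fit_quadratic_rsa(x, y)
grid = latin_hypercube(2000, 3, rng)
print(grid[np.argmax(surrogate(grid))])              # candidate optimum on the surrogate
```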
NASA Astrophysics Data System (ADS)
Ha, Taewoo; Lee, Howon; Sim, Kyung Ik; Kim, Jonghyeon; Jo, Young Chan; Kim, Jae Hoon; Baek, Na Yeon; Kang, Dai-ill; Lee, Han Hyoung
2017-05-01
We have established optimal methods for terahertz time-domain spectroscopic analysis of highly absorbing pigments in powder form based on our investigation of representative traditional Chinese pigments, such as azurite [blue-based color pigment], Chinese vermilion [red-based color pigment], and arsenic yellow [yellow-based color pigment]. To accurately extract the optical constants in the terahertz region of 0.1 - 3 THz, we carried out transmission measurements in such a way that intense absorption peaks did not completely suppress the transmission level. This required preparation of pellet samples with optimized thicknesses and material densities. In some cases, mixing the pigments with polyethylene powder was required to minimize absorption due to certain peak features. The resulting distortion-free terahertz spectra of the investigated set of pigment species exhibited well-defined unique spectral fingerprints. Our study will be useful to future efforts to establish non-destructive analysis methods of traditional pigments, to construct their spectral databases, and to apply these tools to restoration of cultural heritage materials.
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curve over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
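The pointwise intervals described here rest on resampling patients and re-estimating the curve on a grid of biomarker values. The sketch below implements a generic percentile bootstrap for pointwise intervals only (the paper's uniform confidence bands require a different resampling construction); the estimator, grid, and toy data are placeholders.

```python
import numpy as np

def bootstrap_pointwise_ci(x, y, estimator, grid, n_boot=1000, alpha=0.05, seed=0):
    """Pointwise percentile-bootstrap confidence intervals for a curve
    estimate (e.g. a covariate-specific treatment effect) on a grid of
    biomarker values. estimator(x, y, grid) must return the fitted curve."""
    rng = np.random.default_rng(seed)
    n = len(x)
    curves = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample patients with replacement
        curves[b] = estimator(x[idx], y[idx], grid)
    lower = np.quantile(curves, alpha / 2, axis=0)
    upper = np.quantile(curves, 1 - alpha / 2, axis=0)
    return lower, upper

# toy example: quadratic least-squares fit as the curve estimator
def poly_fit(x, y, grid):
    return np.polyval(np.polyfit(x, y, deg=2), grid)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 0.5 * x - x ** 2 + rng.normal(0, 0.1, 200)
lo, hi = bootstrap_pointwise_ci(x, y, poly_fit, grid=np.linspace(0, 1, 25))
```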
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
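For a single constraint, the analytical style of reformulation can be illustrated by tightening a deterministic limit by a multiple of the standard deviation of the constrained quantity. The sketch below assumes independent, zero-mean Gaussian forecast errors and a linearized sensitivity vector; these assumptions, the function name, and the numbers are illustrative, not the paper's exact derivation.

```python
import numpy as np
from scipy.stats import norm

def tighten_limit(limit, sigma_injection, sensitivity, epsilon=0.05):
    """Analytical single-chance-constraint reformulation under a Gaussian
    assumption: enforce the deterministic constraint against a limit
    tightened by z_(1-epsilon) times the std. dev. of the constrained quantity.

    limit           : original upper bound (e.g. a line flow limit, MW)
    sigma_injection : std. dev. of each uncertain power injection
    sensitivity     : linearized sensitivity of the constrained quantity
                      to each uncertain injection
    epsilon         : allowed violation probability
    """
    sigma_flow = np.sqrt(np.sum((sensitivity * sigma_injection) ** 2))
    return limit - norm.ppf(1 - epsilon) * sigma_flow

# a 100 MW flow limit with two independent uncertain wind in-feeds
print(tighten_limit(100.0, sigma_injection=np.array([10.0, 5.0]),
                    sensitivity=np.array([0.4, 0.7])))
```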
GC/MS analysis of pesticides in the Ferrara area (Italy) surface water: a chemometric study.
Pasti, Luisa; Nava, Elisabetta; Morelli, Marco; Bignami, Silvia; Dondi, Francesco
2007-01-01
The development of a network to monitor surface waters is a critical element in the assessment, restoration and protection of water quality. In this study, concentrations of 42 pesticides--determined by GC-MS on samples from 11 points along the rivers of the Ferrara area--were analyzed with chemometric tools. The data were collected over a three-year period (2002-2004). Principal component analysis of the detected pesticides was carried out in order to define the best spatial locations for the sampling points. The results obtained have been interpreted in view of agricultural land use. Time series of pesticide contents in surface waters were analyzed using the autocorrelation function. This chemometric tool reveals seasonal trends and makes it possible to optimize the sampling frequency in order to detect the effective maximum pesticide content.
On optimal soft-decision demodulation. [in digital communication system
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
A comparative study on stress and compliance based structural topology optimization
NASA Astrophysics Data System (ADS)
Hailu Shimels, G.; Dereje Engida, W.; Fakhruldin Mohd, H.
2017-10-01
Most structural topology optimization problems have been formulated and solved to either minimize the compliance of a structure under a volume constraint or minimize its weight under stress constraints. Although much research has been conducted on these two formulation techniques separately, there is no clear comparative study between the two approaches. This paper intends to compare these formulation techniques so that an end user or designer can choose the best one for the problem at hand. Benchmark problems under the same boundary and loading conditions are defined and solved, and the results are compared for the two formulations. Simulation results show that the two formulation techniques are dependent on the type of loading and boundary conditions defined. The maximum stress induced in the design domain is higher when the design domains are formulated using compliance-based formulations. Optimal layouts from the compliance minimization formulation are more complex than stress-based ones, which may make manufacturing of the optimal layouts challenging. Optimal layouts from compliance-based formulations are dependent on the material to be distributed; optimal layouts from the stress-based formulation, on the other hand, are dependent on the type of material used to define the design domain. The high computational time of stress-based topology optimization is still a challenge because stress constraints are defined at the element level. The results also show that adjusting the convergence criteria can be an alternative way to minimize the maximum stress developed in optimal layouts. Therefore, a designer or end user should choose a formulation method based on the design domain defined and the boundary conditions considered.
Progress toward the determination of correct classification rates in fire debris analysis.
Waddell, Erin E; Song, Emma T; Rinke, Caitlin N; Williams, Mary R; Sigman, Michael E
2013-07-01
Principal components analysis (PCA), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) were used to develop a multistep classification procedure for determining the presence of ignitable liquid residue in fire debris and assigning any ignitable liquid residue present into the classes defined under the American Society for Testing and Materials (ASTM) E 1618-10 standard method. A multistep classification procedure was tested by cross-validation based on model data sets comprised of the time-averaged mass spectra (also referred to as total ion spectra) of commercial ignitable liquids and pyrolysis products from common building materials and household furnishings (referred to simply as substrates). Fire debris samples from laboratory-scale and field test burns were also used to test the model. The optimal model's true-positive rate was 81.3% for cross-validation samples and 70.9% for fire debris samples. The false-positive rate was 9.9% for cross-validation samples and 8.9% for fire debris samples. © 2013 American Academy of Forensic Sciences.
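The first stage of such a multistep procedure, dimensionality reduction followed by a discriminant classifier, can be sketched with standard tooling. The example below chains PCA and LDA and cross-validates on random placeholder data; the data shapes, class labels, and number of components are assumptions, and the actual published procedure also involves QDA and multiple decision steps.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# X: total ion spectra (samples x m/z bins), y: ASTM E1618 class labels or a
# substrate-only label -- both random placeholders here, not real fire debris data.
rng = np.random.default_rng(0)
X = rng.random((120, 300))
y = rng.integers(0, 3, size=120)

# Reduce the spectra with PCA, then classify with LDA; true/false-positive
# rates would come from cross-validation on labeled model data sets.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```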
Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing
2018-05-09
We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~ 0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed sensitivity of 70% and specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed challenges of the sample size, disease heterogeneity and technical batch effects and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few new papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverings, or simple random sampling are exploited. Typical catalytic output distributions exhibit unbalanced datasets for which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here we suggest a new iterative algorithm for the characterization of the search space structure, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" space zones to "unsteady" ones, which require more experiments to be well modeled. The evaluation of new algorithms through benchmarks is compulsory given the lack of prior proof of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on future machine learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is also investigated.
DNASynth: a software application to optimization of artificial gene synthesis
NASA Astrophysics Data System (ADS)
Muczyński, Jan; Nowak, Robert M.
2017-08-01
DNASynth is a client-server software application whose client runs in a web browser. The aim of this program is to support and optimize the process of synthesizing artificial genes using the Ligase Chain Reaction (LCR). Thanks to LCR it is possible to obtain a DNA strand coding for a user-defined peptide. The DNA sequence is calculated by an optimization algorithm that considers optimal codon usage, minimal energy of secondary structures and a minimal number of required LCRs. Additionally, the absence of sequences characteristic for a user-defined set of restriction enzymes is guaranteed. The presented software was tested on synthetic and real data.
Optimization of Composite Structures with Curved Fiber Trajectories
NASA Astrophysics Data System (ADS)
Lemaire, Etienne; Zein, Samih; Bruyneel, Michael
2014-06-01
This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach for generating equidistant fiber trajectories based on the Fast Marching Method. Starting from a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method determines the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications of maximum-stiffness optimization are proposed. The shape of the design space is discussed with regard to local and global optimal solutions.
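The equidistant-trajectory idea can be sketched on a flat plate with the scikit-fmm and scikit-image packages, assuming a straight reference fiber path and a regular grid; the paper's method works on curved meshed surfaces with its own Fast Marching implementation.

    # Minimal sketch (assumptions: flat rectangular plate on a grid, straight reference fiber,
    # scikit-fmm and scikit-image available).
    import numpy as np
    import skfmm
    from skimage import measure

    ny, nx = 200, 300
    y, x = np.mgrid[0:ny, 0:nx]

    # Signed "level set" whose zero contour is the reference fiber (here the line y = 60 + 0.2*x).
    phi = (y - (60.0 + 0.2 * x)).astype(float)

    # Distance from the reference curve via the Fast Marching Method.
    dist = skfmm.distance(phi, dx=1.0)

    # Equidistant fiber trajectories: iso-distance contours spaced by one tow width.
    tow_width = 8.0
    levels = np.arange(-40, 41, tow_width)
    trajectories = [measure.find_contours(dist, lv) for lv in levels]
    print(sum(len(t) for t in trajectories), "fiber paths extracted")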
Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N
2015-08-01
A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate with behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and the features derived from the two spike sorting methods. The first spike sorting approach constructs a novel dictionary learning algorithm in a Compressed Sensing (CS) framework and uses a joint prediction scheme to determine the class of neural spikes; the second is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc., and controller parameters are optimized to apply stimulation based on the state vector's current and historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and of concurrent development with ongoing scientific research aimed at defining brain network connectivity and neural network dynamics that vary across individual patients and over time.
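One of the derived LFP features mentioned above, cross-frequency (phase-amplitude) coupling, can be illustrated with a short Python sketch using a mean-vector-length estimator on synthetic data; this is only an illustration of the feature, not the device's embedded implementation.

    # Minimal sketch of a phase-amplitude coupling feature on synthetic LFP data.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)                       # 6 Hz "phase" rhythm
    gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t) * 0.3  # 60 Hz amplitude coupled to theta
    lfp = theta + gamma + 0.1 * np.random.randn(t.size)

    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(lfp, 4, 8)))          # low-frequency phase
    amp = np.abs(hilbert(bandpass(lfp, 50, 70)))            # high-frequency amplitude
    coupling = np.abs(np.mean(amp * np.exp(1j * phase)))    # mean vector length
    print(f"phase-amplitude coupling strength: {coupling:.3f}")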
Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael
2017-08-08
Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recent advances in high-throughput sequencing allow for virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies, providing opportunities to identify atypical and novel viruses not commonly accounted for in routine diagnostic panels.
Monitoring of trace elements in breast milk sampling and measurement procedures.
Spĕvácková, V; Rychlík, S; Cejchanová, M; Spĕvácek, V
2005-06-01
The aims of this study were to test analytical procedures for the determination of Cd, Cu, Mn, Pb, Se and Zn in breast milk and to establish optimum sampling conditions for monitoring purposes. Two population groups were analysed: (1) seven women from Prague whose breast milk was sampled on days 1, 2, 3, 4, 10, 20 and 30 after delivery; (2) 200 women from four regions (two industrial and two rural) whose breast milk was sampled at defined intervals. All samples were mineralised in a microwave oven in a mixture of HNO3 + H2O2 and analysed by atomic absorption spectrometry. Conditions for the measurement of the elements under study (electrothermal atomisation for Cd, Mn and Pb, the flame technique for Cu and Zn, and the hydride generation technique for Se) were optimized. Using the optimized parameters, the analyses were performed and the following conclusions were drawn: the concentrations of zinc and manganese decreased very sharply over the first days, that of copper increased slightly within the first two days and then decreased slightly, and that of selenium did not change significantly. Partial "stabilisation" was reached after the second decade. No correlation among the elements was found. A significant difference between whole and skim milk was found only for selenium (26% rel. higher in whole milk). Most cadmium and lead concentrations were below the detection limits of the method (0.3 microg x l(-1) and 8.2 microg x l(-1), respectively, calculated for the original sample). For biological monitoring, maintaining consistent sampling conditions, and especially the time of sampling, is crucial.
Helicopter TEM parameters analysis and system optimization based on time constant
NASA Astrophysics Data System (ADS)
Xiao, Pan; Wu, Xin; Shi, Zongyang; Li, Jutao; Liu, Lihua; Fang, Guangyou
2018-03-01
The helicopter transient electromagnetic (TEM) method is a common geophysical prospecting technique, widely used in mineral detection, groundwater exploration and environmental investigation. In order to develop an efficient helicopter TEM system, it is necessary to analyze and optimize the system parameters. In this paper, a simple and quantitative method is proposed to analyze the system parameters, such as waveform, power, base frequency, measured field and sampling time. A wire-loop model is used to define a comprehensive 'time constant domain' that spans a range of time constants, analogous to a range of conductances, from which the characteristics of the system parameters in this domain are obtained. It is found that the distortion caused by the transmitting base frequency is less than 5% when the ratio of the transmitting period to the target time constant is greater than 6, and that when the sampling time window is less than the target time constant, the distortion caused by the sampling time window is less than 5%. Based on this method, a helicopter TEM system, called CASHTEM, was designed, and a flight test was carried out over a known mining area. The test results show that the system has good detection performance, verifying the effectiveness of the method.
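The quoted 5% threshold at a period-to-time-constant ratio of about 6 can be reproduced with a simple superposition sketch, assuming a single-time-constant exponential target and an idealized bipolar square-wave transmitter; this is an illustrative model, not the authors' wire-loop analysis.

    # Minimal sketch: residual responses from earlier half-cycles alternate in sign, so the
    # relative distortion is an alternating geometric series in exp(-T/(2*tau)).
    import numpy as np

    def base_frequency_distortion(T_over_tau, n_terms=200):
        """Relative error in the measured decay caused by superposed earlier half-cycles."""
        k = np.arange(1, n_terms + 1)
        return abs(np.sum((-1.0) ** k * np.exp(-k * T_over_tau / 2.0)))

    for r in (2, 4, 6, 8, 10):
        print(f"T/tau = {r:2d}  distortion = {100 * base_frequency_distortion(r):5.2f} %")
    # T/tau = 6 gives roughly 4.7 %, consistent with the <5 % criterion quoted above.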
Sohn, Martin Y; Barnes, Bryan M; Silver, Richard M
2018-03-01
Accurate optics-based dimensional measurements of features sized well-below the diffraction limit require a thorough understanding of the illumination within the optical column and of the three-dimensional scattered fields that contain the information required for quantitative metrology. Scatterfield microscopy can pair simulations with angle-resolved tool characterization to improve agreement between the experiment and calculated libraries, yielding sub-nanometer parametric uncertainties. Optimized angle-resolved illumination requires bi-telecentric optics in which a telecentric sample plane defined by a Köhler illumination configuration and a telecentric conjugate back focal plane (CBFP) of the objective lens; scanning an aperture or an aperture source at the CBFP allows control of the illumination beam angle at the sample plane with minimal distortion. A bi-telecentric illumination optics have been designed enabling angle-resolved illumination for both aperture and source scanning modes while yielding low distortion and chief ray parallelism. The optimized design features a maximum chief ray angle at the CBFP of 0.002° and maximum wavefront deviations of less than 0.06 λ for angle-resolved illumination beams at the sample plane, holding promise for high quality angle-resolved illumination for improved measurements of deep-subwavelength structures using deep-ultraviolet light.
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers when resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms that of the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
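The sample average approximation step can be sketched with a deliberately simplified, newsvendor-style recourse cost standing in for the rule-based second stage; the costs and the Poisson defect model below are illustrative assumptions.

    # Minimal SAA sketch for buffer sizing (illustrative costs and defect distribution).
    import numpy as np

    rng = np.random.default_rng(1)
    holding_cost = 1.0        # cost per spare vehicle kept in the buffer
    shortage_cost = 20.0      # cost per sequence alteration that cannot be repaired
    scenarios = rng.poisson(lam=5.0, size=2000)   # sampled numbers of defective vehicles

    def saa_objective(buffer_size):
        shortages = np.maximum(scenarios - buffer_size, 0)
        return holding_cost * buffer_size + shortage_cost * shortages.mean()

    costs = {b: saa_objective(b) for b in range(0, 21)}
    best = min(costs, key=costs.get)
    print(f"SAA-optimal buffer size: {best} spare vehicles (expected cost {costs[best]:.1f})")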
Noninferiority trial designs for odds ratios and risk differences.
Hilton, Joan F
2010-04-30
This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), the corresponding minimum asymptotic sample size, N=n(E)+n(C), and the optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
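For orientation, a standard Wald-type approximation on the risk-difference scale is sketched below; it is not the constrained maximum likelihood derivation used in the paper, and the rates, margin, and allocation ratio are illustrative.

    # Minimal sketch: per-group sample sizes for a one-sided noninferiority test on the
    # risk-difference scale, using the usual normal approximation (not constrained ML).
    import numpy as np
    from scipy.stats import norm

    def ni_sample_size_rd(p_c, p_e, margin, alpha=0.025, power=0.9, ratio=1.0):
        """Per-group sizes (n_C, n_E) for H0: p_E - p_C <= -margin, allocation ratio = n_E/n_C."""
        z = norm.ppf(1 - alpha) + norm.ppf(power)
        var = p_c * (1 - p_c) + p_e * (1 - p_e) / ratio
        n_c = z ** 2 * var / (p_e - p_c + margin) ** 2
        return int(np.ceil(n_c)), int(np.ceil(ratio * n_c))

    n_c, n_e = ni_sample_size_rd(p_c=0.80, p_e=0.80, margin=0.10)
    print(f"n_C = {n_c}, n_E = {n_e}, total N = {n_c + n_e}")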
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast the future behavior of hydrologic systems in response to natural or human-induced changes in stresses. Commonly, these models are conceptualized and calibrated based on existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at quantifying the expected data worth based on existing models, but it is almost entirely limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. The methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) established to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
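A minimal sketch of linear (first-order) data-worth analysis is given below, assuming a known observation Jacobian, prior parameter covariance, and forecast sensitivity vector; in practice these would come from a calibrated hydrological model rather than random numbers.

    # Minimal sketch: rank candidate observation pairs by the posterior forecast variance.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    n_par, n_obs = 8, 6
    C_p = np.eye(n_par)                     # prior parameter covariance
    X = rng.normal(size=(n_obs, n_par))     # sensitivities of candidate observations
    R = 0.1 * np.eye(n_obs)                 # observation error covariance
    y = rng.normal(size=n_par)              # sensitivity of the forecast of interest

    def forecast_variance(obs_idx):
        """Posterior forecast variance after collecting the selected observations."""
        if not obs_idx:
            return y @ C_p @ y
        idx = list(obs_idx)
        Xi, Ri = X[idx], R[np.ix_(idx, idx)]
        gain = C_p @ Xi.T @ np.linalg.inv(Xi @ C_p @ Xi.T + Ri)
        C_post = C_p - gain @ Xi @ C_p
        return y @ C_post @ y

    # Best pair of new observations = largest reduction of forecast uncertainty.
    best = min(combinations(range(n_obs), 2), key=forecast_variance)
    print("optimal pair of observations:", best,
          "posterior variance:", round(float(forecast_variance(best)), 3),
          "(prior:", round(float(forecast_variance(())), 3), ")")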
RNAblueprint: flexible multiple target nucleic acid sequence design.
Hammer, Stefan; Tschiatschek, Birgit; Flamm, Christoph; Hofacker, Ivo L; Findeiß, Sven
2017-09-15
Realizing the value of synthetic biology in biotechnology and medicine requires the design of molecules with specialized functions. Due to its close structure-to-function relationship and the availability of good structure prediction methods and energy models, RNA is perfectly suited to be synthetically engineered with predefined properties. However, currently available RNA design tools cannot be easily adapted to accommodate new design specifications. Furthermore, complicated sampling and optimization methods are often developed to suit a specific RNA design goal, adding to their inflexibility. We developed a C++ library implementing a graph coloring approach to stochastically sample sequences compatible with structural and sequence constraints from the typically very large solution space. The approach allows the solution space to be specified and explored in a well-defined way. Our library also guarantees uniform sampling, which makes optimization runs efficient by avoiding re-evaluation of already found solutions and by raising the probability of finding better solutions in long optimization runs. We show that our software can be combined with any other software package to allow diverse RNA design applications. Scripting interfaces allow existing code to be easily adapted to new scenarios, making the whole design process very flexible. We implemented example design approaches written in Python to demonstrate these advantages. RNAblueprint, Python implementations and benchmark datasets are available at github: https://github.com/ViennaRNA . s.hammer@univie.ac.at, ivo@tbi.univie.ac.at or sven@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
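The flavor of constraint-compatible, uniform sequence sampling can be conveyed by a toy sampler for a single dot-bracket structure; this ignores sequence constraints and multiple target structures, which is exactly where RNAblueprint's graph coloring approach is needed, and it does not use the RNAblueprint API.

    # Minimal sketch: uniform sampling of sequences compatible with one secondary structure.
    import random

    PAIRS = ["AU", "UA", "GC", "CG", "GU", "UG"]
    BASES = "AUGC"

    def sample_sequence(structure):
        seq = [None] * len(structure)
        stack = []
        for i, ch in enumerate(structure):
            if ch == "(":
                stack.append(i)
            elif ch == ")":
                j = stack.pop()
                seq[j], seq[i] = random.choice(PAIRS)   # uniform over the 6 canonical pairs
            else:
                seq[i] = random.choice(BASES)           # unpaired: uniform over bases
        return "".join(seq)

    print(sample_sequence("((((...))))..(((...)))"))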
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Emmert-Buck, Michael R
2005-01-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of pancreatic malignancy and other biological phenomena. This chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed-over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification. High-quality tissue microdissection does not necessarily mean high-quality samples to analyze. The quality of biomaterials obtained for analysis is highly dependent on steps upstream and downstream from tissue microdissection. We provide protocols for each of these steps, and encourage you to improve upon these. It is worth the effort of every laboratory to optimize and document its technique at each stage of the process, and we provide a starting point for those willing to spend the time to optimize. In our view, poor documentation of tissue and cell type of origin and the use of nonoptimized protocols is a source of inefficiency in current life science research. Even incremental improvement in this area will increase productivity significantly.
Research on Rigid Body Motion Tracing in Space based on NX MCD
NASA Astrophysics Data System (ADS)
Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang
2018-03-01
MCD (Mechatronics Concept Designer) is a module of the SIEMENS industrial design software UG (Unigraphics NX) in which users can define rigid bodies and kinematic joints so that objects move in simulation according to an existing plan. At this stage, users may wish to see, intuitively, the path traced by selected points on a moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits the sampled points with a B-spline curve. In addition, taking the actual constraints on the rigid bodies into account, the traditional equal-interval sampling strategy is optimized. The results show that this method satisfies the requirement and makes up for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
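The curve-fitting step can be sketched with SciPy's B-spline routines, assuming the pose samples have already been reduced to the 3-D positions of one tracked point on a synthetic helical trace; the smoothing factor and sampling below are illustrative.

    # Minimal sketch: fit a cubic B-spline through sampled 3-D positions.
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Stand-in for positions extracted from the solver's 4x4 transformation matrices.
    t = np.linspace(0, 4 * np.pi, 60)
    x, y, z = np.cos(t), np.sin(t), 0.1 * t          # a helical trace

    # Fit a cubic B-spline through the sampled points, then evaluate a dense smooth curve.
    tck, u = splprep([x, y, z], s=1e-4, k=3)
    u_fine = np.linspace(0, 1, 500)
    xf, yf, zf = splev(u_fine, tck)
    print("fitted curve with", len(u_fine), "evaluation points")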
Design of a Sample Recovery Assembly for Magnetic Ramp-Wave Loading
NASA Astrophysics Data System (ADS)
Chantrenne, S.; Wise, J. L.; Asay, J. R.; Kipp, M. E.; Hall, C. A.
2009-06-01
Characterization of material behavior under dynamic loading requires studies at strain rates ranging from quasi-static to the limiting values of shock compression. For completeness, these studies involve complementary time-resolved data, which define the mechanical constitutive properties, and microstructural data, which reveal physical mechanisms underlying the observed mechanical response. Well-preserved specimens must be recovered for microstructural investigations. Magnetically generated ramp waves produce strain rates lower than those associated with shock waves, but recovery methods have been lacking for this type of loading. We adapted existing shock recovery techniques for application to magnetic ramp loading using 2-D and 3-D ALEGRA MHD code calculations to optimize the recovery design for mitigation of undesired late-time processing of the sample due to edge effects and secondary stress waves. To assess the validity of our simulations, measurements of sample deformation were compared to wavecode predictions.
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure of development cost), which is naturally captured in an efficiency score defined in this manuscript. Optimization of the score function leads to the type I error rate and power (and therefore sample size) that make the trial most cost-effective. This in turn leads to cost-effective go/no-go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide general conclusions, because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, readers should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
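A minimal sketch of choosing the type I error rate and power by maximizing a cost-effectiveness score follows; the score used below (true positives minus penalized false positives, per patient exposed) is an illustrative assumption and is not the efficiency score defined in the manuscript, and the sample size uses a standard two-sample normal approximation.

    # Minimal sketch: grid search over (alpha, power) for an illustrative efficiency score.
    import numpy as np
    from scipy.stats import norm

    p_active = 0.3        # perceived probability that the treatment is truly active
    fp_penalty = 2.0      # relative cost of advancing an inactive treatment
    effect = 0.5          # standardized treatment effect under the alternative

    def per_arm_n(alpha, power):
        return 2 * (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2 / effect ** 2

    best = None
    for alpha in np.arange(0.025, 0.31, 0.025):
        for power in np.arange(0.60, 0.96, 0.05):
            n_total = 2 * per_arm_n(alpha, power)
            score = (p_active * power - fp_penalty * (1 - p_active) * alpha) / n_total
            if best is None or score > best[0]:
                best = (score, alpha, power, n_total)

    score, alpha, power, n_total = best
    print(f"alpha={alpha:.3f} power={power:.2f} total N~{n_total:.0f} score={score:.4f}")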
Detecting glaucomatous change in visual fields: Analysis with an optimization framework.
Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2015-12-01
Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.
An introduction to the COLIN optimization interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William Eugene
2003-03-01
We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
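A minimal particle swarm sketch over nine normalized setpoints is shown below; the objective is a synthetic stand-in for a measured beam figure of merit, not the NSCL beamline or its control system.

    # Minimal particle swarm sketch on a black-box tune-quality objective (nine "quadrupoles").
    import numpy as np

    rng = np.random.default_rng(3)
    dim, n_particles, n_iter = 9, 20, 25
    lo, hi = -1.0, 1.0                      # normalized quadrupole setpoint range

    def objective(q):                       # stand-in for, e.g., measured beam transmission loss
        return np.sum((q - 0.3) ** 2)

    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([objective(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(xi) for xi in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)]

    print("best setpoints:", np.round(g_best, 3), "objective:", round(float(objective(g_best)), 4))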
Clinical implementation of stereotaxic brain implant optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenow, U.F.; Wojcicka, J.B.
1991-03-01
This optimization method for stereotaxic brain implants is based on seed/strand configurations of the basic type developed for the National Cancer Institute (NCI) atlas of regular brain implants. Irregular target volume shapes are determined from delineation in a stack of contrast enhanced computed tomography scans. The neurosurgeon may then select up to ten directions, or entry points, of surgical approach of which the program finds the optimal one under the criterion of smallest target volume diameter. Target volume cross sections are then reconstructed in 5-mm-spaced planes perpendicular to the implantation direction defined by the entry point and the target volume center. This information is used to define a closed line in an implant cross section along which peripheral seed strands are positioned and which has now an irregular shape. Optimization points are defined opposite peripheral seeds on the target volume surface to which the treatment dose rate is prescribed. Three different optimization algorithms are available: linear least-squares programming, quadratic programming with constraints, and a simplex method. The optimization routine is implemented into a commercial treatment planning system. It generates coordinate and source strength information of the optimized seed configurations for further dose rate distribution calculation with the treatment planning system, and also the coordinate settings for the stereotaxic Brown-Roberts-Wells (BRW) implantation device.
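The linear least-squares variant can be sketched with a non-negative least-squares solve, assuming a simple 1/r^2 point-source dose kernel and synthetic seed and optimization-point coordinates; the clinical system uses proper brachytherapy dose kernels and offers the three solver options listed above.

    # Minimal sketch: non-negative least squares for seed strengths matching a prescribed dose rate.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    seeds = rng.uniform(-1.5, 1.5, size=(30, 3))     # candidate peripheral seed positions (cm)
    points = rng.uniform(-2.0, 2.0, size=(60, 3))    # optimization points on the target surface
    prescribed = np.full(len(points), 50.0)          # prescribed dose rate at each point (cGy/h)

    # Dose-rate matrix: contribution of unit-strength seed j at optimization point i.
    r2 = np.sum((points[:, None, :] - seeds[None, :, :]) ** 2, axis=2)
    A = 1.0 / np.maximum(r2, 0.05)                   # clip to avoid singularities near a seed

    strengths, residual = nnls(A, prescribed)        # non-negative seed strengths
    print(f"{np.count_nonzero(strengths)} active seeds, residual = {residual:.1f}")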
NASA Astrophysics Data System (ADS)
Khmara, I.; Koneracka, M.; Kubovcikova, M.; Zavisova, V.; Antal, I.; Csach, K.; Kopcansky, P.; Vidlickova, I.; Csaderova, L.; Pastorekova, S.; Zatovicova, M.
2017-04-01
This study was aimed at development of biocompatible amino-functionalized magnetic nanoparticles as carriers of specific antibodies able to detect and/or target cancer cells. Poly-L-lysine (PLL)-modified magnetic nanoparticle samples with different PLL/Fe3O4 content were prepared and tested to define the optimal PLL/Fe3O4 weight ratio. The samples were characterized for particle size and morphology (SEM, TEM and DLS), and surface properties (zeta potential measurements). The optimal PLL/Fe3O4 weight ratio of 1.0 based on both zeta potential and DLS measurements was in agreement with the UV/VIS measurements. Magnetic nanoparticles with the optimal PLL content were conjugated with antibody specific for the cancer biomarker carbonic anhydrase IX (CA IX), which is induced by hypoxia, a physiologic stress present in solid tumors and linked with aggressive tumor behavior. CA IX is localized on the cell surface with the antibody-binding epitope facing the extracellular space and is therefore suitable for antibody-based targeting of tumor cells. Here we showed that PLL/Fe3O4 magnetic nanoparticles exhibit cytotoxic activities in a cell type-dependent manner and bind to cells expressing CA IX when conjugated with the CA IX-specific antibody. These data support further investigations of the CA IX antibody-conjugated, magnetic field-guided/activated nanoparticles as tools in anticancer strategies.
Optimizing image registration and infarct definition in stroke research.
Harston, George W J; Minks, David; Sheerin, Fintan; Payne, Stephen J; Chappell, Michael; Jezzard, Peter; Jenkinson, Mark; Kennedy, James
2017-03-01
Accurate representation of final infarct volume is essential for assessing the efficacy of stroke interventions in imaging-based studies. This study defines the impact of image registration methods used at different timepoints following stroke, and the implications for infarct definition in stroke research. Patients presenting with acute ischemic stroke were imaged serially using magnetic resonance imaging. Infarct volume was defined manually using four metrics: 24-h b1000 imaging; 1-week and 1-month T2-weighted FLAIR; and automatically using predefined thresholds of ADC at 24 h. Infarct overlap statistics and volumes were compared across timepoints following both rigid body and nonlinear image registration to the presenting MRI. The effect of nonlinear registration on a hypothetical trial sample size was calculated. Thirty-seven patients were included. Nonlinear registration improved infarct overlap statistics and consistency of total infarct volumes across timepoints, and reduced infarct volumes by 4.0 mL (13.1%) and 7.1 mL (18.2%) at 24 h and 1 week, respectively, compared to rigid body registration. Infarct volume at 24 h, defined using a predetermined ADC threshold, was less sensitive to infarction than b1000 imaging. 1-week T2-weighted FLAIR imaging was the most accurate representation of final infarct volume. Nonlinear registration reduced hypothetical trial sample size, independent of infarct volume, by an average of 13%. Nonlinear image registration may offer the opportunity of improving the accuracy of infarct definition in serial imaging studies compared to rigid body registration, helping to overcome the challenges of anatomical distortions at subacute timepoints, and reducing sample size for imaging-based clinical trials.
Salzer, Simone; Stiller, Christian; Tacke-Pook, Achim; Jacobi, Claus; Leibing, Eric
2009-01-01
Objective: Pathological worry is considered to be a defining feature for Generalized Anxiety Disorder (GAD). The Penn State Worry Questionnaire (PSWQ) is an instrument for assessing pathological worry. Two earlier studies demonstrated the suitability of the PSWQ as screening instrument for GAD in outpatient and non-clinical samples. This study examined the suitability of the PSWQ as a screening instrument for GAD in a German inpatient sample (N=237). Furthermore, a comparison of patients with GAD and patients with depression and other anxiety disorders regarding pathological worry and depression was carried out in a sub-sample of N=118 patients. Method: Cut-off scores optimizing sensitivity, optimizing specificity and simultaneously optimizing both sensitivity and specificity were calculated for the PSWQ score by receiver operating characteristic analysis (ROC). Differences regarding pathological worry and depression measured by the PSWQ and the Beck Depression Inventory (BDI) across five diagnostic subgroups were examined by conducting one-way ANOVAs. The influence of depression on pathological worry was controlled by conducting an ANCOVA with BDI score as a covariate. Results: The ROC analysis showed an area under the curve of AUC=.67 (p=0.02) with only 54.4% of the patients correctly classified. Comparison of diagnostic subgroups showed that after controlling the influence of depression, differences referring to pathological worry between diagnostic subgroups no longer existed. Conclusions: Contrary to the earlier results we found that the use of the PSWQ as a screening instrument for GAD at least in a sample of psychotherapy inpatients is not meaningful. Instead of that, the PSWQ can be used to discriminate high from low worriers in clinical samples. Thus, the instrument can be useful in establishing e.g. symptom-oriented group interventions as they are established in behavioural-medicine inpatient settings. Furthermore, our findings stress the influence of (comorbid) depressive symptoms on the process of worrying. PMID:19742048
Conditional optimal spacing in exponential distribution.
Park, Sangun
2006-12-01
In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after a predetermined order statistic has been specified. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method for finding the conditional optimal spacing.
Blümel, Juan E; Legorreta, Deborah; Chedraui, Peter; Ayala, Felix; Bencosme, Ascanio; Danckers, Luis; Lange, Diego; Espinoza, Maria T; Gomez, Gustavo; Grandia, Elena; Izaguirre, Humberto; Manriquez, Valentin; Martino, Mabel; Navarro, Daysi; Ojeda, Eliana; Onatra, William; Pozzo, Estela; Prada, Mariela; Royer, Monique; Saavedra, Javier M; Sayegh, Fabiana; Tserotas, Konstantinos; Vallejo, Maria S; Zuñiga, Cristina
2012-04-01
The aim of this study was to determine an optimal waist circumference (WC) cutoff value for defining the metabolic syndrome (METS) in postmenopausal Latin American women. A total of 3,965 postmenopausal women (age, 45-64 y), with self-reported good health, attending routine consultation at 12 gynecological centers in major Latin American cities were included in this cross-sectional study. Modified guidelines of the US National Cholesterol Education Program, Adult Treatment Panel III were used to assess METS risk factors. Receiver operator characteristic curve analysis was used to obtain an optimal WC cutoff value best predicting at least two other METS components. Optimal cutoff values were calculated by plotting the true-positive rate (sensitivity) against the false-positive rate (1 - specificity). In addition, total accuracy, distance to receiver operator characteristic curve, and the Youden Index were calculated. Of the participants, 51.6% (n = 2,047) were identified as having two or more nonadipose METS risk components (excluding a positive WC component). These women were older, had more years since menopause onset, used hormone therapy less frequently, and had higher body mass indices than women with fewer metabolic risk factors. The optimal WC cutoff value best predicting at least two other METS components was determined to be 88 cm, equal to that defined by the Adult Treatment Panel III. A WC cutoff value of 88 cm is optimal for defining METS in this postmenopausal Latin American series.
NASA Astrophysics Data System (ADS)
Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.
2011-08-01
Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications to the laws of gravitation, by accurately measuring the expansion of the Universe and the growth of structures. The return from future dark energy surveys needs to be optimized to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy therefore has to be defined with respect to both the photometric redshift and the shape measurement accuracy. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area and accounting in particular for Galactic absorption and the variation of the Zodiacal light across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of the u-band photometry quality for optimizing the photometric redshifts and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what could be an optimal weak-lensing dark energy mission based on the FoM calculation. We find that the most accurate photometric redshifts for the bulk of the faint galaxy population are obtained when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near infrared). Assuming a five-year mission duration, a mirror size of 1.5 m and a 0.5 deg2 FOV with a visible pixel scale of 0.15'', we find that a homogeneous survey reaching IAB = 25.6 (10σ) with a sky coverage of ~11 000 deg2 maximizes the weak-lensing FoM. The effective number density of galaxies used for WL is then ~45 gal/arcmin2, at least a factor of two higher than for ground-based surveys. Conclusions: This study demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future weak-lensing space dark energy mission.
NASA Astrophysics Data System (ADS)
Miriello, D.; Bloise, A.; De Luca, R.; Apollaro, C.; Crisci, G. M.; Medaglia, S.; Taliano Grasso, A.
2015-06-01
Dressel 2-4 amphorae are a type of pottery used to transport wine and produced in the Mediterranean area between the first century BC and the second century AD. This study shows, for the first time, that they were also produced in Ionian Calabria. These results were achieved by studying 11 samples of archaeological pottery (five samples of Dressel 2-4 and six samples of other ceramic types) taken from Cariati (Calabria, Southern Italy). The composition of the pottery was compared with that of the local raw materials (clays and sands) potentially usable for its production. Samples were studied by polarized optical microscopy and analysed by XRF, XRPD and Raman spectroscopy. An innovative approach, based on applying the Microsoft Excel "Solver" optimization add-in to the geochemical data, was used to define the provenance of the archaeological pottery and to calculate the mixtures of local clay and sand needed for its production.
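The mixture-calculation idea can be sketched as a constrained least-squares problem, assuming made-up two-component oxide compositions; the authors used the Excel "Solver" add-in on their measured geochemical data.

    # Minimal sketch: estimate clay/sand mixing fractions from oxide compositions.
    import numpy as np
    from scipy.optimize import lsq_linear

    # Columns: candidate local raw materials (clay, sand); rows: SiO2, Al2O3, CaO, Fe2O3 (wt%).
    raw = np.array([[55.0, 80.0],
                    [20.0,  8.0],
                    [ 8.0,  3.0],
                    [ 7.0,  2.0]])
    sherd = np.array([62.0, 16.5, 6.5, 5.5])         # composition of the pottery sample

    res = lsq_linear(raw, sherd, bounds=(0.0, 1.0))  # non-negative, bounded mixing fractions
    fractions = res.x / res.x.sum()                  # renormalize to a 100% mixture
    print(f"estimated mixture: {fractions[0]:.0%} clay, {fractions[1]:.0%} sand")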
The ARIEL mission reference sample
NASA Astrophysics Data System (ADS)
Zingales, Tiziano; Tinetti, Giovanna; Pillitteri, Ignazio; Leconte, Jérémy; Micela, Giuseppina; Sarkar, Subhajit
2018-02-01
The ARIEL (Atmospheric Remote-sensing Exoplanet Large-survey) mission concept is one of the three M4 mission candidates selected by the European Space Agency (ESA) for a Phase A study, competing for a launch in 2026. ARIEL has been designed to study the physical and chemical properties of a large and diverse sample of exoplanets and, through those, understand how planets form and evolve in our galaxy. Here we describe the assumptions made to estimate an optimal sample of exoplanets - including already known exoplanets and expected ones yet to be discovered - observable by ARIEL and define a realistic mission scenario. To achieve the mission objectives, the sample should include gaseous and rocky planets with a range of temperatures around stars of different spectral type and metallicity. The current ARIEL design enables the observation of ˜1000 planets, covering a broad range of planetary and stellar parameters, during its four year mission lifetime. This nominal list of planets is expected to evolve over the years depending on the new exoplanet discoveries.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally representative surveys are the main source of data for establishing a scientific evidence base and for the monitoring and evaluation of health metrics. However, little is known about the precision of population-level health and development indicators in nationally representative household surveys, which remains largely unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled for nine sub-Saharan African countries with at least two nationally representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator, with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible interval 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
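For orientation, the classical design-effect calculation below shows how required sample sizes grow as prevalence declines, assuming illustrative intra-class correlation, cluster size, and precision targets; the paper estimates ICCs and sample sizes within a Bayesian hierarchical model rather than with this closed-form rule.

    # Minimal sketch: cluster-survey sample size with a design effect.
    import math

    def cluster_sample_size(prevalence, rel_precision, icc, cluster_size, z=1.96):
        """Children required for a +/- rel_precision*prevalence margin under cluster sampling."""
        margin = rel_precision * prevalence
        n_srs = z ** 2 * prevalence * (1 - prevalence) / margin ** 2   # simple random sampling
        deff = 1 + (cluster_size - 1) * icc                            # design effect
        return math.ceil(n_srs * deff)

    for prev in (0.30, 0.10, 0.05):
        n = cluster_sample_size(prev, rel_precision=0.20, icc=0.05, cluster_size=20)
        print(f"parasite prevalence {prev:.0%}: ~{n} children needed")
    # Required sample sizes grow as prevalence declines, as reported in the abstract.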
Huffman, Mark D; Prabhakaran, Dorairaj; Abraham, AK; Krishnan, Mangalath Narayanan; Nambiar, C. Asokan; Mohanan, Padinhare Purayil
2013-01-01
Background In-hospital and post-discharge treatment rates for acute coronary syndrome (ACS) remain low in India. However, little is known about the prevalence and predictors of the package of optimal ACS medical care in India. Our objective was to define the prevalence, predictors, and impact of optimal in-hospital and discharge medical therapy in the Kerala ACS Registry of 25,718 admissions. Methods and Results We defined optimal in-hospital ACS medical therapy as receiving the following five medications: aspirin, clopidogrel, heparin, beta-blocker, and statin. We defined optimal discharge ACS medical therapy as receiving all of the above therapies except heparin. Comparisons by optimal vs. non-optimal ACS care were made via Student’s t test for continuous variables and chi-square test for categorical variables. We created random effects logistic regression models to evaluate the association between GRACE risk score variables and optimal in-hospital or discharge medical therapy. Optimal in-hospital and discharge medical care was delivered in 40% and 46% of admissions, respectively. Wide variability in both in-hospital and discharge medical care was present with few hospitals reaching consistently high (>90%) levels. Patients receiving optimal in-hospital medical therapy had an adjusted OR (95%CI)=0.93 (0.71, 1.22) for in-hospital death and an adjusted OR (95%CI)=0.79 (0.63, 0.99) for MACE. Patients who received optimal in-hospital medical care were far more likely to receive optimal discharge care (adjusted OR [95%CI]=10.48 [9.37, 11.72]). Conclusions Strategies to improve in-hospital and discharge medical therapy are needed to improve local process-of-care measures and improve ACS outcomes in Kerala. PMID:23800985
Preentry communications study. Outer planets atmospheric entry probe
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1976-01-01
A pre-entry communications study is presented for a relay link between a Jupiter entry probe and a spacecraft in hyperbolic orbit. Two generic communications links of interest are described: a pre-entry link to a spun spacecraft antenna, and a pre-entry link to a despun spacecraft antenna. The propagation environment of Jupiter is defined. Although this is one of the least well known features of Jupiter, enough information exists to reasonably establish bounds on the performance of a communications link. Within these bounds, optimal carrier frequencies are defined. The next step is to identify optimal relative geometries between the probe and the spacecraft. Optimal trajectories are established for both spun and despun spacecraft antennas. Given the optimal carrier frequencies, and the optimal trajectories, the data carrying capacities of the pre-entry links are defined. The impact of incorporating pre-entry communications into a basic post entry probe is then assessed. This assessment covers the disciplines of thermal control, power source, mass properties and design layout. A conceptual design is developed of an electronically despun antenna for use on a Pioneer class of spacecraft.
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology focuses on improving the timing of cooling processes in a die to achieve improved casting quality. It utilizes (1) a casting process model developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to improvements in process productivity and product quality.
Schweitzer, Mary Higby; Schroeter, Elena R; Goshe, Michael B
2014-07-15
Advances in resolution and sensitivity of analytical techniques have provided novel applications, including the analyses of fossil material. However, the recovery of original proteinaceous components from very old fossil samples (defined as >1 million years (1 Ma) from previously named limits in the literature) is far from trivial. Here, we discuss the challenges to recovery of proteinaceous components from fossils, and the need for new sample preparation techniques, analytical methods, and bioinformatics to optimize and fully utilize the great potential of information locked in the fossil record. We present evidence for survival of original components across geological time, and discuss the potential benefits of recovery, analyses, and interpretation of fossil materials older than 1 Ma, both within and outside of the fields of evolutionary biology.
Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela
2015-05-17
The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective. These goals imply the optimization of planning and scheduling of the activities involved, which is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used to represent the "OR process" (defined as the sequence of all elementary steps between "patient ready for surgery" and "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements needed to address or define the other steps of planning, as well as the inherent, wide variability related to patient specificity. The path was used to schedule OR activity, room by room and day by day, feeding the process from a waiting-list database and using a mathematical optimization model with the objective of producing an optimized plan. The OR process was defined with special attention paid to flows, timing and resource involvement. Standardization involved dynamically defining an expected operating time for each operation. The optimization model was implemented and tested on real clinical data. Comparison of the results with the real data shows that the optimization model allows about 30% more patients to be scheduled than in actual practice, and better exploits OR efficiency, increasing the average operating room utilization rate by up to 20%. The optimization of OR activity planning is essential for managing a hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway in which all variables are taken into account. By allowing precise scheduling, it feeds the planning process and, further upstream, the management of the waiting list in an interactive and bi-directional dynamic process.
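A deliberately simplified scheduling sketch is given below, using a greedy longest-processing-time heuristic on expected operating times and fixed session capacities; the paper's planner solves a proper mathematical optimization model fed by the BPMN-standardized OR process and the waiting-list database.

    # Minimal sketch: greedy assignment of waiting-list cases to OR sessions (illustrative data).
    waiting_list = [("P01", 180), ("P02", 95), ("P03", 240), ("P04", 60),
                    ("P05", 120), ("P06", 75), ("P07", 150), ("P08", 45)]   # (patient, minutes)
    sessions = {f"OR{i}-Mon": 480 for i in range(1, 3)}                     # two 8-hour sessions

    schedule = {s: [] for s in sessions}
    remaining = dict(sessions)
    for patient, minutes in sorted(waiting_list, key=lambda p: -p[1]):
        # place each case in the session with the most remaining time, if it still fits
        best = max(remaining, key=remaining.get)
        if remaining[best] >= minutes:
            schedule[best].append(patient)
            remaining[best] -= minutes

    for session, cases in schedule.items():
        used = sessions[session] - remaining[session]
        print(f"{session}: {cases}  utilization {used / sessions[session]:.0%}")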
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
... an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...
NASA Astrophysics Data System (ADS)
Khusainov, R.; Klimchik, A.; Magid, E.
2017-01-01
The paper presents a comparative analysis of two approaches to defining leg trajectories for biped locomotion. The first operates only with the kinematic limitations of the leg joints and finds the maximum possible locomotion speed for the given limits. The second defines leg trajectories from the dynamic stability point of view and utilizes the ZMP criterion. We show that the two methods give different trajectories and demonstrate that trajectories based on purely dynamic optimization cannot be realized due to the joint limits, while kinematic optimization provides an unstable solution that can be balanced by upper-body movement.
NASA Astrophysics Data System (ADS)
junfeng, Li; zhengying, Wei
2017-11-01
Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, namely laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of these parameters on the relative density. The results show that the relative density changes with VED and that the optimal process window is 55-60 J/mm3. Furthermore, comparing laser power, scan speed and hatch distance using the Taguchi method, it was found that the scan speed had the greatest effect on the relative density. Comparing the microstructures of specimen cross-sections produced at different scanning speeds, it was found that they share similar characteristics, all consisting of needle-like martensite distributed in the β matrix; however, the microstructure becomes finer as the scanning speed increases, while lower scan speeds lead to coarsening of the microstructure.
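The volume energy density referred to above is commonly computed as VED = P / (v × h × t); the short sketch below assumes this definition and an illustrative 30 µm layer thickness, which is not stated in the abstract.

    # Minimal sketch of the volume energy density (assumed definition and layer thickness).
    def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
        return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

    ved = volume_energy_density(power_w=200, scan_speed_mm_s=1000, hatch_mm=0.12, layer_mm=0.03)
    print(f"VED = {ved:.1f} J/mm^3")   # ~55.6 J/mm^3, inside the reported 55-60 J/mm^3 optimum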
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
Tank, Marcus; Bryant, Donald A.
2015-03-27
A novel thermophilic, microaerophilic, anoxygenic, and chlorophototrophic member of the phylum Acidobacteria, Chloracidobacterium thermophilum strain B T, was isolated from a cyanobacterial enrichment culture derived from microbial mats associated with Octopus Spring, Yellowstone National Park, Wyoming. C. thermophilum is strictly dependent on light and oxygen and grows optimally as a photoheterotroph at irradiance values between 20 and 50 µmol photons m⁻² s⁻¹. C. thermophilum is unable to synthesize branched-chain amino acids (AAs), L-lysine, and vitamin B₁₂, which are required for growth. Although the organism lacks genes for autotrophic carbon fixation, bicarbonate is also required. Mixtures of other AAs and 2-oxoglutarate stimulate growth. As suggested from genomic sequence data, C. thermophilum requires a reduced sulfur source such as thioglycolate, cysteine, methionine, or thiosulfate. The organism can be grown in a defined medium at 51 °C (T opt; range 44–58 °C) in the pH range 5.5–9.5 (pH opt = ~7.0). Using the defined growth medium and optimal conditions, it was possible to isolate new C. thermophilum strains directly from samples of hot springs mats in Yellowstone National Park, Wyoming. The new isolates differ from the type strain with respect to pigment composition, morphology in liquid culture, and temperature adaptation.
Fleets of enduring drones to probe atmospheric phenomena with clouds
NASA Astrophysics Data System (ADS)
Lacroix, Simon; Roberts, Greg; Benard, Emmanuel; Bronz, Murat; Burnet, Frédéric; Bouhoubeiny, Elkhedim; Condomines, Jean-Philippe; Doll, Carsten; Hattenberger, Gautier; Lamraoui, Fayçal; Renzaglia, Alessandro; Reymann, Christophe
2016-04-01
A full spatio-temporal, four-dimensional characterization of the microphysics and dynamics of cloud formation, including the onset of precipitation, has never been achieved. Such a characterization would yield a better understanding of clouds, e.g. to assess the dominant mixing mechanism and the main source of cloudy updraft dilution. It is the sampling strategy that matters: fully characterizing the evolution over time of the various parameters (P, T, 3D wind, liquid water content, aerosols...) within a cloud volume requires dense spatial sampling for durations of the order of one hour. A fleet of autonomous lightweight UAVs that coordinate themselves in real time as an intelligent network can fulfill this purpose. The SkyScanner project targets the development of a fleet of autonomous UAVs to adaptively sample cumuli, so as to provide relevant data to address long-standing questions in atmospheric science. It mixes basic research and experimental development, and gathers scientists in UAV design, optimal flight control, and intelligent cooperative behaviors, together with atmospheric scientists. Two research directions are explored: optimal UAV design and control, and optimal control of a fleet of UAVs. The design of UAVs for atmospheric science involves trade-offs between payload, endurance, ease of deployment... A rational design scheme that integrates these constraints to optimize a series of criteria, in particular energy consumption, would yield the definition of efficient UAVs. This requires fine modeling of each sub-system and phenomenon involved, from motor/propeller efficiency to aerodynamics at small scale, including the flight control algorithms. The definition of mission profiles is also essential, considering the aerodynamics of clouds, to allow energy-harvesting schemes that exploit thermals or gusts. The design also integrates specific sensors, in particular a wind sensor, for which classic technologies are challenged at the low speeds of lightweight UAVs. The overall control of the fleet, so as to gather series of synchronized data in the cloud volume, is a poorly informed and highly constrained adaptive sampling problem, in which the UAV motions must be defined to maximize the amount of gathered information and the mission duration. The overall approach casts the problem into a hierarchy of two modeling and decision stages. A macroscopic parametrized model of the cloud is built from the gathered data and exploited at the higher level by an operator, who sets information-gathering goals. A subset of the UAV fleet is allocated to each goal, considering the current fleet state. These high-level goals are handled by the lower level, which autonomously optimizes the trajectories of the selected UAVs using an on-line updated dense model of the variables of interest. Building the models involves Gaussian process techniques (kriging) to fuse the gathered data with a generic cumulus conceptual model, the latter being defined from thorough statistics on realistic MesoNH cloud simulations. The model is exploited by a planner to generate trajectories that minimize the uncertainty in the map, while steering the vehicles within the air flows to save energy.
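As a rough illustration of the kriging step described above, the sketch below fuses scattered in-cloud measurements into a posterior mean and variance map with a Gaussian-process (simple-kriging) update. The squared-exponential covariance, its length scale, the noise level, and the synthetic data are illustrative assumptions, not values from the SkyScanner project.

```python
import numpy as np

def sq_exp_cov(xa, xb, length_scale=150.0, sigma2=1.0):
    """Squared-exponential covariance between two sets of 3-D points (metres)."""
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-0.5 * d2 / length_scale**2)

def simple_kriging(x_obs, y_obs, x_grid, prior_mean=0.0, noise_sd=0.05):
    """Posterior mean and variance of the sampled field on a prediction grid."""
    K = sq_exp_cov(x_obs, x_obs) + noise_sd**2 * np.eye(len(x_obs))
    Ks = sq_exp_cov(x_obs, x_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs - prior_mean))
    mean = prior_mean + Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = sq_exp_cov(x_grid, x_grid).diagonal() - (v**2).sum(axis=0)
    return mean, var

# Hypothetical usage: liquid-water-content samples at UAV positions.
rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 1000, size=(40, 3))        # sampled positions [m]
y_obs = np.sin(x_obs[:, 2] / 200.0) + 0.05 * rng.standard_normal(40)
x_grid = rng.uniform(0, 1000, size=(200, 3))      # prediction points
mu, var = simple_kriging(x_obs, y_obs, x_grid)
```

The posterior variance is what a planner would use to steer the vehicles toward the least-informed parts of the cloud volume.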
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops, and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the optimal DSS created using the FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
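The fragment below is only a generic illustration of how a fuzzy rule-based release rule of the kind described above can be evaluated. The membership functions, the rule consequents, and the variable names are invented for the sketch and do not reproduce the Jucar FRB systems.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def seasonal_release(storage_frac, inflow_frac):
    """Zero-order Sugeno rule base: fuzzy antecedents -> crisp seasonal release."""
    s = {"low": tri(storage_frac, 0.0, 0.25, 0.5),
         "med": tri(storage_frac, 0.25, 0.5, 0.75),
         "high": tri(storage_frac, 0.5, 0.75, 1.0)}
    q = {"low": tri(inflow_frac, 0.0, 0.25, 0.5),
         "high": tri(inflow_frac, 0.5, 0.75, 1.0)}
    # Rule firing strengths (min t-norm) and hypothetical consequents in Mm3/season.
    rules = [(min(s["low"], q["low"]), 150.0),
             (min(s["med"], q["low"]), 250.0),
             (min(s["high"], q["low"]), 300.0),
             (min(s["med"], q["high"]), 350.0),
             (min(s["high"], q["high"]), 450.0)]
    w = sum(r[0] for r in rules)
    return sum(r[0] * r[1] for r in rules) / w if w > 0 else 0.0

print(seasonal_release(storage_frac=0.6, inflow_frac=0.3))
```

In the methodology described above, the consequents of such rules are what the SDDP optimization would tune, while the rule structure itself comes from the expert elicitation.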
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the north-eastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems, and at abstracting from data sets the information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of the various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of Type I error (alpha), the probability of Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percentage of the mean is found; and finally, the policy of maintaining pre-storm flood pools at selected levels is examined, given information from the optimal sampling network as defined by the study.
Sabbatini, Amber K; Merck, Lisa H; Froemming, Adam T; Vaughan, William; Brown, Michael D; Hess, Erik P; Applegate, Kimberly E; Comfere, Nneka I
2015-12-01
Patient-centered emergency diagnostic imaging relies on efficient communication and multispecialty care coordination to ensure optimal imaging utilization. The construct of the emergency diagnostic imaging care coordination cycle with three main phases (pretest, test, and posttest) provides a useful framework to evaluate care coordination in patient-centered emergency diagnostic imaging. This article summarizes findings reached during the patient-centered outcomes session of the 2015 Academic Emergency Medicine consensus conference "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The primary objective was to develop a research agenda focused on 1) defining component parts of the emergency diagnostic imaging care coordination process, 2) identifying gaps in communication that affect emergency diagnostic imaging, and 3) defining optimal methods of communication and multidisciplinary care coordination that ensure patient-centered emergency diagnostic imaging. Prioritized research questions provided the framework to define a research agenda for multidisciplinary care coordination in emergency diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
Soverini, Simona; De Benedittis, Caterina; Castagnetti, Fausto; Gugliotta, Gabriele; Mancini, Manuela; Bavaro, Luana; Machova Polakova, Katerina; Linhartova, Jana; Iurlo, Alessandra; Russo, Domenico; Pane, Fabrizio; Saglio, Giuseppe; Rosti, Gianantonio; Cavo, Michele; Baccarani, Michele; Martinelli, Giovanni
2016-08-02
Imatinib-resistant chronic myeloid leukemia (CML) patients receiving second-line tyrosine kinase inhibitor (TKI) therapy with dasatinib or nilotinib have a higher risk of disease relapse and progression, and not infrequently BCR-ABL1 kinase domain (KD) mutations are implicated in therapeutic failure. In this setting, earlier detection of emerging BCR-ABL1 KD mutations would offer greater chances of efficacy for subsequent salvage therapy and limit the biological consequences of full BCR-ABL1 kinase reactivation. Taking advantage of an already set up and validated next-generation deep amplicon sequencing (DS) assay, we aimed to assess whether DS may allow a larger window of detection of emerging BCR-ABL1 KD mutants predicting for an impending relapse. A total of 125 longitudinal samples from 51 CML patients who had acquired dasatinib- or nilotinib-resistant mutations during second-line therapy were analyzed by DS from the time of failure and mutation detection by conventional sequencing backwards. BCR-ABL1/ABL1%(IS) transcript levels were used to define whether the patient had 'optimal response', 'warning' or 'failure' at the time of first mutation detection by DS. DS was able to backtrack dasatinib- or nilotinib-resistant mutations to the previous sample(s) in 23/51 (45%) patients. The median mutation burden at the time of first detection by DS was 5.5% (range, 1.5-17.5%); the median interval between detection by DS and detection by conventional sequencing was 3 months (range, 1-9 months). In 5 cases, the mutations were detectable at baseline. In the remaining cases, the response level at the time mutations were first detected by DS could be defined as 'Warning' (according to the 2013 ELN definitions of response to 2nd-line therapy) in 13 cases, as 'Optimal response' in one case, and as 'Failure' in 4 cases. No dasatinib- or nilotinib-resistant mutations were detected by DS in 15 randomly selected patients with 'warning' at various timepoints, who later turned into optimal responders with no treatment changes. DS enables a larger window of detection of emerging BCR-ABL1 KD mutations predicting for an impending relapse. A 'Warning' response may represent a rational trigger, besides 'Failure', for DS-based mutation screening in CML patients undergoing second-line TKI therapy.
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior, for cases where the rotorcraft plant can be adequately represented by a linear model system matrix, were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint-penalty methods, such as those employed by conventional controllers, in that the constraints are handled as actual constraints in an optimization problem rather than as additional terms in the performance index. The first method is to use a nonlinear programming algorithm to solve the problem directly. The second method is to solve the full set of nonlinear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, easiest to use, and superior to the conventional controllers considered.
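The distinction drawn above between handling limits as true constraints and folding them into the performance index as penalty terms can be shown with a generic quadratic example. The toy vibration metric, the single linear constraint, and the solver choices below are illustrative assumptions, not the rotorcraft model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.diag([4.0, 1.0])                  # toy weighting of the vibration metric
u0 = np.array([2.0, 2.0])                # initial control vector

def vibration_metric(u):
    return u @ Q @ u - 4.0 * u[0]        # toy quadratic performance index

# (a) Constraint handled as an actual constraint: u[0] + u[1] >= 1.
cons = [{"type": "ineq", "fun": lambda u: u[0] + u[1] - 1.0}]
res_constrained = minimize(vibration_metric, u0, method="SLSQP", constraints=cons)

# (b) Conventional penalty formulation: the violation is just another term in the index.
def penalized(u, rho=100.0):
    violation = max(0.0, 1.0 - (u[0] + u[1]))
    return vibration_metric(u) + rho * violation**2

res_penalty = minimize(penalized, u0, method="Nelder-Mead")

print(res_constrained.x, res_penalty.x)
```

The constrained solve satisfies the limit exactly, whereas the penalty version only approaches it as the penalty weight grows, which is the trade-off the abstract alludes to.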
Stan : A Probabilistic Programming Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
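As a small usage illustration of the interfaces mentioned above, the sketch below fits a trivial Bernoulli model from Python. It assumes the PyStan 2.x API (pystan.StanModel / .sampling / .optimizing), which differs from the later pystan 3 interface, and the model and data are invented for the example.

```python
import pystan  # assumes the PyStan 2.x interface

model_code = """
data {
  int<lower=0> N;
  int<lower=0, upper=1> y[N];
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(1, 1);        // prior
  y ~ bernoulli(theta);      // likelihood
}
"""

data = {"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]}

sm = pystan.StanModel(model_code=model_code)

# Full Bayesian inference with the No-U-Turn sampler (adaptive HMC).
fit = sm.sampling(data=data, iter=2000, chains=4)
print(fit)

# Penalized maximum likelihood / MAP point estimate via L-BFGS.
mle = sm.optimizing(data=data)
print(mle["theta"])
```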
Optimization of batteries for plug-in hybrid electric vehicles
NASA Astrophysics Data System (ADS)
English, Jeffrey Robb
This thesis presents a method to quickly determine the optimal battery for an electric vehicle given a set of vehicle characteristics and desired performance metrics. The model is based on four independent design variables: cell count, cell capacity, state-of-charge window, and battery chemistry. Performance is measured in seven categories: cost, all-electric range, maximum speed, acceleration, battery lifetime, lifetime greenhouse gas emissions, and charging time. The performance of each battery is weighted according to a user-defined objective function to determine its overall fitness. The model is informed by a series of battery tests performed on scaled-down battery samples. Seven battery chemistries were tested for capacity at different discharge rates, maximum output power at different charge levels, and performance in a real-world automotive duty cycle. The results of these tests enable a prediction of the performance of the battery in an automobile. Testing was performed at both room temperature and low temperature to investigate the effects of battery temperature on operation. The testing highlighted differences in behavior between lithium-, nickel-, and lead-based batteries. Battery performance decreased with temperature across all samples, with the largest effect on nickel-based chemistries. Output power also decreased, with lead acid batteries being the least affected by temperature. Lithium-ion batteries were found to be highly efficient (>95%) under a vehicular duty cycle; nickel and lead batteries have greater losses. Low temperatures hindered battery performance and resulted in accelerated failure in several samples. Lead acid, lead tin, and lithium nickel alloy batteries were unable to complete the low-temperature testing regime without losing significant capacity and power capability. This is a concern for their applicability in electric vehicles intended for cold climates, which have to maintain battery temperature during long periods of inactivity. Three sample optimizations were performed: a compact car, a truck, and a sports car. The compact car benefits from increased battery capacity despite the associated higher cost. The truck optimization returned the smallest possible battery of each chemistry, indicating that electrification is not advisable. The sports car optimization resulted in the largest possible battery, indicating large performance gains from increased electrification. These results mirror the current state of the electric vehicle market.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
Defining Optimal Brain Health in Adults
Gorelick, Philip B.; Furie, Karen L.; Iadecola, Costantino; Smith, Eric E.; Waddy, Salina P.; Lloyd-Jones, Donald M.; Bae, Hee-Joon; Bauman, Mary Ann; Dichgans, Martin; Duncan, Pamela W.; Girgus, Meighan; Howard, Virginia J.; Lazar, Ronald M.; Seshadri, Sudha; Testai, Fernando D.; van Gaal, Stephen; Yaffe, Kristine; Wasiak, Hank; Zerna, Charlotte
2017-01-01
Cognitive function is an important component of aging and predicts quality of life, functional independence, and risk of institutionalization. Advances in our understanding of the role of cardiovascular risks have shown them to be closely associated with cognitive impairment and dementia. Because many cardiovascular risks are modifiable, it may be possible to maintain brain health and to prevent dementia in later life. The purpose of this American Heart Association (AHA)/American Stroke Association presidential advisory is to provide an initial definition of optimal brain health in adults and guidance on how to maintain brain health. We identify metrics to define optimal brain health in adults based on inclusion of factors that could be measured, monitored, and modified. From these practical considerations, we identified 7 metrics to define optimal brain health in adults that originated from AHA’s Life’s Simple 7: 4 ideal health behaviors (nonsmoking, physical activity at goal levels, healthy diet consistent with current guideline levels, and body mass index <25 kg/m2) and 3 ideal health factors (untreated blood pressure <120/<80 mm Hg, untreated total cholesterol <200 mg/dL, and fasting blood glucose <100 mg/dL). In addition, in relation to maintenance of cognitive health, we recommend following previously published guidance from the AHA/American Stroke Association, Institute of Medicine, and Alzheimer’s Association that incorporates control of cardiovascular risks and suggest social engagement and other related strategies. We define optimal brain health but recognize that the truly ideal circumstance may be uncommon because there is a continuum of brain health as demonstrated by AHA’s Life’s Simple 7. Therefore, there is opportunity to improve brain health through primordial prevention and other interventions. Furthermore, although cardiovascular risks align well with brain health, we acknowledge that other factors differing from those related to cardiovascular health may drive cognitive health. Defining optimal brain health in adults and its maintenance is consistent with the AHA’s Strategic Impact Goal to improve cardiovascular health of all Americans by 20% and to reduce deaths resulting from cardiovascular disease and stroke by 20% by the year 2020. This work in defining optimal brain health in adults serves to provide the AHA/American Stroke Association with a foundation for a new strategic direction going forward in cardiovascular health promotion and disease prevention. PMID:28883125
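The 7 metrics listed above can be transcribed directly into a small checker; the sketch below does only that. The field names, the assumed units (mm Hg, mg/dL, kg/m²), and the count-based summary are conveniences of this sketch, not part of the AHA definition.

```python
# The 7 ideal-health metrics listed above, as simple threshold checks.
def brain_health_metrics(p):
    return {
        "nonsmoking": not p["smoker"],
        "physical_activity_at_goal": p["activity_at_goal"],
        "healthy_diet_at_goal": p["diet_at_goal"],
        "bmi_lt_25": p["bmi_kg_m2"] < 25,
        "untreated_bp_lt_120_80": (not p["bp_treated"]
                                   and p["sbp_mmhg"] < 120 and p["dbp_mmhg"] < 80),
        "untreated_tc_lt_200": not p["chol_treated"] and p["total_chol_mg_dl"] < 200,
        "fasting_glucose_lt_100": p["fasting_glucose_mg_dl"] < 100,
    }

example = {"smoker": False, "activity_at_goal": True, "diet_at_goal": False,
           "bmi_kg_m2": 23.4, "bp_treated": False, "sbp_mmhg": 118, "dbp_mmhg": 76,
           "chol_treated": False, "total_chol_mg_dl": 185, "fasting_glucose_mg_dl": 92}

metrics = brain_health_metrics(example)
print(sum(metrics.values()), "of 7 ideal metrics met")
```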
Design of a sensitive grating-based phase contrast mammography prototype (Conference Presentation)
NASA Astrophysics Data System (ADS)
Arboleda Clavijo, Carolina; Wang, Zhentian; Köhler, Thomas; van Stevendaal, Udo; Martens, Gerhard; Bartels, Matthias; Villanueva-Perez, Pablo; Roessl, Ewald; Stampanoni, Marco
2017-03-01
Grating-based phase contrast mammography can help facilitate breast cancer diagnosis, as several research works have demonstrated. To translate this technique to the clinics, it has to be adapted to cover a large field of view within a limited exposure time and with a clinically acceptable radiation dose. This indicates that a straightforward approach would be to install a grating interferometer (GI) into a commercial mammography device. We developed a wave-propagation-based optimization method to select the most convenient GI designs in terms of phase and dark-field sensitivities for the Philips Microdose Mammography (PMM) setup. The phase sensitivity was defined as the minimum detectable breast tissue electron density gradient, whereas the dark-field sensitivity was defined as its corresponding signal-to-noise ratio (SNR). To be able to derive sample-dependent sensitivity metrics, a visibility reduction model for breast tissue was formulated, based on previous research works on the dark-field signal and utilizing available Ultra-Small-Angle X-ray Scattering (USAXS) data and the outcomes of measurements on formalin-fixed breast tissue specimens carried out in tube-based grating interferometers. The results of this optimization indicate that the optimal scenarios for each metric are different and fundamentally depend on the noise behavior of the signals and on the visibility reduction trend with respect to the system autocorrelation length. In addition, since the inter-grating distance is constrained by the space available between the breast support and the detector, the best way to improve sensitivity is to use a small G2 pitch.
Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.
Rathi, Shweta; Gupta, Rajesh
2014-04-01
Water quality must be monitored at salient locations in water distribution networks (WDNs) to assure the safe quality of water supplied to consumers. Such monitoring stations (MSs) provide warning against accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, detection likelihood, and others, have been considered independently or jointly in determining the optimal number and location of MSs in WDNs. "Demand coverage", defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on the formulation of a coverage matrix using pre-specified coverage criteria and optimization have been suggested. A coverage criterion is defined as a minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion, so the design of the monitoring stations becomes subjective. A simple methodology is proposed herein which iteratively selects MSs, priority-wise, to achieve a targeted demand coverage. The proposed methodology provided the same number and location of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from the consideration of coverage criteria. The application of the methodology is also shown on a WDN of the Dharampeth zone (Nagpur city WDN in Maharashtra, India) having 285 nodes and 367 pipes.
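A minimal sketch of the kind of priority-wise iterative selection described above: candidate stations are added greedily by the demand they newly cover until a target fraction of total network demand is monitored. The coverage sets and demands are hypothetical inputs; deriving them from the flow paths (the coverage-matrix step) is not reproduced here.

```python
def select_monitoring_stations(coverage, demand, target_fraction=0.9):
    """Greedy, priority-wise selection of monitoring stations.

    coverage[i] is the set of demand nodes whose water passes station i;
    demand[j] is the demand at node j.  Stations are added until the
    covered demand reaches target_fraction of the total demand.
    """
    total = sum(demand.values())
    covered, stations = set(), []
    while sum(demand[j] for j in covered) < target_fraction * total:
        best = max(coverage, key=lambda i: sum(demand[j] for j in coverage[i] - covered))
        gain = sum(demand[j] for j in coverage[best] - covered)
        if gain == 0:            # no remaining station adds coverage; target unreachable
            break
        stations.append(best)
        covered |= coverage[best]
    return stations, sum(demand[j] for j in covered) / total

# Hypothetical 6-node network with 4 candidate station locations.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {2, 6}}
demand = {1: 10, 2: 25, 3: 15, 4: 30, 5: 5, 6: 15}
print(select_monitoring_stations(coverage, demand, target_fraction=0.9))
```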
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and a test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
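The LASSO-penalized selection step can be sketched with scikit-learn as below; the L1-penalized logistic regression, the regularization strength, and the synthetic feature matrix are stand-ins chosen for illustration rather than the exact modelling pipeline of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 30))        # placeholder clinical + metabolite features
y = (X[:, 0] - 0.8 * X[:, 3] + rng.standard_normal(120) > 0).astype(int)  # 1 = abstinent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80, random_state=0)

# L1 (LASSO-type) penalty drives most coefficients to zero, i.e. selects features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
pred = clf.predict(X_te)

sensitivity = recall_score(y_te, pred, pos_label=1)   # responders correctly predicted
specificity = recall_score(y_te, pred, pos_label=0)   # non-responders correctly predicted
n_selected = int(np.sum(clf.coef_ != 0))
print(sensitivity, specificity, n_selected)
```

Repeating the split-fit-evaluate loop five times, as the study does, simply wraps this block in a loop over different random_state values.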
The DCU: the detector control unit for SPICA-SAFARI
NASA Astrophysics Data System (ADS)
Clénet, Antoine; Ravera, Laurent; Bertrand, Bernard; den Hartog, Roland H.; Jackson, Brian D.; van Leeuven, Bert-Joost; van Loon, Dennis; Parot, Yann; Pointecouteau, Etienne; Sournac, Anthony
2014-08-01
IRAP is developing the warm electronics, the so-called "Detector Control Unit" (DCU), in charge of the readout of SPICA-SAFARI's TES-type detectors. The architecture of the electronics used to read out the 3,500 sensors of the 3 focal plane arrays is based on the frequency-domain multiplexing (FDM) technique. In each of the 24 detection channels the data of up to 160 pixels are multiplexed in the frequency domain between 1 and 3.3 MHz. The DCU provides the AC signals to voltage-bias the detectors; it demodulates the detector data, which are read out in the cold by a SQUID; and it computes a feedback signal for the SQUID to linearize the detection chain in order to optimize its dynamic range. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several µs) and with fast signals (i.e. frequency carriers at 3.3 MHz). This digital signal processing is complex and has to be done simultaneously for the 3,500 pixels. It thus requires an optimisation of the power consumption. We took advantage of the relatively reduced science signal bandwidth (i.e. 20-40 Hz) to decouple the signal sampling frequency (10 MHz) from the data processing rate. Thanks to this method we managed to reduce the total number of operations per second, and thus the power consumption of the digital processing circuit, by a factor of 10. Moreover, we used time-multiplexing techniques to share the resources of the circuit (e.g. a single BBFB module processes 32 pixels). The current version of the firmware is under validation in a Xilinx Virtex 5 FPGA; the final version will be developed in a space-qualified digital ASIC. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, the operation of the detection and readout chains requires properly defining more than 17,500 parameters (about 5 parameters per pixel). Thus it is mandatory to work out an automatic procedure to set these optimal values. We defined a fast algorithm which characterizes the phase correction to be applied by the BBFB firmware and the pixel resonance frequencies. We also defined a technique to set the AC-carrier initial phases in such a way that the amplitude of their sum is minimized (for a better use of the DAC dynamic range).
Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho
Watkins, Carson J.; Quist, Michael C.; Shepard, Bradley B.; Ireland, Susan C.
2016-01-01
This study was conducted on the Kootenai River, Idaho to provide insight on sampling requirements to optimize future monitoring effort associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Sidechannel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring effort in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.
Kelishadi, Roya; Heshmat, Ramin; Ardalan, Gelayol; Qorbani, Mostafa; Taslimi, Mahnaz; Poursafa, Parinaz; Keramatian, Kasra; Taheri, Majzoubeh; Motlagh, Mohammad-Esmaeil
2014-01-01
This study aimed to simplify the diagnostic criteria of pre-hypertension (pre-HTN) and hypertension (HTN) in the pediatric age group, and to determine the accuracy of these simple indexes in a nationally representative sample of Iranian children and adolescents. The diagnostic accuracy of the indexes of systolic blood pressure-to-height ratio (SBPHR) and diastolic BPHR (DBPHR) to define pre-HTN and HTN was determined by the area under the receiver operating characteristic (ROC) curves. The study population consisted of 5,738 Iranian students (2,875 females) with a mean (SD) age of 14.7 (2.4) years. The prevalences of pre-HTN and HTN were 6.9% and 5.6%. The optimal thresholds for defining pre-HTN were 0.73 in males and 0.71 in females for SBPHR, and 0.47 in males and 0.45 in females for DBPHR, respectively. The corresponding figures for HTN were 0.73, 0.71, 0.48, and 0.46, respectively. In both genders, the accuracies of SBPHR and DBPHR in diagnosing pre-HTN and HTN were approximately 80%. BPHR is a valid, simple, inexpensive, and accurate tool to diagnose pre-HTN and HTN in adolescents. The optimal thresholds of SBPHR and DBPHR were consistent with the corresponding figures in other populations of children and adolescents with different racial and ethnic backgrounds. Thus, it is suggested that the use of these indexes can be generalized in programs aiming to screen for elevated blood pressure in the pediatric age group. Copyright © 2013 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
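The ratios themselves are just blood pressure divided by height; the helper below applies the sex-specific thresholds reported above, with units of mm Hg and cm assumed. Note that, as reported, the SBPHR thresholds coincide for pre-HTN and HTN, while only the DBPHR thresholds differ.

```python
# Blood-pressure-to-height ratios with the thresholds reported above.
# Units are assumed to be mm Hg for pressure and cm for height.
def bphr_screen(sbp, dbp, height_cm, male):
    sbphr, dbphr = sbp / height_cm, dbp / height_cm
    pre_htn = (sbphr >= (0.73 if male else 0.71)) or (dbphr >= (0.47 if male else 0.45))
    htn = (sbphr >= (0.73 if male else 0.71)) or (dbphr >= (0.48 if male else 0.46))
    return {"SBPHR": round(sbphr, 3), "DBPHR": round(dbphr, 3),
            "pre_HTN_flag": pre_htn, "HTN_flag": htn}

print(bphr_screen(sbp=126, dbp=78, height_cm=168, male=True))
```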
Alladio, Eugenio; Biosa, Giulia; Seganti, Fabrizio; Di Corcia, Daniele; Salomone, Alberto; Vincenti, Marco; Baumgartner, Markus R
2018-05-11
The quantitative determination of ethyl glucuronide (EtG) in hair samples is consistently used throughout the world to assess chronic excessive alcohol consumption. For administrative and legal purposes, the analytical results are compared with cut-off values recognized by regulatory authorities and scientific societies. However, it has recently been recognized that the analytical results depend on the hair sample pretreatment procedures, including the crumbling and extraction conditions. A systematic evaluation of the EtG extraction conditions from pulverized scalp hair was conducted by design of experiments (DoE), considering the extraction time, temperature, pH, and solvent composition as potential influencing factors. It was concluded that an overnight extraction at 60°C with pure water at neutral pH represents the most effective conditions to achieve high extraction yields. The absence of differential degradation of the internal standard (isotopically labeled EtG) under such conditions was confirmed, and the overall analytical method was validated according to SWGTOX and ISO 17025 criteria. Twenty real hair samples with different EtG content were analyzed with three commonly accepted procedures: (a) hair manually cut into snippets and extracted at room temperature; (b) pulverized hair extracted at room temperature; (c) hair treated with the optimized method. Average increments of EtG concentration of around 69% (from a to c) and 29% (from b to c) were recorded. In light of these results, the authors urge the scientific community to undertake an inter-laboratory study with the aim of defining in more detail the optimal hair EtG detection method and verifying the corresponding cut-off level for legal enforcement. This article is protected by copyright. All rights reserved.
Determination of tramadol by dispersive liquid-liquid microextraction combined with GC-MS.
Habibollahi, Saeed; Tavakkoli, Nahid; Nasirian, Vahid; Khani, Hossein
2015-01-01
Dispersive liquid-liquid microextraction (DLLME) coupled with gas chromatography-mass spectrometry (GC-MS) has been developed for the preconcentration and determination of tramadol, ((±)-cis-2-[(dimethylamino)methyl]-1-(3-methoxyphenyl)cyclohexanol-HCl), in aqueous and biological samples (urine, blood). DLLME is a simple, rapid and efficient method for the determination of drugs in aqueous samples. Factors affecting the DLLME process were defined and optimized for the extraction of tramadol, including the type and volume of the extraction and disperser solvents, the pH of the donor phase, the extraction time, and the ionic strength of the donor phase. Under the optimal conditions, and using 2-nitrophenol as the internal standard, tramadol was determined by GC-MS and the figures of merit of the method were evaluated. The enrichment factor, relative recovery, and limit of detection were 420, 99.2%, and 0.08 µg L(-1), respectively. The linear range was between 0.26 and 220.00 µg L(-1) (R(2) = 0.9970). The relative standard deviation for 50.00 µg L(-1) of tramadol in aqueous samples, using 2-nitrophenol as the internal standard, was 3.6% (n = 7). Finally, the performance of DLLME was evaluated for the analysis of tramadol in urine and blood. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed that considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable. However, the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
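The Jensen's-inequality bound referred to above is, in its simplest form, the observation that replacing the random data by its expectation cannot increase the optimal cost when the objective is convex in the uncertain parameters. Writing the planning problem generically (the notation below is ours, not the paper's):

```latex
\min_{x \in X} f\bigl(x, \mathbb{E}[\xi]\bigr) \;\le\; \min_{x \in X} \mathbb{E}\bigl[f(x, \xi)\bigr]
\qquad \text{whenever } f(x, \cdot) \text{ is convex for every } x \in X.
```

The "expected-scenario" problem on the left is the cheap lower bound; the decomposition phase then tightens it toward the true stochastic optimum.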
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J described in the objective function file, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL for chosen case studies in the field of technical cybernetics and bioengineering.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design with good performance, a new comprehensive performance optimization procedure is presented, combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through the optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
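The comprehensive objective function mentioned above amounts to a weighted combination of the two criteria. In a generic notation of our own (not the paper's), with total hydraulic loss Δh, cavitation coefficient σ, weight factors w1 and w2, and design vector p:

```latex
F(\mathbf{p}) = w_1\,\Delta h(\mathbf{p}) + w_2\,\sigma(\mathbf{p}),
\qquad w_1, w_2 \ge 0,
```

so shifting weight between w1 and w2 trades hydraulic efficiency against cavitation performance, which is how the weight factors control the performance of the designed runner.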
The Optimization by Using the Learning Styles in the Adaptive Hypermedia Applications
ERIC Educational Resources Information Center
Hamza, Lamia; Tlili, Guiassa Yamina
2018-01-01
This article addresses the learning style as a criterion for optimization of adaptive content in hypermedia applications. First, the authors present the different optimization approaches proposed in the area of adaptive hypermedia systems whose goal is to define the optimization problem in this type of system. Then, they present the architecture…
An initiative in multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The optimization formulation is described in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
An initiative in multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1988-01-01
Described is a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The activity is being guided by a Steering Committee made up of key NASA and Army researchers and managers. The committee, which has been named IRASC (Integrated Rotorcraft Analysis Steering Committee), has defined two principal foci for the activity: a white paper which sets forth the goals and plans of the effort; and a rotor design project which will validate the basic constituents, as well as the overall design methodology for multidisciplinary optimization. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, some significant progress has been made, principally in the areas of single discipline optimization. Results are given which represent accomplishments in rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
Automatic genetic optimization approach to two-dimensional blade profile design for steam turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trigg, M.A.; Tubby, G.R.; Sheard, A.G.
1999-01-01
In this paper a systematic approach to the optimization of two-dimensional blade profiles is presented. A genetic optimizer has been developed that modifies the blade profile and calculates its profile loss. This process is automatic, producing profile designs significantly faster and with significantly lower loss than has previously been possible. The optimizer developed uses a genetic algorithm to optimize a two-dimensional profile, defined using 17 parameters, for minimum loss with a given flow condition. The optimizer works with a population of two-dimensional profiles with varied parameters. A CFD mesh is generated for each profile, and the result is analyzed using a two-dimensional blade-to-blade solver, written for steady viscous compressible flow, to determine profile loss. The loss is used as the measure of a profile's fitness. The optimizer uses this information to select the members of the next population, applying crossovers, mutations, and elitism in the process. Using this method, the optimizer tends toward the best values for the parameters defining the profile with minimum loss.
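A compact sketch of the optimizer loop described above: a population of 17-parameter profile vectors is evolved with elitism, crossover, and mutation against a fitness equal to the profile loss. The profile_loss function here is a hypothetical placeholder for the mesh generation and two-dimensional blade-to-blade solve, and all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
N_PARAMS, POP, GENS, ELITE, MUT_SD = 17, 40, 100, 2, 0.05

def profile_loss(params):
    """Placeholder for CFD mesh generation + blade-to-blade loss evaluation."""
    return float(np.sum((params - 0.3) ** 2))   # hypothetical smooth loss surface

def evolve():
    pop = rng.uniform(0.0, 1.0, size=(POP, N_PARAMS))     # bounded profile parameters
    for _ in range(GENS):
        loss = np.array([profile_loss(p) for p in pop])
        order = np.argsort(loss)
        elites = pop[order[:ELITE]]                        # elitism: keep the best as-is
        children = []
        while len(children) < POP - ELITE:
            # Parents drawn from the fitter half, uniform crossover, Gaussian mutation.
            i, j = rng.choice(order[: POP // 2], size=2, replace=False)
            mask = rng.random(N_PARAMS) < 0.5
            child = np.where(mask, pop[i], pop[j]) + rng.normal(0.0, MUT_SD, N_PARAMS)
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.vstack([elites, np.array(children)])
    loss = np.array([profile_loss(p) for p in pop])
    return pop[np.argmin(loss)], float(loss.min())

best_params, best_loss = evolve()
print(best_loss)
```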
An Optimized Configuration for the Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Sawant, Hanumant; Faria, Claudio; Stephany, Stephan
The Brazilian Decimetric Array (BDA) is a radio interferometer designed to operate in the frequency ranges of 1.2-1.7, 2.8 and 5.6 GHz and to obtain images of radio sources with high dynamic range. A 5-antenna configuration, implemented in BDA phase I, is already operational. Phase II will provide a 26-antenna configuration forming a compact T-array, whereas phase III will add a further 12 antennas. However, the BDA site has topographic constraints that preclude the placement of these antennas along the lines defined by the 3 arms of the T-array. Therefore, some antennas must be displaced in a direction slightly transverse to these lines. This work investigates possible optimized configurations for all 38 antennas, spread over an area of 2.5 x 1.25 km; the optimal positions of the last 12 antennas had to be determined. A new optimization strategy was therefore proposed to obtain the optimal array configuration. It is based on the entropy of the distribution of the sampled points in the Fourier plane. A stochastic model, Ant Colony Optimization, uses the entropy of this distribution to iteratively refine the candidate solutions. The proposed strategy can be used to determine antenna locations for free-shape arrays in order to provide uniform u-v coverage with minimum redundancy of sampled points in the u-v plane, so that the result is less susceptible to errors due to unmeasured Fourier components. A different target distribution could also be chosen for the coverage. The strategy further allows the topographical constraints of the available site to be taken into account, and it provides an optimal configuration even with the predetermined placement of the 26 antennas that compose the central T-array; in this case, the optimal locations of the last 12 antennas were determined. Performance results corresponding to the Fourier-plane coverage, synthesized beam, and sidelobe levels are shown for this optimized BDA configuration and are compared to the results of the standard T-array configuration, which cannot be implemented due to site constraints.
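The entropy criterion described above can be written down compactly: bin the sampled (u, v) points, form the empirical distribution, and score a candidate configuration by its Shannon entropy, which is maximal for uniform coverage. The baseline generation and binning below are simplifications for illustration (single instantaneous snapshot, no Earth-rotation synthesis, arbitrary bin grid).

```python
import numpy as np

def uv_points(antenna_xy):
    """Instantaneous (u, v) samples: all pairwise baseline vectors (and conjugates)."""
    xy = np.asarray(antenna_xy, dtype=float)
    baselines = (xy[:, None, :] - xy[None, :, :]).reshape(-1, 2)
    return baselines[np.any(baselines != 0, axis=1)]   # drop zero-length autocorrelations

def coverage_entropy(antenna_xy, bins=16, extent=2500.0):
    """Shannon entropy of the binned u-v distribution (higher = more uniform coverage)."""
    uv = uv_points(antenna_xy)
    hist, _, _ = np.histogram2d(uv[:, 0], uv[:, 1], bins=bins,
                                range=[[-extent, extent], [-extent, extent]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
config = rng.uniform(0, 2500, size=(38, 2))   # hypothetical 38-antenna layout [m]
print(coverage_entropy(config))
```

An Ant Colony (or any other stochastic) optimizer would treat this entropy as the score to maximize while perturbing only the 12 free antenna positions.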
Continuous Fiber Ceramic Composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fareed, Ali; Craig, Phillip A.
2002-09-01
Fiber-reinforced ceramic composites demonstrate the high-temperature stability of ceramics, with an increased fracture toughness resulting from the fiber reinforcement of the composite. The material optimization performed under the Continuous Fiber Ceramic Composites (CFCC) program comprised a series of systematic optimizations. The overall goals were to define the processing window, to increase the robustness of the process, to increase process yield while reducing costs, and to define the complexity of parts that could be fabricated.
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
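A minimal sketch of the conjugate posterior calculation behind such a decision rule follows, assuming exponential event times per arm so that a gamma prior on each hazard rate yields a gamma posterior. The gamma hyperparameters standing in for the meta-analytic-predictive control prior, the hazard-ratio threshold, and the decision cut-off are illustrative assumptions, not the paper's values.

import numpy as np

rng = np.random.default_rng(2)

def posterior_hr_go_prob(d_trt, t_trt, d_ctl, t_ctl,
                         prior_trt=(0.001, 0.001),   # near non-informative treatment prior
                         prior_ctl=(20.0, 40.0),     # informative control prior (assumed)
                         hr_threshold=0.8, n_draws=100_000):
    # Exponential likelihood + gamma prior -> gamma posterior on each rate.
    lam_trt = rng.gamma(prior_trt[0] + d_trt, 1.0 / (prior_trt[1] + t_trt), n_draws)
    lam_ctl = rng.gamma(prior_ctl[0] + d_ctl, 1.0 / (prior_ctl[1] + t_ctl), n_draws)
    hr = lam_trt / lam_ctl
    return float(np.mean(hr < hr_threshold))         # P(HR < threshold | data)

# Example: 30 vs 40 events, 80 vs 70 patient-years of follow-up.
print(posterior_hr_go_prob(d_trt=30, t_trt=80.0, d_ctl=40, t_ctl=70.0))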
PhyloChip™ microarray comparison of sampling methods used for coral microbial ecology
Kellogg, Christina A.; Piceno, Yvette M.; Tom, Lauren M.; DeSantis, Todd Z.; Zawada, David G.; Andersen, Gary L.
2012-01-01
Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease.
Bigras, Gilbert
2012-06-01
Color deconvolution relies on the determination of unitary optical density vectors (OD(3D)) derived from pure constituent stains initially defined as intensity vectors in RGB space. OD(3D) can be expressed in polar coordinates (phi, theta, radius); since the radius always equals one, it can be ignored, allowing easier handling of unitary optical density 2D vectors (OD(2D)). The OD(2D) of pure stains used in anatomical pathology were assessed as centroid values (phi, theta) with a measure of variance, inertia, based on arc lengths between the centroid and sampled points. These variables were plotted on a stereographic projection plane. In order to assess pure-stain OD(2D), different methods of sampling RGB pixels were tested and compared: (1) direct sampling of nuclei from preparations using (a) composite H&E and (b) hematoxylin only, and (2) for any pure-stain RGB image, sampling through different associated 8-bit masks (saturation, brightness, and RGB average). The behavior of phi, theta, and inertia was obtained by moving the threshold in the 8-bit mask histograms. The stability of phi and theta was tested against variable light intensity during image acquisition and by using 2 different image acquisition systems. The more saturated the RGB pixels, the more stable the phi, theta, and inertia values obtained. Different commercial hematoxylins have distinct OD(2D) characteristics. The ultraView DAB stain shows high inertia and is angularly closer to the usual counterstains than the ultraView Red stain, which also has lower inertia; superior accuracy is therefore expected from the latter stain. Phi and theta OD(2D) values are sensitive to light intensity variation and to the imaging system and objectives used. An ImageJ plugin was designed to plot and interactively modify OD(2D) values with instant update of the color deconvolution, allowing heuristic segmentation. Utilization of polar OD(2D) eases the statistical characterization of OD(3D) vectors: conditions of optimal sampling were demonstrated and various factors influencing OD(2D) stability were explored. The stereographic projection plane allows intuitive visualization of OD(3D) vectors as well as heuristic vectorial modification. These findings are not restricted to anatomical pathology and can be applied to bright-field microscopy and subtractive color applications in general.
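A short sketch of the basic conversion from an RGB pixel to a unitary optical-density vector and then to the angles (phi, theta) follows; the white point of (255, 255, 255) and the particular spherical-angle convention are assumptions, since any consistent convention serves for the stereographic plot.

import numpy as np

def rgb_to_od_angles(rgb, white=255.0, eps=1e-6):
    rgb = np.clip(np.asarray(rgb, dtype=float), 1.0, white)
    od = -np.log10(rgb / white)                 # Beer-Lambert optical density per channel
    norm = np.linalg.norm(od)
    if norm < eps:                              # nearly white pixel: direction undefined
        return None
    x, y, z = od / norm                         # unit OD(3D) vector, radius = 1
    theta = np.arccos(z)                        # inclination measured from the B axis
    phi = np.arctan2(y, x)                      # azimuth in the R-G plane
    return np.degrees(phi), np.degrees(theta)

print(rgb_to_od_angles([72, 61, 139]))          # e.g. a hematoxylin-like pixel (made up)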
Red Cell Properties after Different Modes of Blood Transportation
Makhro, Asya; Huisjes, Rick; Verhagen, Liesbeth P.; Mañú-Pereira, María del Mar; Llaudet-Planas, Esther; Petkova-Kirova, Polina; Wang, Jue; Eichler, Hermann; Bogdanova, Anna; van Wijk, Richard; Vives-Corrons, Joan-Lluís; Kaestner, Lars
2016-01-01
Transportation of blood samples is unavoidable for the assessment of specific parameters in blood of patients with rare anemias, for blood doping testing, or for research purposes. Despite the awareness that shipment may substantially alter multiple parameters, no study of comparable scope has been performed to assess these changes and optimize shipment conditions to reduce transportation-related artifacts. Here we investigate the changes in multiple parameters in blood of healthy donors over 72 h of simulated shipment conditions. Three different anticoagulants (K3EDTA, sodium heparin, and citrate-based CPDA) were tested at two temperatures (4°C and room temperature) to define the optimal transportation conditions. The parameters measured cover common cytology and biochemistry parameters (complete blood count, hematocrit, morphological examination), red blood cell (RBC) volume, ion content and density, membrane properties and stability (hemolysis, osmotic fragility, membrane heat stability, patch-clamp investigations, and formation of microvesicles), Ca2+ handling, RBC metabolism, activity of numerous enzymes, and O2 transport capacity. Our findings indicate that individual sets of parameters may require different shipment settings (anticoagulants, temperature). Most of the parameters, except for ion (Na+, K+, Ca2+) handling and possibly reticulocyte counts, favor transportation at 4°C. Whereas plasma and intraerythrocytic Ca2+ cannot be accurately measured in the presence of chelators such as citrate and EDTA, the majority of Ca2+-dependent parameters are stabilized in CPDA samples. Even in blood samples from healthy donors transported using an optimized shipment protocol, the majority of parameters were stable only within the first 24 h, a condition that may not hold for samples from patients with rare anemias. This calls for the shortest possible shipping, using fast courier services, to the closest accessible expert laboratory. Mobile laboratories, or travel of the patients to specialized laboratories, may be the only option for some groups of patients with highly unstable RBCs. PMID:27471472
NASA Astrophysics Data System (ADS)
Chung, Kee-Choo; Park, Hwangseo
2016-11-01
The performance of the extended solvent-contact model has been assessed in the SAMPL5 blind prediction challenge for the distribution coefficient (LogD) of drug-like molecules with respect to the cyclohexane/water partitioning system. All the atomic parameters defined for 41 atom types in the solvation free energy function were optimized by operating a standard genetic algorithm with respect to the water and cyclohexane solvents. In the parameterizations for cyclohexane, the experimental solvation free energy (ΔGsol) data of 15 molecules for 1-octanol were combined with those of 77 molecules for cyclohexane to construct a training set, because ΔGsol values of the former were unavailable for cyclohexane in publicly accessible databases. Using this hybrid training set, we established the LogD prediction model with a correlation coefficient (R), average error (AE), and root mean square error (RMSE) of 0.55, 1.53, and 3.03, respectively, for the comparison of experimental and computational results for 53 SAMPL5 molecules. The modest accuracy in LogD prediction could be attributed to the incomplete optimization of atomic solvation parameters for cyclohexane. With respect to the 31 SAMPL5 molecules containing only atom types for which experimental reference data for ΔGsol were available for both water and cyclohexane, the accuracy in LogD prediction increased remarkably, with R, AE, and RMSE values of 0.82, 0.89, and 1.60, respectively. This significant enhancement in performance stemmed from the better optimization of atomic solvation parameters obtained by limiting the training set to molecules with experimental ΔGsol data for cyclohexane. Owing to the simplicity of model building and the low computational cost of parameterization, the extended solvent-contact model is anticipated to serve as a valuable computational tool for LogD prediction upon the enrichment of experimental ΔGsol data for organic solvents.
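As a small illustration of the thermodynamic link between the two solvation free energies and the cyclohexane/water partition coefficient, plus the error metrics quoted above, the sketch below treats LogD as LogP (i.e. neglects ionization in water) and takes the average error as the mean unsigned error; both are assumptions of this sketch, not statements about the paper's exact conventions.

import numpy as np

R_KCAL = 1.987204e-3          # gas constant in kcal / (mol K)

def log_p_cyclohexane_water(dg_water, dg_cyclohexane, T=298.15):
    # More favorable (more negative) solvation in cyclohexane -> higher LogP.
    return (dg_water - dg_cyclohexane) / (2.303 * R_KCAL * T)

def metrics(pred, expt):
    pred, expt = np.asarray(pred, float), np.asarray(expt, float)
    r = np.corrcoef(pred, expt)[0, 1]
    ae = np.mean(np.abs(pred - expt))            # mean unsigned error (assumed convention)
    rmse = np.sqrt(np.mean((pred - expt) ** 2))
    return r, ae, rmse

print(log_p_cyclohexane_water(dg_water=-4.2, dg_cyclohexane=-6.9))   # made-up free energies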
Iron isotope effect in the iron arsenide superconductor (Ca0.4Na0.6)Fe2As2
NASA Astrophysics Data System (ADS)
Tsuge, Y.; Nishio, T.; Iyo, A.; Tanaka, Y.; Eisaki, H.
2014-05-01
We report a new sample synthesis technique for polycrystalline (Ca1-xNax)Fe2As2 (0
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
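A minimal sketch of a least-squares PCE fit for a single uniform input follows, comparing Monte Carlo with Latin hypercube sampling; the stand-in model, expansion order, and sample sizes are illustrative assumptions. Legendre polynomials are the orthogonal basis for a uniform variable on [-1, 1].

import numpy as np
from numpy.polynomial import legendre

def model(x):                          # stand-in for an expensive simulator
    return np.exp(0.7 * x) + 0.2 * x ** 3

def pce_fit(xs, order):
    Psi = legendre.legvander(xs, order)            # design matrix of P_0..P_order
    coef, *_ = np.linalg.lstsq(Psi, model(xs), rcond=None)
    return coef

rng = np.random.default_rng(3)
order, n = 6, 40                        # oversampling ratio n/(order+1) ~ 5.7
x_mc = rng.uniform(-1, 1, n)                                     # Monte Carlo
x_lhs = (rng.permutation(n) + rng.random(n)) / n * 2 - 1         # Latin hypercube
for name, xs in [("MC", x_mc), ("LHS", x_lhs)]:
    coef = pce_fit(xs, order)
    x_test = rng.uniform(-1, 1, 2000)
    err = np.sqrt(np.mean((legendre.legval(x_test, coef) - model(x_test)) ** 2))
    print(name, err)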
Factors influencing patient interest in plastic surgery and the process of selecting a surgeon.
Galanis, Charles; Sanchez, Ivan S; Roostaeian, Jason; Crisera, Christopher
2013-05-01
Understanding patient interest in cosmetic surgery is an important tool in delineating the current market for aesthetic surgeons. Similarly, defining those factors that most influence surgeon selection is vital for optimizing marketing strategies. The authors evaluate a general population sample's interest in cosmetic surgery and investigate which factors patients value when selecting their surgeon. An anonymous questionnaire was distributed to 96 individuals in waiting rooms in nonsurgical clinics. Respondents were questioned on their ability to differentiate between a "plastic" surgeon and a "cosmetic" surgeon, their interest in having plastic surgery, and factors affecting surgeon and practice selection. Univariate and multivariate analyses were conducted to define any significant correlative relationships. Respondents consisted of 15 men and 81 women. Median age was 34.5 (range, 18-67) years. Overall, 20% were currently considering plastic surgery and 78% stated they would consider it in the future. The most common area of interest was a procedure for the face. The most important factors in selecting a surgeon were surgeon reputation and board certification. The least important were quality of advertising and surgeon age. The most cited factor preventing individuals from pursuing plastic surgery was fear of a poor result. Most (60%) patients would choose a private surgicenter-based practice. The level of importance for each studied attribute can help plastic surgeons understand the market for cosmetic surgery as well as what patients look for when selecting their surgeon. This study helps to define those attributes in a sample population.
Ligthart, Sjoerd T; Coumans, Frank A W; Attard, Gerhardt; Cassidy, Amy Mulick; de Bono, Johann S; Terstappen, Leon W M M
2011-01-01
Circulating tumour cells (CTC) in patients with metastatic carcinomas are associated with poor survival and can be used to guide therapy. Classification of CTC however remains subjective, as they are morphologically heterogeneous. We acquired digital images, using the CellSearch™ system, from blood of 185 castration resistant prostate cancer (CRPC) patients and 68 healthy subjects to define CTC by computer algorithms. Patient survival data was used as the training parameter for the computer to define CTC. The computer-generated CTC definition was validated on a separate CRPC dataset comprising 100 patients. The optimal definition of the computer defined CTC (aCTC) was stricter as compared to the manual CellSearch CTC (mCTC) definition and as a consequence aCTC were less frequent. The computer-generated CTC definition resulted in hazard ratios (HRs) of 2.8 for baseline and 3.9 for follow-up samples, which is comparable to the mCTC definition (baseline HR 2.9, follow-up HR 4.5). Validation resulted in HRs at baseline/follow-up of 3.9/5.4 for computer and 4.8/5.8 for manual definitions. In conclusion, we have defined and validated CTC by clinical outcome using a perfectly reproducing automated algorithm.
SoMIR framework for designing high-NDBP photonic crystal waveguides.
Mirjalili, Seyed Mohammad
2014-06-20
This work proposes a modularized framework for designing the structure of photonic crystal waveguides (PCWs) and reducing human involvement during the design process. The proposed framework consists of three main modules: parameters module, constraints module, and optimizer module. The first module is responsible for defining the structural parameters of a given PCW. The second module defines various limitations in order to achieve desirable optimum designs. The third module is the optimizer, in which a numerical optimization method is employed to perform optimization. As case studies, two new structures called Ellipse PCW (EPCW) and Hypoellipse PCW (HPCW) with different shape of holes in each row are proposed and optimized by the framework. The calculation results show that the proposed framework is able to successfully optimize the structures of the new EPCW and HPCW. In addition, the results demonstrate the applicability of the proposed framework for optimizing different PCWs. The results of the comparative study show that the optimized EPCW and HPCW provide 18% and 9% significant improvements in normalized delay-bandwidth product (NDBP), respectively, compared to the ring-shape-hole PCW, which has the highest NDBP in the literature. Finally, the simulations of pulse propagation confirm the manufacturing feasibility of both optimized structures.
Finite element approximation of an optimal control problem for the von Karman equations
NASA Technical Reports Server (NTRS)
Hou, L. Steven; Turner, James C.
1994-01-01
This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Precipitation Model Validation in 3rd Generation Aeroturbine Disc Alloys
NASA Technical Reports Server (NTRS)
Olson, G. B.; Jou, H.-J.; Jung, J.; Sebastian, J. T.; Misra, A.; Locci, I.; Hull, D.
2008-01-01
In support of the application of the DARPA-AIM methodology to the accelerated hybrid thermal process optimization of 3rd generation aeroturbine disc alloys with quantified uncertainty, equilibrium and diffusion couple experiments have identified available fundamental thermodynamic and mobility databases of sufficient accuracy. Using coherent interfacial energies quantified by Single-Sensor DTA nucleation undercooling measurements, PrecipiCalc(TM) simulations of nonisothermal precipitation in both supersolvus and subsolvus treated samples show good agreement with measured gamma particle sizes and compositions. Observed long-term isothermal coarsening behavior defines requirements for further refinement of the elastic misfit energy and for treatment of the parallel evolution of incoherent precipitation at grain boundaries.
Muhammed, Musemma K.; Kot, Witold; Neve, Horst; Mahony, Jennifer; Castro-Mejía, Josué L.; Krych, Lukasz; Hansen, Lars H.; Nielsen, Dennis S.; Sørensen, Søren J.; Heller, Knut J.; van Sinderen, Douwe
2017-01-01
ABSTRACT Despite being potentially highly useful for characterizing the biodiversity of phages, metagenomic studies are currently not available for dairy bacteriophages, partly due to the lack of a standard procedure for phage extraction. We optimized an extraction method that allows the removal of the bulk protein from whey and milk samples with losses of less than 50% of spiked phages. The protocol was applied to extract phages from whey in order to test the notion that members of Lactococcus lactis 936 (now Sk1virus), P335, c2 (now C2virus) and Leuconostoc phage groups are the most frequently encountered in the dairy environment. The relative abundance and diversity of phages in eight and four whey mixtures from dairies using undefined mesophilic mixed-strain cultures containing Lactococcus lactis subsp. lactis biovar diacetylactis and Leuconostoc species (i.e., DL starter cultures) and defined cultures, respectively, were assessed. Results obtained from transmission electron microscopy and high-throughput sequence analyses revealed the dominance of Lc. lactis 936 phages (order Caudovirales, family Siphoviridae) in dairies using undefined DL starter cultures and Lc. lactis c2 phages (order Caudovirales, family Siphoviridae) in dairies using defined cultures. The 936 and Leuconostoc phages demonstrated limited diversity. Possible coinduction of temperate P335 prophages and satellite phages in one of the whey mixtures was also observed. IMPORTANCE The method optimized in this study could provide an important basis for understanding the dynamics of the phage community (abundance, development, diversity, evolution, etc.) in dairies with different sizes, locations, and production strategies. It may also enable the discovery of previously unknown phages, which is crucial for the development of rapid molecular biology-based methods for phage burden surveillance systems. The dominance of only a few phage groups in the dairy environment signifies the depth of knowledge gained over the past decades, which served as the basis for designing current phage control strategies. The presence of a correlation between phages and the type of starter cultures being used in dairies might help to improve the selection and/or design of suitable, custom, and cost-efficient phage control strategies. PMID:28754704
Leça, João M; Pereira, Ana C; Vieira, Ana C; Reis, Marco S; Marques, José C
2015-08-05
Vicinal diketones (VDK), namely diacetyl (DC) and pentanedione (PN), are compounds naturally found in beer that play a key role in the definition of its aroma. In lager beer they are responsible for off-flavors (buttery flavor), and therefore their presence and quantification are of paramount importance to beer producers. Aiming at developing an accurate quantitative monitoring scheme to follow these off-flavor compounds during beer production and in the final product, the head space solid-phase microextraction (HS-SPME) analytical procedure was tuned through experiments planned in an optimal way, and the final settings were fully validated. Optimal design of experiments (O-DOE) is a computational, statistically oriented approach for designing experiments that are most informative according to a well-defined criterion. This methodology was applied to the HS-SPME optimization, leading to the following optimal extraction conditions for the quantification of VDK: a CAR/PDMS fiber, 5 mL of sample in a 20 mL vial, and 5 min of pre-incubation followed by 25 min of extraction at 30 °C with agitation. The validation of the final analytical methodology was performed using a matrix-matched calibration in order to minimize matrix effects. The following key features were obtained: linearity (R(2) > 0.999, both for diacetyl and 2,3-pentanedione), high sensitivity (LOD of 0.92 μg L(-1) and 2.80 μg L(-1), and LOQ of 3.30 μg L(-1) and 10.01 μg L(-1), for diacetyl and 2,3-pentanedione, respectively), recoveries of approximately 100%, and suitable precision (repeatability and reproducibility lower than 3% and 7.5%, respectively). The applicability of the methodology was fully confirmed through an independent analysis of several beer samples, with analyte concentrations ranging from 4 to 200 μg L(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
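For readers unfamiliar with how such validation figures are typically derived, the sketch below shows one common (ICH-style) way to obtain LOD and LOQ from a linear matrix-matched calibration, using LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope with sigma the residual standard deviation of the fit. The example concentrations and responses are made up, and the authors' exact validation procedure may differ.

import numpy as np

conc = np.array([5.0, 10, 25, 50, 100, 200])        # spiked levels, ug/L (synthetic)
area = np.array([0.9, 2.1, 5.2, 10.3, 20.8, 41.5])  # instrument response (synthetic)

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))   # residual standard deviation

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
r2 = 1 - np.sum(resid ** 2) / np.sum((area - area.mean()) ** 2)
print(f"R2={r2:.4f}  LOD={lod:.2f} ug/L  LOQ={loq:.2f} ug/L")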
Esteghamati, Alireza; Ashraf, Haleh; Khalilzadeh, Omid; Zandieh, Ali; Nakhjavani, Manouchehr; Rashidi, Armin; Haghazali, Mehrdad; Asgari, Fereshteh
2010-04-07
We have recently determined the optimal cut-off of the homeostatic model assessment of insulin resistance (HOMA-IR) for the diagnosis of insulin resistance (IR) and metabolic syndrome (MetS) in non-diabetic residents of Tehran, the capital of Iran. The aim of the present study is to establish the optimal cut-off at the national level in the Iranian population with and without diabetes. Data from the third National Surveillance of Risk Factors of Non-Communicable Diseases, available for 3,071 adult Iranian individuals aged 25-64 years, were analyzed. MetS was defined according to the Adult Treatment Panel III (ATPIII) and International Diabetes Federation (IDF) criteria. HOMA-IR cut-offs from the 50th to the 95th percentile were calculated, and the sensitivity, specificity, and positive likelihood ratio for MetS diagnosis were determined. The receiver operating characteristic (ROC) curves of HOMA-IR for MetS diagnosis were depicted, and the optimal cut-offs were determined by two different methods: the Youden index, and the shortest distance from the top left corner of the curve. The area under the curve (AUC) (95% CI) was 0.650 (0.631-0.670) for IDF-defined MetS and 0.683 (0.664-0.703) with the ATPIII definition. The optimal HOMA-IR cut-off for the diagnosis of IDF- and ATPIII-defined MetS in non-diabetic individuals was 1.775 (sensitivity: 57.3%, specificity: 65.3%, with ATPIII; sensitivity: 55.9%, specificity: 64.7%, with IDF). The optimal cut-offs in diabetic individuals were 3.875 (sensitivity: 49.7%, specificity: 69.6%) and 4.325 (sensitivity: 45.4%, specificity: 69.0%) for ATPIII- and IDF-defined MetS, respectively. We determined the optimal HOMA-IR cut-off points for the diagnosis of MetS in the Iranian population with and without diabetes.
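The two cut-off selection rules named above are easy to reproduce; the sketch below computes both the Youden index (maximize sensitivity + specificity - 1) and the shortest distance to the top-left corner of the ROC curve over candidate thresholds. The HOMA-IR values and MetS labels are synthetic placeholders, not study data.

import numpy as np

def optimal_cutoffs(values, labels):
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best_youden, best_dist = None, None
    for c in np.unique(values):
        pred = values >= c                       # positive = at or above cut-off
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        youden = sens + spec - 1
        dist = np.hypot(1 - sens, 1 - spec)      # distance to the (0, 1) corner
        if best_youden is None or youden > best_youden[0]:
            best_youden = (youden, c, sens, spec)
        if best_dist is None or dist < best_dist[0]:
            best_dist = (dist, c, sens, spec)
    return best_youden, best_dist

rng = np.random.default_rng(4)
homa = np.concatenate([rng.lognormal(0.3, 0.5, 700), rng.lognormal(0.9, 0.5, 300)])
mets = np.concatenate([np.zeros(700, int), np.ones(300, int)])
print(optimal_cutoffs(homa, mets))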
Diversity of bacteria and archaea from two shallow marine hydrothermal vents from Vulcano Island.
Antranikian, Garabed; Suleiman, Marcel; Schäfers, Christian; Adams, Michael W W; Bartolucci, Simonetta; Blamey, Jenny M; Birkeland, Nils-Kåre; Bonch-Osmolovskaya, Elizaveta; da Costa, Milton S; Cowan, Don; Danson, Michael; Forterre, Patrick; Kelly, Robert; Ishino, Yoshizumi; Littlechild, Jennifer; Moracci, Marco; Noll, Kenneth; Oshima, Tairo; Robb, Frank; Rossi, Mosè; Santos, Helena; Schönheit, Peter; Sterner, Reinhard; Thauer, Rudolf; Thomm, Michael; Wiegel, Jürgen; Stetter, Karl Otto
2017-07-01
To obtain new insights into community compositions of hyperthermophilic microorganisms, defined as having optimal growth temperatures of 80 °C and above, sediment and water samples were taken from two shallow marine hydrothermal vents (I and II) with temperatures of 100 °C at Vulcano Island, Italy. A combinatorial approach of denaturant gradient gel electrophoresis (DGGE) and metagenomic sequencing was used for microbial community analyses of the samples. In addition, enrichment cultures, growing anaerobically on selected polysaccharides such as starch and cellulose, were also analyzed by the combinatorial approach. Our results showed a high abundance of hyperthermophilic archaea, especially in sample II, and a comparable diverse archaeal community composition in both samples. In particular, the strains of the hyperthermophilic anaerobic genera Staphylothermus and Thermococcus, and strains of the aerobic hyperthermophilic genus Aeropyrum, were abundant. Regarding the bacterial community, ε-Proteobacteria, especially the genera Sulfurimonas and Sulfurovum, were highly abundant. The microbial diversity of the enrichment cultures changed significantly by showing a high dominance of archaea, particularly the genera Thermococcus and Palaeococcus, depending on the carbon source and the selected temperature.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1995-01-01
This paper describes an integrated aerodynamic/dynamic/structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffness, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic designs are performed at a global level and the structural design is carried out at a detailed level with considerable dialog and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several examples.
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.
1994-01-01
This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the switch-based method proposed recently cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
NASA Technical Reports Server (NTRS)
Byrnes, C. I.
1980-01-01
It is noted that recent work by Kamen (1979) on the stability of half-plane digital filters shows that the problem of the existence of a feedback law also arises for other Banach algebras in applications. This situation calls for a realization theory and stabilizability criteria for systems defined over a Banach or Frechet algebra A. Such a theory is developed here, with special emphasis placed on the construction of finitely generated realizations, the existence of coprime factorizations for T(s) defined over A, and the solvability of the quadratic optimal control problem and the associated algebraic Riccati equation over A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masters, Daniel; Steinhardt, Charles; Faisst, Andreas
2015-11-01
Calibrating the photometric redshifts of ≳10^9 galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where in galaxy color space redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color-redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.
Lamas, Leonardo; Drezner, Rene; Otranto, Guilherme; Barrera, Junior
2018-01-01
The aim of this study was to define a method for evaluating a player's decisions during a game, based on the success probability of his actions, and for analyzing the player strategy inferred from game actions. Formal definitions were developed of (i) the stochastic process of player decisions in game situations and (ii) the inference process of player strategy based on his game decisions. The method was applied to the context of soccer goalkeepers. A model of goalkeeper positioning, with geometric parameters and solutions to optimize his position based on the ball position and trajectory, was developed. The model was tested with a sample of 65 professional goalkeepers (28.8 ± 4.1 years old) playing for their national teams in the 2010 and 2014 World Cups. The goalkeeper's decisions were compared to decisions from a large dataset of other goalkeepers, defining the probability of success in each game circumstance. The following were assessed: (i) performance in a defined set of classes of game plays; (ii) the entropy of goalkeepers' decisions; and (iii) the effect of goalkeepers' positioning updates on the outcome (save or goal). Goalkeepers' decisions were similar to the ones with the lowest probability of goal in the dataset. Goalkeepers' entropy varied between 24% and 71% of the maximum possible entropy. Positioning dynamics in the instants that preceded the shot indicated that, in goals and saves, goalkeepers optimized their position before the shot in 21.87% and 83.33% of the situations, respectively. These results validate a method to discriminate successful performance. In conclusion, this method enables a more precise assessment of a player's decision-making ability by consulting a representative dataset of equivalent actions to define the probability of his success. Therefore, it supports the evaluation of the player's decision separately from his technical skill execution, which overcomes the scientific challenge of discriminating the evaluation of a player's decision performance from the action result.
Li, Yan; Shi, Zhou; Wu, Hao-Xiang; Li, Feng; Li, Hong-Yi
2013-10-01
The loss of cultivated land has increasingly become an issue of regional and national concern in China. The definition of management zones is an important measure to protect the limited cultivated land resource. In this study, combined spatial data were applied to define management zones in Fuyang city, China. The yield of cultivated land was first calculated and evaluated and its spatial distribution pattern mapped; the limiting factors affecting the yield were then explored, and maps of their spatial variability were produced using geostatistical analysis. The data were jointly analyzed for management zone definition using a combination of principal component analysis and a fuzzy clustering method, with two cluster validity functions used to determine the optimal number of clusters. Finally, one-way analysis of variance was performed on 3,620 soil sampling points to assess how well the defined management zones reflected the soil properties and productivity level. It was shown that there was great potential for increasing grain production, and that the amount of cultivated land played a key role in maintaining security of grain production. Organic matter, total nitrogen, available phosphorus, elevation, thickness of the plow layer, and probability of irrigation guarantee were the main limiting factors affecting the yield. The optimal number of management zones was three, and there were statistically significant differences in crop yield and field parameters among the defined management zones. Management zone I presented the highest potential crop yield, fertility level, and best agricultural production conditions, whereas management zone III presented the lowest. The study showed that the procedures used may be effective in automatically defining management zones; by developing different management zones, different strategies of cultivated land management and practice in each zone can be determined, which is of great importance for enhancing cultivated land conservation, stabilizing agricultural production, promoting sustainable use of cultivated land, and guaranteeing food security.
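A compact sketch of the zoning pipeline described above follows: principal component scores of standardized soil/terrain variables are clustered with fuzzy c-means and each sample is assigned to the zone with the highest membership. The synthetic data, the number of variables, the fuzziness exponent m = 2, and c = 3 zones are illustrative assumptions; the cluster-validity indices used to choose c are omitted.

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 6))                     # samples x soil/terrain variables (synthetic)
X = (X - X.mean(0)) / X.std(0)                    # standardize
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T                             # first two principal components

def fuzzy_cmeans(data, c=3, m=2.0, n_iter=100):
    n = len(data)
    U = rng.dirichlet(np.ones(c), size=n)         # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ data) / W.sum(0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

U, centers = fuzzy_cmeans(scores, c=3)
zones = U.argmax(axis=1)                          # hard assignment to management zones
print(np.bincount(zones))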
Lee, Hee-Seok; Kang, Jea-Wook; Kim, Byung Hee; Park, Sang-Gyu; Lee, Chan
2011-03-01
The aim of this study was to optimize the culture conditions for the production of biological cyclic hexadepsipeptides (enniatins H, I and MK1688) from Fusarium oxysporum KFCC 11363P. Tests of 10 complete or chemically defined liquid culture media revealed that Fusarium defined medium was the best for the production of enniatins (produced amounts: enniatin H, 185.4 mg/L; enniatin I, 349.1 mg/L; enniatin MK1688, 541.1 mg/L; and total enniatins, 1075.6 mg/L). On the eighth day after inoculation, the maximal production of enniatins was observed at 25°C in Fusarium defined medium. The optimal carbon and nitrogen sources for producing biological cyclic hexadepsipeptides (enniatins H, I, and MK1688) were sucrose and NaNO(3), respectively, and their optimal concentrations were determined by the principle of response surface methodology. It was confirmed that using the optimized growth medium compositions increased the amounts of enniatins H, I, and MK1688, and total enniatins produced to 695.2, 882.4, 824.8, and 2398.5 mg/L, respectively. These findings will assist in formulating microbiological media useful for enniatin research. Copyright © 2010 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Electrochemical synthesis of self-organized TiO2 crystalline nanotubes without annealing
NASA Astrophysics Data System (ADS)
Giorgi, Leonardo; Dikonimos, Theodoros; Giorgi, Rossella; Buonocore, Francesco; Faggio, Giuliana; Messina, Giacomo; Lisi, Nicola
2018-03-01
This work demonstrates that upon anodic polarization in an aqueous fluoride-containing electrolyte, TiO2 nanotube array films can be formed with a well-defined crystalline phase, rather than an amorphous one. The crystalline phase was obtained avoiding any high temperature annealing. We studied the formation of nanotubes in an HF/H2O medium and the development of crystalline grains on the nanotube wall, and we found a facile way to achieve crystalline TiO2 nanotube arrays through a one-step anodization. The crystallinity of the film was influenced by the synthesis parameters, and the optimization of the electrolyte composition and anodization conditions (applied voltage and time) were carried out. For comparison purposes, crystalline anatase TiO2 nanotubes were also prepared by thermal treatment of amorphous nanotubes grown in an organic bath (ethylene glycol/NH4F/H2O). The morphology and the crystallinity of the nanotubes were studied by field emission gun-scanning electron microscopy (FEG-SEM) and Raman spectroscopy, whereas the electrochemical and semiconducting properties were analyzed by means of linear sweep voltammetry, impedance spectroscopy, and Mott-Schottky plots. X-ray photoelectron spectroscopy (XPS) and ultraviolet photoelectron spectroscopy (UPS) allowed us to determine the surface composition and the electronic structure of the samples and to correlate them with the electrochemical data. The optimal conditions to achieve a crystalline phase with high donor concentration are defined.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.
Using lod scores to detect sex differences in male-female recombination fractions.
Feenstra, B; Greenberg, D A; Hodge, S E
2004-01-01
Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect a RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (theta(female), theta(male)); and "constrained," requiring theta(female) = theta(male). We then examined the DeltaELOD (identical with difference between maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant DeltaELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset; and the optimal proportion p(circ) as that value of p that maximizes DeltaELOD. We determined that, surprisingly, p(circ) does not necessarily equal (1/2), although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of theta(female) and theta(male), even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
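A rough sketch of the quantity at the heart of this analysis follows, assuming fully informative phase-known meioses modeled as binomial recombination counts; it computes the per-meiosis expected lod-score difference between the sex-specific (unconstrained) and pooled (constrained) fits and a crude sample-size estimate from a chi-square criterion. The binomial simplification, the chi-square threshold, and the sign convention (unconstrained minus constrained) are assumptions of this sketch; phase-unknown matings require the fuller treatment described above.

import numpy as np

def delta_elod_per_meiosis(theta_m, theta_f, p):
    """p = proportion of paternally informative meioses."""
    theta_c = p * theta_m + (1 - p) * theta_f        # constrained (pooled) estimate
    def kl10(t, tc):                                 # Kullback-Leibler divergence in log10
        return t * np.log10(t / tc) + (1 - t) * np.log10((1 - t) / (1 - tc))
    return p * kl10(theta_m, theta_c) + (1 - p) * kl10(theta_f, theta_c)

theta_m, theta_f, p = 0.05, 0.25, 0.5                # illustrative sex-specific RFs
d = delta_elod_per_meiosis(theta_m, theta_f, p)
# Likelihood-ratio statistic = 2*ln(10)*lod; require it to exceed 3.84 (chi-square, 1 df).
n_needed = int(np.ceil(3.84 / (2 * np.log(10) * d)))
print(f"delta-ELOD per meiosis = {d:.4f}, ~{n_needed} informative meioses needed")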
Ko, Sang-Bae; Choi, H. Alex; Parikh, Gunjan; Helbok, Raimund; Schmidt, J. Michael; Lee, Kiwon; Badjatia, Neeraj; Claassen, Jan; Connolly, E. Sander; Mayer, Stephan A.
2011-01-01
Background and Purpose Limited data exists to recommend specific cerebral perfusion pressure (CPP) targets in patients with intracerebral hemorrhage (ICH). We sought to determine the feasibility of brain multimodality monitoring (MMM) for optimizing CPP and potentially reducing secondary brain injury after ICH. Methods We retrospectively analyzed brain MMM data targeted at perihematomal brain tissue in 18 comatose ICH patients (median monitoring: 164 hours). Physiological measures were averaged over one-hour intervals corresponding to each microdialysis sample. Metabolic crisis (MC) was defined as a lactate/pyruvate ratio (LPR) >40 with a brain glucose concentration <0.7 mmol/L. Brain tissue hypoxia (BTH) was defined as PbtO2 <15 mm Hg. Pressure reactivity index (PRx) and oxygen reactivity index (ORx) were calculated. Results Median age was 59 years, median GCS score 6, and median ICH volume was 37.5 ml. The risk of BTH, and to a lesser extent MC, increased with lower CPP values. Multivariable analyses showed that CPP <80 mm Hg was associated with a greater risk of BTH (OR 1.5, 95% CI 1.1–2.1, P=0.01) compared to CPP >100 mm Hg as a reference range. Six patients died (33%). Survivors had significantly higher CPP and PbtO2 and lower ICP values starting on post-bleed day 4, whereas LPR and PRx values were lower, indicating preservation of aerobic metabolism and pressure autoregulation. Conclusions PbtO2 monitoring can be used to identify CPP targets for optimal brain tissue oxygenation. In patients who do not undergo MMM, maintaining CPP >80 mm Hg may reduce the risk of BTH. PMID:21852615
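The two secondary-injury flags defined above are straightforward to apply to hourly monitoring samples; the sketch below marks metabolic crisis (LPR > 40 with brain glucose < 0.7 mmol/L) and brain tissue hypoxia (PbtO2 < 15 mm Hg). The example rows are synthetic, not patient data.

import numpy as np

# columns: CPP (mm Hg), PbtO2 (mm Hg), LPR, brain glucose (mmol/L)
hours = np.array([
    [92.0, 22.0, 28.0, 1.4],
    [74.0, 12.0, 45.0, 0.5],
    [85.0, 14.0, 38.0, 0.9],
])
cpp, pbto2, lpr, glc = hours.T
metabolic_crisis = (lpr > 40) & (glc < 0.7)
brain_tissue_hypoxia = pbto2 < 15
print(metabolic_crisis, brain_tissue_hypoxia)
print("hours with BTH at CPP < 80 mm Hg:", int(np.sum(brain_tissue_hypoxia & (cpp < 80))))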
Motala, Ayesha A.; Esterhuizen, Tonya; Pirie, Fraser J.; Omar, Mahomed A.K.
2011-01-01
OBJECTIVE To determine the prevalence of metabolic syndrome and to define optimal ethnic-specific waist-circumference cutoff points in a rural South African black community. RESEARCH DESIGN AND METHODS This was a cross-sectional survey conducted by random-cluster sampling of adults aged >15 years. Participants had demographic, anthropometric, and biochemical measurements taken, including a 75-g oral glucose tolerance test. Metabolic syndrome was defined using the 2009 Joint Interim Statement (JIS) definition. RESULTS Of 947 subjects (758 women) studied, the age-adjusted prevalence of metabolic syndrome was 22.1%, with a higher prevalence in women (25.0%) than in men (10.5%). Peak prevalence was in the oldest age-group (≥65 years) in women (44.2%) and in the 45- to 54-year age-group in men (25.0%). The optimal waist circumference cutoff point to predict the presence of at least two other components of the metabolic syndrome was 86 cm for men and 92 cm for women. The crude prevalence of metabolic syndrome was higher with the JIS definition (26.5%) than with the International Diabetes Federation (IDF) (23.3%) or the modified Third Report of the National Cholesterol Education Program Adult Treatment Panel (ATPIII) (18.5%) criteria; there was very good agreement with the IDF definition (κ = 0.90 [95% CI 0.87–0.94]) and good concordance with ATPIII criteria (0.77 [0.72–0.82]). CONCLUSIONS There is a high prevalence of metabolic syndrome, especially in women, suggesting that this community, unlike other rural communities in Africa, already has entered the epidemic of metabolic syndrome. Waist circumference cutoff points differ from those currently recommended for Africans. PMID:21330644
When is a research question not a research question?
Mayo, Nancy E; Asano, Miho; Barbic, Skye Pamela
2013-06-01
Research is undertaken to answer important questions yet often the question is poorly expressed and lacks information on the population, the exposure or intervention, the comparison, and the outcome. An optimal research question sets out what the investigator wants to know, not what the investigator might do, nor what the results of the study might ultimately contribute. The purpose of this paper is to estimate the extent to which rehabilitation scientists optimally define their research questions. A cross-sectional survey of the rehabilitation research articles published during 2008. Two raters independently rated each question according to pre-specified criteria; a third rater adjudicated all discrepant ratings. The proportion of the 258 articles with a question formulated as methods or expected contribution and not as what knowledge was being sought was 65%; 30% of questions required reworking. The designs which most often had poorly formulated research questions were randomized trials, cross-sectional and measurement studies. Formulating the research question is not purely a semantic concern. When the question is poorly formulated, the design, analysis, sample size calculations, and presentation of results may not be optimal. The gap between research and clinical practice could be bridged by a clear, complete, and informative research question.
Kamiura, Moto; Sano, Kohei
2017-10-01
The principle of optimism in the face of uncertainty is a well-known heuristic in sequential decision-making problems. The Overtaking method, which is based on this principle, is an effective algorithm for solving multi-armed bandit problems; in a previous study it was defined through a set of heuristic formulation patterns. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method is associated with upper bounds of confidence intervals of expected rewards based on statistics. The unification of the formulation enhances the universality of the Overtaking method. Consequently, we newly obtain the Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that, in the context of multi-armed bandit problems, the principle of optimism in the face of uncertainty should be regarded not as a heuristic but as the statistics-based consequence of the law of large numbers for the sample mean of rewards and of the estimation of upper bounds of expected rewards. Copyright © 2017 Elsevier B.V. All rights reserved.
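For context, the sketch below runs the standard UCB1 baseline mentioned above on a bandit with exponentially distributed rewards; the Overtaking method's own value functions follow the paper's redefinition and are not reproduced here. The arm means, horizon, and exploration constant are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(6)
means = np.array([1.0, 1.5, 2.5])            # expected rewards of the arms (assumed)
K, T = len(means), 5000
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    if t <= K:                               # play each arm once to initialize
        arm = t - 1
    else:
        # Optimism in the face of uncertainty: sample mean plus confidence pad.
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = rng.exponential(means[arm])
    counts[arm] += 1
    sums[arm] += reward

regret = means.max() * T - np.sum(sums)      # gap vs. always playing the best arm (approx.)
print(counts, f"approx. regret = {regret:.1f}")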
European LeukemiaNet recommendations for the management of chronic myeloid leukemia: 2013
Deininger, Michael W.; Rosti, Gianantonio; Hochhaus, Andreas; Soverini, Simona; Apperley, Jane F.; Cervantes, Francisco; Clark, Richard E.; Cortes, Jorge E.; Guilhot, François; Hjorth-Hansen, Henrik; Hughes, Timothy P.; Kantarjian, Hagop M.; Kim, Dong-Wook; Larson, Richard A.; Lipton, Jeffrey H.; Mahon, François-Xavier; Martinelli, Giovanni; Mayer, Jiri; Müller, Martin C.; Niederwieser, Dietger; Pane, Fabrizio; Radich, Jerald P.; Rousselot, Philippe; Saglio, Giuseppe; Saußele, Susanne; Schiffer, Charles; Silver, Richard; Simonsson, Bengt; Steegmann, Juan-Luis; Goldman, John M.; Hehlmann, Rüdiger
2013-01-01
Advances in chronic myeloid leukemia treatment, particularly regarding tyrosine kinase inhibitors, mandate regular updating of concepts and management. A European LeukemiaNet expert panel reviewed prior and new studies to update recommendations made in 2009. We recommend as initial treatment imatinib, nilotinib, or dasatinib. Response is assessed with standardized real-time quantitative polymerase chain reaction and/or cytogenetics at 3, 6, and 12 months. BCR-ABL1 transcript levels ≤10% at 3 months, <1% at 6 months, and ≤0.1% from 12 months onward define optimal response, whereas >10% at 6 months and >1% from 12 months onward define failure, mandating a change in treatment. Similarly, partial cytogenetic response (PCyR) at 3 months and complete cytogenetic response (CCyR) from 6 months onward define optimal response, whereas no CyR (Philadelphia chromosome-positive [Ph+] >95%) at 3 months, less than PCyR at 6 months, and less than CCyR from 12 months onward define failure. Between optimal and failure, there is an intermediate warning zone requiring more frequent monitoring. Similar definitions are provided for response to second-line therapy. Specific recommendations are made for patients in the accelerated and blastic phases, and for allogeneic stem cell transplantation. Optimal responders should continue therapy indefinitely, with careful surveillance, or they can be enrolled in controlled studies of treatment discontinuation once a deeper molecular response is achieved. PMID:23803709
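The molecular (BCR-ABL1 transcript) milestones quoted above translate directly into a small classifier, sketched below; the parallel cytogenetic criteria and the second-line definitions are not encoded, and anything between optimal and failure is simply labeled "warning", per the intermediate zone described in the abstract.

def molecular_response(month, bcr_abl1_pct):
    if month < 3:
        return "too early to classify"
    if month < 6:                              # 3-month assessment
        return "optimal" if bcr_abl1_pct <= 10 else "warning"
    if month < 12:                             # 6-month assessment
        if bcr_abl1_pct < 1:
            return "optimal"
        return "failure" if bcr_abl1_pct > 10 else "warning"
    # 12 months onward
    if bcr_abl1_pct <= 0.1:
        return "optimal"
    return "failure" if bcr_abl1_pct > 1 else "warning"

for month, level in [(3, 4.2), (6, 12.0), (12, 0.05), (18, 0.6)]:
    print(month, level, molecular_response(month, level))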
Solving quantum optimal control problems using Clebsch variables and Lin constraints
NASA Astrophysics Data System (ADS)
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of Lagrange's multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. A new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional) is thus obtained. One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model, and a controlled one-dimensional quantum harmonic oscillator.
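The construction above is analytic; as a purely numerical point of comparison, the sketch below optimizes a piecewise-constant control for a single spin (an affine-controlled two-level system) by direct minimization of the infidelity. It is a generic brute-force baseline rather than the Clebsch-variable procedure, and the drift/control Hamiltonians, time grid, and target state are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Pauli matrices for a single spin-1/2 (two-level system).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0, Hc = 0.5 * sz, 0.5 * sx              # drift and control Hamiltonians (arbitrary)
psi0 = np.array([1, 0], dtype=complex)   # initial state |0>
target = np.array([0, 1], dtype=complex) # target state |1> (a spin flip)
n_steps, dt = 10, 0.3                    # piecewise-constant control grid

def infidelity(u):
    """1 - |<target|U(u)|psi0>|^2 for the piecewise-constant control sequence u."""
    psi = psi0.copy()
    for uk in u:
        psi = expm(-1j * (H0 + uk * Hc) * dt) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

res = minimize(infidelity, x0=np.full(n_steps, 0.5), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10, "xatol": 1e-8})
print("optimized controls:", np.round(res.x, 3))
print("final infidelity:", res.fun)
```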
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
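A minimal PSO sketch in the spirit of the planner described above: particles search for a single intermediate waypoint that shortens the start-to-goal path while avoiding a circular hazard. The cost function, obstacle, penalty weight, and PSO coefficients are hypothetical stand-ins for the mission-specific terms a real planner would use.

```python
import numpy as np

rng = np.random.default_rng(1)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacle, radius = np.array([5.0, 5.0]), 2.0     # one circular hazard (toy)

def cost(wp):
    """Path length start->waypoint->goal plus a penalty if the waypoint or a
    leg midpoint falls inside the obstacle."""
    length = np.linalg.norm(wp - start) + np.linalg.norm(goal - wp)
    probes = [wp, (start + wp) / 2, (wp + goal) / 2]
    penalty = sum(max(0.0, radius - np.linalg.norm(p - obstacle)) for p in probes)
    return length + 50.0 * penalty

n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5     # swarm size and PSO coefficients
x = rng.uniform(-2, 12, size=(n, 2))             # particle positions
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
g = pbest[np.argmin(pbest_cost)]                 # global best waypoint so far

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    g = pbest[np.argmin(pbest_cost)]

print("best waypoint:", g, "cost:", cost(g))
```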
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, that is, defining the optimum number and location of additional boreholes. Quite a lot of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function for uncertainty assessment, and the problem is then solved through optimization methods. Although the use of the kriging variance in the objective function definition is known to have many advantages, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering the local variability in boundary uncertainty assessment, the application of combined variance is investigated to define the objective function. In order to verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through the implementation of metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the new changes imposed on the objective function have made the algorithm output sensitive to variations of grade, the domain's boundaries and the thickness of the mineralization domain. The comparison between the results of different optimization algorithms proved that, for the presented case, the application of particle swarm optimization is more appropriate than simulated annealing.
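To make the simulated-annealing side of such a search concrete, the sketch below anneals the positions of a handful of additional sample locations. Because a full kriging (or combined) variance is beyond a short example, the objective is a simple space-filling proxy (mean distance from grid nodes to the nearest borehole), and all coordinates, counts, and cooling settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Existing boreholes and a grid of candidate locations (toy coordinates).
existing = rng.uniform(0, 100, size=(15, 2))
candidates = np.array([[x, y] for x in range(0, 101, 5) for y in range(0, 101, 5)], float)
grid = candidates           # uncertainty proxy is evaluated on the same grid
k = 5                       # number of additional boreholes to place

def objective(idx):
    """Mean distance from every grid node to its nearest borehole; a crude
    stand-in for average estimation uncertainty (smaller is better)."""
    pts = np.vstack([existing, candidates[list(idx)]])
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

current = set(int(i) for i in rng.choice(len(candidates), size=k, replace=False))
current_val = objective(current)
best, best_val = set(current), current_val
T = 1.0
for step in range(2000):
    T *= 0.998                                   # geometric cooling schedule
    prop = set(current)
    prop.remove(int(rng.choice(list(prop))))     # move one borehole...
    new = int(rng.integers(len(candidates)))     # ...to a random free candidate
    while new in prop:
        new = int(rng.integers(len(candidates)))
    prop.add(new)
    prop_val = objective(prop)
    if prop_val < current_val or rng.random() < np.exp((current_val - prop_val) / T):
        current, current_val = prop, prop_val
        if current_val < best_val:
            best, best_val = set(current), current_val

print("chosen additional locations:\n", candidates[sorted(best)])
```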
Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails
NASA Astrophysics Data System (ADS)
Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.
2017-02-01
An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer to strongly excite a guided wave mode with energy concentrated in the web (web mode) of a rail is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy as more than 500 analyses were performed. The optimal design was then manufactured and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
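The response-surface workflow described above (design of experiments, surrogate fit, Nelder-Mead search) can be sketched generically as follows; the three "transducer dimensions", the stand-in objective, and the sample size are placeholders for the SAFE-3D analyses used in the study.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Stand-in for an expensive SAFE-3D run: returns "energy in the web mode"
# for three transducer dimensions scaled to [0, 1] (purely illustrative).
def expensive_model(x):
    return -np.sum((x - np.array([0.3, 0.6, 0.5])) ** 2) + 1.0

# 1) Latin hypercube design of experiments.
sampler = qmc.LatinHypercube(d=3, seed=0)
X = sampler.random(n=60)
y = np.array([expensive_model(x) for x in X])

# 2) Radial basis function response surface fitted to the samples.
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# 3) Nelder-Mead search on the surrogate (maximize energy = minimize -energy).
res = minimize(lambda x: -surrogate(x.reshape(1, -1))[0], x0=np.full(3, 0.5),
               method="Nelder-Mead", bounds=[(0.0, 1.0)] * 3)
print("surrogate optimum:", res.x, "verified value:", expensive_model(res.x))
```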
Computing the Partition Function for Kinetically Trapped RNA Secondary Structures
Lorenz, William A.; Clote, Peter
2011-01-01
An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in O(n³) time and O(n²) space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
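As a toy illustration of the expected-cost figure of merit, the sketch below estimates E[best-of-n objective] + n × (cost per call) for a randomized solver and picks the number of calls that minimizes it. The solver, its outcome distribution, and the cost per call are invented for the example and are not the benchmarks reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def solver_run():
    """One call to a randomized solver: returns the objective value it found.
    A toy heavy-tailed distribution stands in for e.g. simulated annealing."""
    return rng.lognormal(mean=0.0, sigma=1.0)

cost_per_call = 0.05                   # hypothetical cost of one solver call
samples = np.array([solver_run() for _ in range(20000)])

def expected_total_cost(n, n_mc=5000):
    """Monte Carlo estimate of E[best objective after n calls] + n * cost_per_call."""
    draws = rng.choice(samples, size=(n_mc, n), replace=True)
    return draws.min(axis=1).mean() + n * cost_per_call

costs = {n: expected_total_cost(n) for n in range(1, 41)}
n_star = min(costs, key=costs.get)
print("optimal number of calls:", n_star, "expected total cost:", costs[n_star])
```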
A Bayesian Sampler for Optimization of Protein Domain Hierarchies
2014-01-01
Abstract The process of identifying and modeling functionally divergent subgroups for a specific protein domain class and arranging these subgroups hierarchically has, thus far, largely been done via manual curation. How to accomplish this automatically and optimally is an unsolved statistical and algorithmic problem that is addressed here via Markov chain Monte Carlo sampling. Taking as input a (typically very large) multiple-sequence alignment, the sampler creates and optimizes a hierarchy by adding and deleting leaf nodes, by moving nodes and subtrees up and down the hierarchy, by inserting or deleting internal nodes, and by redefining the sequences and conserved patterns associated with each node. All such operations are based on a probability distribution that models the conserved and divergent patterns defining each subgroup. When we view these patterns as sequence determinants of protein function, each node or subtree in such a hierarchy corresponds to a subgroup of sequences with similar biological properties. The sampler can be applied either de novo or to an existing hierarchy. When applied to 60 protein domains from multiple starting points in this way, it converged on similar solutions with nearly identical log-likelihood ratio scores, suggesting that it typically finds the optimal peak in the posterior probability distribution. Similarities and differences between independently generated, nearly optimal hierarchies for a given domain help distinguish robust from statistically uncertain features. Thus, a future application of the sampler is to provide confidence measures for various features of a domain hierarchy. PMID:24494927
Faheem, Muhammad; Heyden, Andreas
2014-08-12
We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt (111) model surface.
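The free energy difference between adjacent interpolated states in such a scheme is typically obtained with an exponential-averaging (Zwanzig) estimator. The sketch below shows that estimator on synthetic energy differences; the temperature, window count, and energy distributions are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

def fep_free_energy(delta_U):
    """Zwanzig (exponential averaging) estimator:
    dA = -kT * ln < exp(-dU / kT) >_0, averaged over reference-state samples."""
    return -kT * np.log(np.mean(np.exp(-np.asarray(delta_U) / kT)))

# Toy energy differences between two adjacent interpolated states (kcal/mol),
# as would be collected from a fixed MM conformational ensemble.
rng = np.random.default_rng(4)
delta_U = rng.normal(loc=1.2, scale=0.6, size=2000)
print("estimated dA between adjacent states:", fep_free_energy(delta_U), "kcal/mol")

# A full reaction path is then the sum of per-window estimates:
windows = [rng.normal(1.2, 0.6, 2000) for _ in range(5)]
print("total dA along the path:", sum(fep_free_energy(w) for w in windows), "kcal/mol")
```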
Enhanced solid-phase recombinase polymerase amplification and electrochemical detection.
Del Río, Jonathan Sabaté; Lobato, Ivan Magriñà; Mayboroda, Olena; Katakis, Ioanis; O'Sullivan, Ciara K
2017-05-01
Recombinase polymerase amplification (RPA) is an elegant method for the rapid, isothermal amplification of nucleic acids. Here, we elucidate the optimal surface chemistry for rapid and efficient solid-phase RPA, which was fine-tuned in order to obtain a maximum signal-to-noise ratio, defining the optimal DNA probe density, probe-to-lateral-spacer ratio (1:0, 1:1, 1:10 and 1:100) and length of the vertical spacer of the probe, as well as investigating the effect of different types of lateral spacers. The use of different labelling strategies was also examined in order to reduce the number of steps required for the analysis, using biotin or horseradish peroxidase-labelled reverse primers. Optimisation of the amplification temperature and the use of surface blocking agents were also pursued. The combination of these changes facilitated a significantly more rapid amplification and detection protocol, with a lowered limit of detection (LOD) of 1 × 10⁻¹⁵ M. The optimised protocol was applied to the detection of Francisella tularensis in real samples from hares; a clear correlation with PCR and qPCR results was observed, and the solid-phase RPA was demonstrated to be capable of detecting 500 fM target DNA in real samples. Graphical abstract: Relative size of thiolated lateral spacers tested versus the primer and the UvsX recombinase protein.
Parent driver characteristics associated with sub-optimal restraint of child passengers.
Winston, Flaura K; Chen, Irene G; Smith, Rebecca; Elliott, Michael R
2006-12-01
To identify parent driver demographic and socioeconomic characteristics associated with the use of sub-optimal restraints for child passengers under nine years. Cross-sectional study using in-depth, validated telephone interviews with parent drivers in a probability sample of 3,818 vehicle crashes involving 5,146 children. Sub-optimal restraint was defined as use of forward-facing child safety seats for infants under one or weighing under 20 lbs, and any seat-belt use for children under 9. Sub-optimal restraint was more common among children under one and between four and eight years than among children aged one to three years (18%, 65%, and 5%, respectively). For children under nine, independent risk factors for sub-optimal restraint were: non-Hispanic black parent drivers (with non-Hispanic white parents as reference, adjusted relative risk, adjusted RR = 1.24, 95% CI: 1.09-1.41); less educated parents (with college graduate or above as reference: high school, adjusted RR = 1.27, 95% CI: 1.12-1.44; less than high school graduate, adjusted RR = 1.36, 95% CI: 1.13-1.63); and lower family income (with $50,000 or more as reference: <$20,000, adjusted RR = 1.23, 95% CI: 1.07-1.40). Multivariate analysis revealed the following independent risk factors for sub-optimal restraint among four-to-eight-year-olds: older parent age, limited education, black race, and income below $20,000. Parents with low educational levels or of non-Hispanic black background may require additional anticipatory guidance regarding child passenger safety. The importance of poverty in predicting sub-optimal restraint underscores the importance of child restraint and booster seat disbursement and education programs, potentially through Medicaid.
NASA Astrophysics Data System (ADS)
Aslamazashvili, Zurab; Tavadze, Giorgi; Chikhradze, Mikheil; Namicheishvili, Teimuraz; Melashvili, Zaqaria
2017-12-01
For the production of materials by the proposed Self-propagating High-Temperature Synthesis (SHS) - Electric Rolling method, there are no limitations on the length of the material, and the width depends only on the length of the rolls. The innovative method enables the process to be carried out in a nonstop regime, which is made possible by merging the energy-consuming SHS method with electric rolling. To realize the process it is necessary and sufficient that the initial components, after initiation by a thermal pulse, interact with heat emission, which itself ensures the self-propagation of the synthesis front by heat transfer through the whole sample. Just after that, the rolls instantly start rotating at a set speed to ensure the motion of the material; this speed should be equal to the propagation speed of the synthesis front. The synthesized product, in a hot plastic condition, is delivered to the rolls in a nonstop regime while current is simultaneously supplied in the deformation zone in order to compensate for energy losses. As a result, using the innovative SHS - Electric Rolling technology, a long-dimensional metal-ceramic product is obtained. In the present paper, optimal compositions of SHS charges were selected in the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems. For the selection of the compositions, a thermodynamic analysis was carried out, which made it possible to determine the adiabatic temperature of synthesis theoretically and to determine the equilibrium concentrations of the synthesized product at the synthesis temperature. The thermodynamic analysis also made it possible to determine the optimal compositions of the charges and to define the conditions that are important for correct realization of the synthesis process. To obtain non-porous materials and products by SHS - Electric Rolling, it is necessary to select the synthesis and compacting parameters correctly; these parameters are the pressure and the time. In the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems, the high quality (non-porous or low porosity, <2%) of the materials and products depends directly on the liquid-phase content just after the passing of the synthesis front in the sample: the higher the liquid-phase content, the higher the quality of the material. The liquid-phase content itself depends on the synthesis parameters, namely the speed and temperature of synthesis; the higher the speed and temperature of synthesis, the higher the content of liquid phase that is formed. The speed and temperature of synthesis depend on the relative density Δρ of the sample formed from the initial charge, that is, on the pressure used to form the sample. The paper describes the results of the determination of the optimal pressures in the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems; their values are defined as 50-70 MPa, 180-220 MPa and 45-70 MPa, respectively.
Perspectives of flax processing wastes in building materials production
NASA Astrophysics Data System (ADS)
Smirnova, Olga
2017-01-01
The paper discusses the possibility of using flax boon for thermal insulation materials. A solution for the systematization of materials based on flax boon is suggested. It is based on the principle of producing building materials from flax waste with different types of binders. The purpose of the research is to obtain heat-insulating materials with different structures based on an agricultural production waste, flax boon, with mineral and organic binders. The composition and properties of the organic filler, flax boon, are defined using infrared spectroscopy and standard techniques. Using the method of multivariate analysis, the optimal ratios of flax boon and binders for the production of pressed, porous and granular materials are determined. The effect of the particle size distribution of flax boon on the strength of samples with different compositions is studied. As a result, optimized compositions of pressed, porous and granular materials based on flax boon are obtained. Data on the physical and mechanical properties of these materials are given in the paper.
Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations
Radak, Brian K.; Roux, Benoît
2016-10-07
Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
Influence of season and frequency of ejaculation on production of stallion semen for freezing.
Magistrini, M; Chanteloube, P; Palmer, E
1987-01-01
In an attempt to define the optimal season and ejaculation frequency for frozen semen, semen was collected from 6 stallions (3 horses and 3 ponies) 3 times per week or every day, alternating every week, for 1 year. The semen was evaluated and frozen. All the samples were thawed at the end of the experiment. At collection, fresh semen evaluations showed that winter (as opposed to spring and summer) was associated with low sexual behaviour, small volumes of spermatozoa and gel, high sperm concentration and lower motility. The high ejaculation frequency yielded a decreased volume and concentration of spermatozoa in the ejaculate and slightly improved motility. The quality of thawed semen was analysed by video and microscope estimations for motility and by two staining methods for vitality. No variation was observed according to ejaculation frequency; the best freezability was obtained in winter, but the difference was small compared with between-stallion variability, and optimization of frequency and season did not change a 'bad freezer' into a good one.
A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks
Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong
2015-01-01
This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, joint optimization of flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
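A stripped-down version of the selection part of such a model can be written as a small ILP. The sketch below (using the PuLP modelling library as an assumed dependency) chooses polling switches so that every flow is observed at least once at minimum cost, with routing taken as fixed; the flows, routes, and costs are toy data, and the model omits the routing variables of the full formulation.

```python
import pulp

# Toy instance: fixed routes (so routing is not optimized here) and a per-switch
# polling cost; choose polling switches so every flow is observed at least once.
flows = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s3", "s4"], "f4": ["s1", "s4"]}
cost = {"s1": 3, "s2": 2, "s3": 2, "s4": 3}

prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"poll_{s}", cat="Binary") for s in cost}

prob += pulp.lpSum(cost[s] * x[s] for s in cost)            # total polling cost
for f, route in flows.items():                              # coverage constraints
    prob += pulp.lpSum(x[s] for s in route) >= 1, f"cover_{f}"

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("selected switches:", [s for s in cost if x[s].value() == 1])
print("total cost:", pulp.value(prob.objective))
```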
Optimal resource states for local state discrimination
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael
2018-02-01
We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.
Toledo, Jon B.; Van Deerlin, Vivianna M.; Lee, Edward B.; Suh, EunRan; Baek, Young; Robinson, John L.; Xie, Sharon X.; McBride, Jennifer; Wood, Elisabeth M.; Schuck, Theresa; Irwin, David J.; Gross, Rachel G.; Hurtig, Howard; McCluskey, Leo; Elman, Lauren; Karlawish, Jason; Schellenberg, Gerard; Chen-Plotkin, Alice; Wolk, David; Grossman, Murray; Arnold, Steven E.; Shaw, Leslie M.; Lee, Virginia M.-Y.; Trojanowski, John Q.
2014-01-01
Neurodegenerative diseases (NDs) are defined by the accumulation of abnormal protein deposits in the central nervous system (CNS), and only neuropathological examination enables a definitive diagnosis. Brain banks and their associated scientific programs have shaped the actual knowledge of NDs, identifying and characterizing the CNS deposits that define new diseases, formulating staging schemes, and establishing correlations between neuropathological changes and clinical features. However, brain banks have evolved to accommodate the banking of biofluids as well as DNA and RNA samples. Moreover, the value of biobanks is greatly enhanced if they link all the multidimensional clinical and laboratory information of each case, which is accomplished, optimally, using systematic and standardized operating procedures, and in the framework of multidisciplinary teams with the support of a flexible and user-friendly database system that facilitates the sharing of information of all the teams in the network. We describe a biobanking system that is a platform for discovery research at the Center for Neurodegenerative Disease Research at the University of Pennsylvania. PMID:23978324
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
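For a concrete sense of how a Fisher-information-based design criterion is evaluated, the sketch below searches for D-optimal sampling times for the Verhulst-Pearl logistic model mentioned above, using finite-difference sensitivities and a global optimizer. The nominal parameter values, error variance, and number of samples are arbitrary, and the sketch does not implement the Prohorov-metric framework or the SE-optimal criterion of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

K_true, r_true, x0 = 17.5, 0.7, 0.1   # nominal parameter values (illustrative)

def logistic(t, K, r):
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def fim_logdet(times, theta=(K_true, r_true), h=1e-6, sigma=1.0):
    """log det of the Fisher information for (K, r) at the given sampling times,
    with sensitivities obtained by central finite differences."""
    times = np.sort(np.asarray(times, float))
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = list(theta), list(theta)
        tp[j] += h
        tm[j] -= h
        S[:, j] = (logistic(times, *tp) - logistic(times, *tm)) / (2 * h)
    F = S.T @ S / sigma**2
    sign, logdet = np.linalg.slogdet(F)
    return logdet if sign > 0 else -np.inf

n_samples = 4
res = differential_evolution(lambda t: -fim_logdet(t),
                             bounds=[(0.0, 25.0)] * n_samples, seed=0, tol=1e-8)
print("D-optimal sampling times:", np.sort(res.x))
```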
OLTARIS: An Efficient Web-Based Tool for Analyzing Materials Exposed to Space Radiation
NASA Technical Reports Server (NTRS)
Slaba, Tony; McMullen, Amelia M.; Thibeault, Sheila A.; Sandridge, Chris A.; Clowdsley, Martha S.; Blatting, Steve R.
2011-01-01
The near-Earth space radiation environment includes energetic galactic cosmic rays (GCR), high intensity proton and electron belts, and the potential for solar particle events (SPE). These sources may penetrate shielding materials and deposit significant energy in sensitive electronic devices on board spacecraft and satellites. Material and design optimization methods may be used to reduce the exposure and extend the operational lifetime of individual components and systems. Since laboratory experiments are expensive and may not cover the range of particles and energies relevant for space applications, such optimization may be done computationally with efficient algorithms that include the various constraints placed on the component, system, or mission. In the present work, the web-based tool OLTARIS (On-Line Tool for the Assessment of Radiation in Space) is presented, and the applicability of the tool for rapidly analyzing exposure levels within either complicated shielding geometries or user-defined material slabs exposed to space radiation is demonstrated. An example approach for material optimization is also presented. Slabs of various advanced multifunctional materials are defined and exposed to several space radiation environments. The materials and thicknesses defining each layer in the slab are then systematically adjusted to arrive at an optimal slab configuration.
What is value—accumulated reward or evidence?
Friston, Karl; Adams, Rick; Montague, Read
2012-01-01
Why are you reading this abstract? In some sense, your answer will cast the exercise as valuable—but what is value? In what follows, we suggest that value is evidence or, more exactly, log Bayesian evidence. This implies that a sufficient explanation for valuable behavior is the accumulation of evidence for internal models of our world. This contrasts with normative models of optimal control and reinforcement learning, which assume the existence of a value function that explains behavior, where (somewhat tautologically) behavior maximizes value. In this paper, we consider an alternative formulation—active inference—that replaces policies in normative models with prior beliefs about the (future) states agents should occupy. This enables optimal behavior to be cast purely in terms of inference: where agents sample their sensorium to maximize the evidence for their generative model of hidden states in the world, and minimize their uncertainty about those states. Crucially, this formulation resolves the tautology inherent in normative models and allows one to consider how prior beliefs are themselves optimized in a hierarchical setting. We illustrate these points by showing that any optimal policy can be specified with prior beliefs in the context of Bayesian inference. We then show how these prior beliefs are themselves prescribed by an imperative to minimize uncertainty. This formulation explains the saccadic eye movements required to read this text and defines the value of the visual sensations you are soliciting. PMID:23133414
Xu, Hongyi; Li, Yang; Zeng, Danielle
2017-01-02
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compounds (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representation volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For multi-layer UD structures, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
Pervious concrete mix optimization for sustainable pavement solution
NASA Astrophysics Data System (ADS)
Barišić, Ivana; Galić, Mario; Netinger Grubeša, Ivanka
2017-10-01
In order to fulfill the requirements of sustainable road construction, new materials for pavement construction are investigated with the main goals of preserving natural resources and achieving energy savings. One such sustainable pavement material is pervious concrete, a new solution for low-volume pavements. To accommodate the required strength and porosity, the latter as the measure of adequate drainage capability, four mixtures of pervious concrete are investigated and results of laboratory tests of compressive and flexural strength and porosity are presented. For defining the optimal pervious concrete mixture in view of aggregate and financial savings, an optimization model is utilized and optimal mixtures are defined according to the required strength and porosity characteristics. The results of the laboratory research showed that, comparing single-sized aggregate pervious concrete mixtures, coarse aggregate mixtures result in increased porosity but reduced strengths. The optimal share of coarse aggregate turned out to be 40.21% and the share of fine aggregate 49.79%, achieving the required compressive strength of 25 MPa, flexural strength of 4.31 MPa and porosity of 21.66%.
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
NASA Astrophysics Data System (ADS)
Hamza, Karim; Shalaby, Mohamed
2014-09-01
This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as to allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
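The expected-improvement machinery underlying EGO can be sketched in a few lines. The example below fits a Gaussian-process surrogate to a toy one-dimensional objective and picks several well-separated high-EI infill points for parallel evaluation; the objective, grid, and separation rule are placeholders, not the infill strategy proposed in the article.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive(x):                      # stand-in for a crash simulation response
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(8, 1))     # initial design samples
y = expensive(X).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

def expected_improvement(x_grid):
    """EI for minimization, based on the GP posterior mean and standard deviation."""
    mu, sd = gp.predict(x_grid, return_std=True)
    best = y.min()
    sd = np.maximum(sd, 1e-12)
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Pick several infill points at once (for parallel evaluation) by taking the
# highest-EI grid points that are not too close to one another.
grid = np.linspace(0, 5, 501).reshape(-1, 1)
ei = expected_improvement(grid)
order = np.argsort(-ei)
infill, min_gap = [], 0.25
for i in order:
    if all(abs(grid[i, 0] - p) > min_gap for p in infill):
        infill.append(grid[i, 0])
    if len(infill) == 3:
        break
print("infill sample locations:", infill)
```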
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu
2017-01-01
The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated based on a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI of the non-specific binding concentration (the reference region) and calculate the SBR appropriately. Therefore, we sought a new method for determining the reference region when calculating the SBR. We used data from 20 patients who had undergone DAT imaging in our hospital to calculate the non-specific binding concentration by the following methods: the threshold defining the reference region was fixed at specific values (the fixing method), or the reference region was visually optimized by an examiner at every examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward we quantitatively compared the SBR calculated based on each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The values of SBR showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.
Minimal Clinically Important Difference of Berg Balance Scale in People With Multiple Sclerosis.
Gervasoni, Elisa; Jonsdottir, Johanna; Montesano, Angelo; Cattaneo, Davide
2017-02-01
To identify the minimal clinically important difference (MCID) to define clinically meaningful patient's improvement on the Berg Balance Scale (BBS) in people with multiple sclerosis (PwMS) in response to rehabilitation. Cohort study. Neurorehabilitation institute. PwMS (N=110). This study comprised inpatients and outpatients who participated in research on balance and gait rehabilitation. All received 20 rehabilitation sessions with different intensities. Inpatients received daily treatments over a period of 4 weeks, while outpatients received 2 to 3 treatments per week for 10 weeks. An anchor-based approach using clinical global impression of improvement in balance (Activities-specific Balance Confidence [ABC] Scale) was used to determine the MCID of the BBS. The MCID was defined as the minimum change in the BBS total score (postintervention - preintervention) that was needed to perceive at least a 10% improvement on the ABC Scale. Receiver operating characteristic curves were used to define the cutoff of the optimal MCID of the BBS discriminating between improved and not improved subjects. The MCID for change on the BBS was 3 points for the whole sample, 3 points for the inpatients, and 2 points for the outpatients. The area under the curve was .65 for the whole sample, .64 for inpatients, and .68 for outpatients. The MCID for improvement in balance as measured by the BBS was 3 points, meaning that PwMS are likely to perceive that as a reproducible and clinically important change in their balance performance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
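One common way to turn an anchor-based comparison into a cutoff is to take the ROC point maximizing the Youden index. The sketch below does this on simulated BBS change scores and a binary improvement anchor; the simulated distributions and sample size are illustrative only and are not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
# Simulated data: BBS change scores and the anchor (1 = patient-perceived
# improvement of at least 10% on the ABC Scale); values are illustrative only.
improved = rng.binomial(1, 0.5, size=110)
bbs_change = np.where(improved == 1,
                      rng.normal(4.0, 2.5, size=110),
                      rng.normal(1.0, 2.5, size=110)).round()

fpr, tpr, thresholds = roc_curve(improved, bbs_change)
youden = tpr - fpr                       # Youden index at each candidate cutoff
best = np.argmax(youden)
print("estimated MCID cutoff (BBS points):", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```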
Yoon, Jong Lull; Cho, Jung Jin; Park, Kyung Mi; Noh, Hye Mi; Park, Yong Soon
2015-02-01
Associations between body mass index (BMI), body fat percentage (BF%), and health risks differ between Asian and European populations. BMI is commonly used to diagnose obesity; however, its accuracy in detecting adiposity in Koreans is unknown. The present cross-sectional study aimed at assessing the accuracy of BMI in determining BF%-defined obesity in 6,017 subjects (age 20-69 yr, 43.6% men) from the 2009 Korean National Health and Nutrition Examination Survey. We assessed the diagnostic performance of BMI using the Western Pacific Regional Office of World Health Organization reference standard for BF%-defined obesity by sex and age and identified the optimal BMI cut-off for BF%-defined obesity using receiver operating characteristic curve analysis. BMI-defined obesity (≥25 kg/m²) was observed in 38.7% of men and 28.1% of women, with a high specificity (89%, men; 84%, women) but poor sensitivity (56%, men; 72%, women) for BF%-defined obesity (25.2%, men; 31.1%, women). The optimal BMI cut-off (24.2 kg/m²) had 78% sensitivity and 71% specificity. BMI demonstrated limited diagnostic accuracy for adiposity in Korea. There was a -1.3 kg/m² difference in optimal BMI cut-offs between Korea and America, smaller than the 5-unit difference between the Western Pacific Regional Office and global World Health Organization obesity criteria.
Determining optimal gestational weight gain in a multiethnic Asian population.
Ee, Tat Xin; Allen, John Carson; Malhotra, Rahul; Koh, Huishan; Østbye, Truls; Tan, Thiam Chye
2014-04-01
To define the optimal gestational weight gain (GWG) for the multiethnic Singaporean population. Data from 1529 live singleton deliveries were analyzed. A multinomial logistic regression analysis, with GWG as the predictor, was conducted to determine the lowest aggregated risk of a composite perinatal outcome, stratified by Asia-specific body mass index (BMI) categories. The composite perinatal outcome, based on a combination of delivery type (cesarean section [CS], vaginal delivery [VD]) and size for gestational age (small [SGA], appropriate [AGA], large [LGA]), had six categories: (i) VD with LGA; (ii) VD with SGA; (iii) CS with AGA; (iv) CS with SGA; (v) CS with LGA; and (vi) VD with AGA. The last was considered the 'normal' reference category. In each BMI category, the GWG value corresponding to the lowest aggregated risk was defined as the optimal GWG, and the GWG values at which the aggregated risk did not exceed a 5% increase from the lowest aggregated risk were defined as the margins of the optimal GWG range. The optimal GWG by pre-pregnancy BMI category was 19.5 kg (range, 12.9 to 23.9) for underweight, 13.7 kg (7.7 to 18.8) for normal weight, 7.9 kg (2.6 to 14.0) for overweight and 1.8 kg (-5.0 to 7.0) for obese. The results of this study, the first to determine optimal GWG in the multiethnic Singaporean population, concur with the Institute of Medicine (IOM) guidelines in that GWG among Asian women who are heavier prior to pregnancy, especially those who are obese, should be lower. However, the optimal GWG for underweight and obese women was outside the IOM recommended range. © 2014 The Authors. Journal of Obstetrics and Gynaecology Research © 2014 Japan Society of Obstetrics and Gynecology.
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
Exact and Optimal Quantum Mechanics/Molecular Mechanics Boundaries.
Sun, Qiming; Chan, Garnet Kin-Lic
2014-09-09
Motivated by recent work in density matrix embedding theory, we define exact link orbitals that capture all quantum mechanical (QM) effects across arbitrary quantum mechanics/molecular mechanics (QM/MM) boundaries. Exact link orbitals are rigorously defined from the full QM solution, and their number is equal to the number of orbitals in the primary QM region. Truncating the exact set yields a smaller set of link orbitals optimal with respect to reproducing the primary region density matrix. We use the optimal link orbitals to obtain insight into the limits of QM/MM boundary treatments. We further analyze the popular general hybrid orbital (GHO) QM/MM boundary across a test suite of molecules. We find that GHOs are often good proxies for the most important optimal link orbital, although there is little detailed correlation between the detailed GHO composition and optimal link orbital valence weights. The optimal theory shows that anions and cations cannot be described by a single link orbital. However, expanding to include the second most important optimal link orbital in the boundary recovers an accurate description. The second optimal link orbital takes the chemically intuitive form of a donor or acceptor orbital for charge redistribution, suggesting that optimal link orbitals can be used as interpretative tools for electron transfer. We further find that two optimal link orbitals are also sufficient for boundaries that cut across double bonds. Finally, we suggest how to construct "approximately" optimal link orbitals for practical QM/MM calculations.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, the existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of objects in a virtual data center. They include: a level distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
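For reference, the classic two-way Karmarkar-Karp (largest differencing) heuristic that the authors generalize can be written in a few lines. The sketch below applies it to toy CPU demands of virtual network functions split across two servers; the load values are invented, and the method shown is the textbook two-set version, not the authors' generalization.

```python
import heapq

def karmarkar_karp(loads):
    """Classic two-set largest-differencing method: repeatedly replace the two
    largest remaining loads by their difference; the final value is the
    imbalance between the two partitions that the heuristic achieves."""
    heap = [-x for x in loads]          # max-heap via negated values
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]

# Toy CPU demands of virtual network functions to be split across two servers.
vnf_loads = [8, 7, 6, 5, 4]
# Prints 2 for this instance; KK is a heuristic, and the optimal split
# (8+7 versus 6+5+4) would achieve an imbalance of 0.
print("residual imbalance:", karmarkar_karp(vnf_loads))
```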
Efficient dynamic optimization of logic programs
NASA Technical Reports Server (NTRS)
Laird, Phil
1992-01-01
A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
In real water distribution networks (WDNs) thousands of nodes are present, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it was originally addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
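To illustrate the modularity-based segmentation idea (though not the paper's sampling-oriented index or its multiobjective search), the sketch below partitions a toy looped network into modules with a standard greedy modularity algorithm and places one pressure meter per module at its highest weighted-degree node; the graph, weights, and meter-placement rule are all illustrative.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy looped network standing in for a small WDN; the edge "weight" plays the
# role of the task-specific pipe weight mentioned above.
G = nx.Graph()
edges = [("n1", "n2", 4), ("n2", "n3", 4), ("n3", "n1", 2),
         ("n3", "n4", 1), ("n4", "n5", 4), ("n5", "n6", 4),
         ("n6", "n4", 2), ("n6", "n7", 1), ("n7", "n8", 4),
         ("n8", "n9", 4), ("n9", "n7", 2)]
G.add_weighted_edges_from(edges)

# Modularity-based segmentation into candidate modules (DMAs).
modules = greedy_modularity_communities(G, weight="weight")

# One pressure meter per module, placed at the node of highest weighted degree
# (a simple surrogate for the "most informative" node in that module).
for i, module in enumerate(modules, 1):
    meter = max(module, key=lambda n: G.degree(n, weight="weight"))
    print(f"module {i}: {sorted(module)} -> meter at {meter}")
```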
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Optimization of propagation-based x-ray phase-contrast tomography for breast cancer imaging
NASA Astrophysics Data System (ADS)
Baran, P.; Pacile, S.; Nesterets, Y. I.; Mayo, S. C.; Dullin, C.; Dreossi, D.; Arfelli, F.; Thompson, D.; Lockie, D.; McCormack, M.; Taba, S. T.; Brun, F.; Pinamonti, M.; Nickson, C.; Hall, C.; Dimmock, M.; Zanconati, F.; Cholewa, M.; Quiney, H.; Brennan, P. C.; Tromba, G.; Gureyev, T. E.
2017-03-01
The aim of this study was to optimise the experimental protocol and data analysis for in-vivo breast cancer x-ray imaging. Results are presented of the experiment at the SYRMEP beamline of Elettra Synchrotron using the propagation-based phase-contrast mammographic tomography method, which incorporates not only absorption, but also x-ray phase information. In this study the images of breast tissue samples, of a size corresponding to a full human breast, with radiologically acceptable x-ray doses were obtained, and the degree of improvement of the image quality (from the diagnostic point of view) achievable using propagation-based phase-contrast image acquisition protocols with proper incorporation of x-ray phase retrieval into the reconstruction pipeline was investigated. Parameters such as the x-ray energy, sample-to-detector distance and data processing methods were tested, evaluated and optimized with respect to the estimated diagnostic value using a mastectomy sample with a malignant lesion. The results of quantitative evaluation of images were obtained by means of radiological assessment carried out by 13 experienced specialists. A comparative analysis was performed between the x-ray and the histological images of the specimen. The results of the analysis indicate that, within the investigated range of parameters, both the objective image quality characteristics and the subjective radiological scores of propagation-based phase-contrast images of breast tissues monotonically increase with the strength of phase contrast which in turn is directly proportional to the product of the radiation wavelength and the sample-to-detector distance. The outcomes of this study serve to define the practical imaging conditions and the CT reconstruction procedures appropriate for low-dose phase-contrast mammographic imaging of live patients at specially designed synchrotron beamlines.
Toward a Developmental Psychology of Sehnsucht (Life Longings): The Optimal (Utopian) Life
ERIC Educational Resources Information Center
Scheibe, Susanne; Freund, Alexandra M.; Baltes, Paul B.
2007-01-01
The topic of an optimal or utopian life has received much attention across the humanities and the arts but not in psychology. The German concept of Sehnsucht captures individual and collective thoughts and feelings about one's optimal or utopian life. Sehnsucht (life longings; LLs) is defined as an intense desire for alternative states and…
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between the two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimized configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (=nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever design and compound, CL/F was well estimated (RSE < 20% for MDZ and <25% for SX) and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since MD required only five sampling times compared to the two UDs, D-optimal sampling times of the MD were included into a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
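The D-optimality criterion mentioned above (maximizing the determinant of the population Fisher information matrix) can be sketched for a much simpler setting. The code below assumes a hypothetical one-compartment IV-bolus model with two parameters (CL and V), finite-difference sensitivities, and a coarse grid search over four-point designs; none of the values correspond to SX, MDZ, or PopDes.

```python
import numpy as np
from itertools import combinations

def conc(t, cl, v, dose=100.0):
    # One-compartment IV-bolus model: C(t) = (dose / V) * exp(-(CL / V) * t)
    return (dose / v) * np.exp(-(cl / v) * t)

def fisher_information(times, cl=5.0, v=50.0, sigma=0.1, h=1e-4):
    # Sensitivity matrix dC/dtheta by central finite differences.
    J = np.empty((len(times), 2))
    for j, (lo, hi) in enumerate([((cl - h, v), (cl + h, v)),
                                  ((cl, v - h), (cl, v + h))]):
        J[:, j] = (conc(times, *hi) - conc(times, *lo)) / (2 * h)
    return J.T @ J / sigma**2            # FIM for additive Gaussian error

candidates = np.arange(0.25, 24.25, 0.25)            # feasible sampling times (h)
best = max(combinations(candidates[::4], 4),          # coarse grid of 4-point designs
           key=lambda ts: np.linalg.det(fisher_information(np.array(ts))))
print("approximately D-optimal times (h):", best)
```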
Less-Complex Method of Classifying MPSK
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2006-01-01
An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method.
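A rough numerical sketch of the approximation described above, assuming unit-amplitude MPSK in complex additive white Gaussian noise with an unknown carrier phase: the integral over carrier phase in each likelihood function is replaced by an average over L equally spaced phase values, and the two averaged likelihoods are compared. All signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(r, M, sigma2, L=16):
    # Average the conditional likelihood over L equally spaced carrier phases,
    # approximating the integral used by the exact (ML) classifier.
    const = np.exp(2j * np.pi * np.arange(M) / M)          # M-PSK constellation
    total = []
    for theta in 2 * np.pi * np.arange(L) / L:
        s = const * np.exp(1j * theta)                     # rotated constellation
        d2 = np.abs(r[:, None] - s[None, :])**2            # |r_k - s_m|^2
        # log of (1/M) * sum_m exp(-d2/sigma2), accumulated over the N samples
        per_sample = np.logaddexp.reduce(-d2 / sigma2, axis=1) - np.log(M)
        total.append(per_sample.sum())
    return np.logaddexp.reduce(total) - np.log(L)          # average over the L phases

# Simulate N noisy QPSK samples and classify between M = 4 and M' = 8.
N, sigma2 = 100, 0.2
symbols = np.exp(2j * np.pi * rng.integers(0, 4, N) / 4) * np.exp(1j * 0.3)
r = symbols + np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
scores = {M: log_likelihood(r, M, sigma2) for M in (4, 8)}
print("decision:", max(scores, key=scores.get), scores)
```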
Comparison of Inoculation with the InoqulA and WASP Automated Systems with Manual Inoculation
Croxatto, Antony; Dijkstra, Klaas; Prod'hom, Guy
2015-01-01
The quality of sample inoculation is critical for achieving an optimal yield of discrete colonies in both monomicrobial and polymicrobial samples to perform identification and antibiotic susceptibility testing. Consequently, we compared the performance between the InoqulA (BD Kiestra), the WASP (Copan), and manual inoculation methods. Defined mono- and polymicrobial samples of 4 bacterial species and cloudy urine specimens were inoculated on chromogenic agar by the InoqulA, the WASP, and manual methods. Images taken with ImagA (BD Kiestra) were analyzed with the VisionLab version 3.43 image analysis software to assess the quality of growth and to prevent subjective interpretation of the data. A 3- to 10-fold higher yield of discrete colonies was observed following automated inoculation with both the InoqulA and WASP systems than that with manual inoculation. The difference in performance between automated and manual inoculation was mainly observed at concentrations of >10^6 bacteria/ml. Inoculation with the InoqulA system allowed us to obtain significantly more discrete colonies than the WASP system at concentrations of >10^7 bacteria/ml. However, the level of difference observed was bacterial species dependent. Discrete colonies of bacteria present in 100- to 1,000-fold lower concentrations than the most concentrated populations in defined polymicrobial samples were not reproducibly recovered, even with the automated systems. The analysis of cloudy urine specimens showed that InoqulA inoculation provided a statistically significantly higher number of discrete colonies than that with WASP and manual inoculation. Consequently, the automated InoqulA inoculation greatly decreased the requirement for bacterial subculture and thus resulted in a significant reduction in the time to results, laboratory workload, and laboratory costs. PMID:25972424
Wang, Ping; Howard, Bret H.
2017-12-23
Thermal pretreatment of biomass by torrefaction and low temperature pyrolysis has the potential for generating high quality and more suitable fuels. To utilize a model to describe the complex and dynamic changes taking place during these two treatments for process design, optimization and scale-up, detailed data is needed on the property evolution during treatment of well-defined individual biomass particles. The objectives of this study are to investigate the influence of thermal pretreatment temperatures on wood biomass biochemical compositions, physical properties and microstructure. Wild cherry wood was selected as a model biomass and prepared for this study. The well-defined wood particle samples were consecutively heated at 220, 260, 300, 350, 450 and 550 °C for 0.5 h under nitrogen. Untreated and treated samples were characterized for biochemical composition changes (cellulose, hemicellulose, and lignin) by thermogravimetric analyzer (TGA), physical properties (color, dimensions, weight, density and grindability), chemical properties (proximate analysis and heating value) and microstructural changes by scanning electron microscopy (SEM). Hemicellulose was mostly decomposed in the samples treated at 260 and 300 °C, which weakened the cell walls and improved grindability. The dimensions of the wood were reduced in all directions, and shrinkage increased with increased treatment temperature and weight loss. With increased treatment temperature, losses of weight and volume increased and bulk density decreased. The low-temperature pyrolyzed wood samples showed improved solid fuel properties, with fuel ratios close to those of lignite/bituminous coal. The morphology of the wood remained intact through the treatment range, but the cell walls were thinner. Lastly, these results will improve the understanding of the property changes of the biomass during pretreatment and will help to develop models for process simulation and potential application of the treated biomass.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
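The idea of ranking candidate measurements by their expected information gain can be sketched with ensemble statistics alone. The snippet below is a simplified stand-in for the paper's SD/DFS/RE metrics: it scores each candidate location by the expected reduction in total parameter variance from a single EnKF-style update, using a purely synthetic linear surrogate in place of the unsaturated-flow model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of uncertain model parameters (n_ens x n_par), e.g. log-conductivities,
# and ensemble predictions at each candidate measurement location (a surrogate for
# running the flow model for every ensemble member).
n_ens, n_par, n_loc = 200, 3, 10
theta = rng.normal(size=(n_ens, n_par))
H = rng.normal(size=(n_par, n_loc))
predictions = theta @ H + 0.1 * rng.normal(size=(n_ens, n_loc))
obs_err_var = 0.2**2

def expected_posterior_variance(theta, d, r):
    # Trace of the analysis-error covariance after assimilating one scalar
    # observation d: trace(C_tt) - c_td^T c_td / (c_dd + r), all estimated
    # from the ensemble (first- and second-order statistics only).
    A = theta - theta.mean(axis=0)
    D = d - d.mean()
    n = len(d) - 1
    c_tt = (A * A).sum(axis=0) / n          # prior marginal parameter variances
    c_td = (A.T @ D) / n                    # parameter/observation cross-covariance
    c_dd = (D @ D) / n                      # predicted-observation variance
    return c_tt.sum() - (c_td @ c_td) / (c_dd + r)

scores = [expected_posterior_variance(theta, predictions[:, j], obs_err_var)
          for j in range(n_loc)]
print("most informative candidate location:", int(np.argmin(scores)))
```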
CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.
2017-12-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% (80% at the equator with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies which are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underlying structure of the mosaic to be evaluated and the corresponding optimization problem configured so it is well-posed. Linear and non-linear least squares optimization is then employed to derive a set of observation- and wavelength-specific model parameters for a series of transform functions that minimize the total radiometric discrepancy across the mosaic. This empirical approach to CRISM data radiometric reconciliation and the utility of the resulting mapping data mosaic products for hydrated mineral mapping will be presented.
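A stripped-down sketch of the graph-based reconciliation step, assuming a much simpler transform than the paper's (a single additive offset per observation and band): overlap discrepancies define a linear system whose least-squares solution, with one observation pinned as the radiometric reference, minimizes the total discrepancy across the mosaic. The overlap values are invented for illustration.

```python
import numpy as np

# Overlap graph: (obs_i, obs_j, mean radiometric discrepancy value_i - value_j
# measured in the spatial overlap) for one wavelength band.
overlaps = [(0, 1, 0.04), (1, 2, -0.02), (2, 3, 0.05), (3, 0, -0.06), (1, 3, 0.03)]
n_obs = 4

# Solve for per-observation additive offsets o such that (o_i - o_j) cancels the
# measured discrepancy in every overlap; observation 0 is pinned as the reference.
rows, rhs = [], []
for i, j, d in overlaps:
    r = np.zeros(n_obs)
    r[i], r[j] = 1.0, -1.0
    rows.append(r)
    rhs.append(-d)
rows.append(np.eye(n_obs)[0])        # gauge constraint: offset of observation 0 is zero
rhs.append(0.0)

offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("per-observation offsets:", np.round(offsets, 4))
```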
Forcisi, Sara; Moritz, Franco; Kanawati, Basem; Tziotis, Dimitrios; Lehmann, Rainer; Schmitt-Kopplin, Philippe
2013-05-31
The present review gives an introduction into the concept of metabolomics and provides an overview of the analytical tools applied in non-targeted metabolomics with a focus on liquid chromatography (LC). LC is a powerful analytical tool in the study of complex sample matrices. A further development and configuration employing Ultra-High Pressure Liquid Chromatography (UHPLC) is optimized to provide the largest known liquid chromatographic resolution and peak capacity. UHPLC therefore plays an important role in the separation and consequent metabolite identification of complex molecular mixtures such as bio-fluids. The most sensitive detectors for these purposes are mass spectrometers. Almost any mass analyzer can be optimized to identify and quantify small pre-defined sets of targets; however, the number of analytes in metabolomics is far greater. Optimized protocols for quantification of large sets of targets may be rendered inapplicable. Results of small target set analyses on different sample matrices are easily comparable with each other. In non-targeted metabolomics there is almost no analytical method which is applicable to all different matrices due to limitations pertaining to mass analyzers and chromatographic tools. The specifications of the most important interfaces and mass analyzers are discussed. We additionally provide an exemplary application in order to demonstrate the level of complexity which remains intractable up to date. The potential of coupling a high field Fourier Transform Ion Cyclotron Resonance Mass Spectrometer (ICR-FT/MS), the mass analyzer with the largest known mass resolving power, to UHPLC is given with an example of one human pre-treated plasma sample. This experimental example illustrates one way of overcoming the necessity of faster scanning rates in the coupling with UHPLC. The experiment enabled the extraction of thousands of features (analytical signals). A small subset of this compositional space could be mapped into a mass difference network whose topology shows specificity toward putative metabolite classes and retention time. Copyright © 2013 Elsevier B.V. All rights reserved.
DOT National Transportation Integrated Search
1988-10-01
This annotated bibliography, Volume III of the study entitled, Optimizing Wartime Materiel Delivery: An overview of DOD Containerization Efforts, documents studies related to containerization. Several objectives of the study were defined. These inclu...
Generating moment matching scenarios using optimization techniques
Mehrotra, Sanjay; Papp, Dávid
2013-05-16
An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
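A toy illustration of the moment-matching idea, not the paper's semi-infinite LP or column-generation algorithm: scenario locations and probabilities for a one-dimensional random variable are fitted with a general-purpose optimizer so that a prescribed set of moments is reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Prescribed moments of the underlying distribution (here: a standard normal).
target = {1: 0.0, 2: 1.0, 3: 0.0, 4: 3.0}
n_scen = 5

def unpack(z):
    x = z[:n_scen]                               # scenario locations
    w = np.exp(z[n_scen:])                       # positive weights via exp transform
    return x, w / w.sum()                        # normalize to probabilities

def moment_error(z):
    x, p = unpack(z)
    return sum((np.sum(p * x**k) - m)**2 for k, m in target.items())

z0 = np.concatenate([np.linspace(-2, 2, n_scen), np.zeros(n_scen)])
res = minimize(moment_error, z0, method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000, "fatol": 1e-12})
x, p = unpack(res.x)
print("scenarios:", np.round(x, 3), "probabilities:", np.round(p, 3))
```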
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.
Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.
Xiao, Dan; Balcom, Bruce J
2012-07-01
Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
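The pixel-wise inversion step can be illustrated with a small regularized non-negative least-squares fit, a common numerical implementation of the inverse Laplace transform (not necessarily the solver used by the authors): a synthetic bi-exponential decay is inverted onto a log-spaced T2 grid.

```python
import numpy as np
from scipy.optimize import nnls

# Echo times and a synthetic bi-exponential decay (T2 = 10 ms and 100 ms).
te = np.linspace(1e-3, 0.4, 64)                       # s
signal = 0.6 * np.exp(-te / 0.010) + 0.4 * np.exp(-te / 0.100)
signal += 0.005 * np.random.default_rng(0).standard_normal(te.size)

# Kernel matrix K[i, j] = exp(-te_i / T2_j) on a log-spaced T2 grid.
t2_grid = np.logspace(-3, 0, 80)                      # 1 ms .. 1 s
K = np.exp(-te[:, None] / t2_grid[None, :])

# Tikhonov-regularized NNLS: augment the system with sqrt(alpha) * I to
# stabilize the ill-posed inverse Laplace transform.
alpha = 1e-2
K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(t2_grid.size)])
y_aug = np.concatenate([signal, np.zeros(t2_grid.size)])
f, _ = nnls(K_aug, y_aug)

peaks = t2_grid[f > 0.5 * f.max()]
print("recovered T2 components near (s):", np.round(peaks, 4))
```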
Echolocation in Blainville's beaked whales (Mesoplodon densirostris).
Madsen, P T; de Soto, N Aguilar; Arranz, P; Johnson, M
2013-06-01
Here we use sound and movement recording tags to study how deep-diving Blainville's beaked whales (Mesoplodon densirostris) use echolocation to forage in their natural mesopelagic habitat. These whales ensonify thousands of organisms per dive but select only about 25 prey for capture. They negotiate their cluttered environment by radiating sound in a narrow 20° field of view, which they sample with 1.5-3 clicks per metre travelled, requiring only some 60 clicks to locate, select and approach each prey. Sampling rates do not appear to be defined by the range to individual targets, but rather by the movement of the predator. Whales sample faster when they encounter patches of prey, allowing them to search new water volumes while turning rapidly to stay within a patch. This implies that the Griffin search-approach-capture model of biosonar foraging must be expanded to account for sampling behaviours adapted to the overall prey distribution. Beaked whales can classify prey at more than 15 m range, adopting stereotyped motor patterns when approaching some prey. This long detection range relative to swimming speed facilitates a deliberate mode of sensory-motor operation in which prey and capture tactics can be selected to optimize energy returns during long breath-hold dives.
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Dynamically achieved active site precision in enzyme catalysis.
Klinman, Judith P
2015-02-17
CONSPECTUS: The grand challenge in enzymology is to define and understand all of the parameters that contribute to enzymes' enormous rate accelerations. The property of hydrogen tunneling in enzyme reactions has moved the focus of research away from an exclusive focus on transition state stabilization toward the importance of the motions of the heavy atoms of the protein, a role for reduced barrier width in catalysis, and the sampling of a protein conformational landscape to achieve a family of protein substates that optimize enzyme-substrate interactions and beyond. This Account focuses on a thermophilic alcohol dehydrogenase for which the chemical step of hydride transfer is rate determining across a wide range of experimental conditions. The properties of the chemical coordinate have been probed using kinetic isotope effects, indicating a transition in behavior below 30 °C that distinguishes nonoptimal from optimal C-H activation. Further, the introduction of single site mutants has the impact of either enhancing or eliminating the temperature dependent transition in catalysis. Biophysical probes, which include time dependent hydrogen/deuterium exchange and fluorescent lifetimes and Stokes shifts, have also been pursued. These studies allow the correlation of spatially resolved transitions in protein motions with catalysis. It is now possible to define a long-range network of protein motions in ht-ADH that extends from a dimer interface to the substrate binding domain across to the cofactor binding domain, over a distance of ca. 30 Å. The ongoing challenge to obtaining spatial and temporal resolution of catalysis-linked protein motions is discussed.
Thromboxane Formation Assay to Identify High On-Treatment Platelet Reactivity to Aspirin.
Mohring, Annemarie; Piayda, Kerstin; Dannenberg, Lisa; Zako, Saif; Schneider, Theresa; Bartkowski, Kirsten; Levkau, Bodo; Zeus, Tobias; Kelm, Malte; Hohlfeld, Thomas; Polzin, Amin
2017-01-01
Platelet inhibition by aspirin is indispensable in the secondary prevention of cardiovascular events. Nevertheless, impaired aspirin antiplatelet effects (high on-treatment platelet reactivity [HTPR]) are frequent. This is associated with an enhanced risk of cardiovascular events. The current gold standard to evaluate platelet hyper-reactivity despite aspirin intake is light-transmittance aggregometry (LTA). However, pharmacologically, the most specific test is the measurement of arachidonic acid (AA)-induced thromboxane (TX) B2 formation. Currently, the optimal cut-off to define HTPR to aspirin by inhibition of TX formation is not known. Therefore, in this pilot study, we aimed to calculate a TX formation cut-off value that detects HTPR as defined by the current gold standard, LTA. We measured platelet function in 2,507 samples. AA-induced TX formation by ELISA and AA-induced LTA were used to measure aspirin antiplatelet effects. TX formation correlated nonlinearly with the maximum of aggregation in the AA-induced LTA (Spearman's rho R = 0.7396; 95% CI 0.7208-0.7573, p < 0.0001). Receiver operating characteristic analysis and Youden's J statistic revealed 209.8 ng/mL as the optimal cut-off value to detect HTPR to aspirin with the TX ELISA (area under the curve: 0.92, p < 0.0001, sensitivity of 82.7%, specificity of 90.3%). In summary, the TX formation ELISA is reliable in detecting HTPR to aspirin. The calculated cut-off level needs to be tested in trials with clinical end points. © 2017 S. Karger AG, Basel.
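A brief sketch of the cut-off calculation on synthetic data, assuming only that HTPR samples tend to show higher residual thromboxane formation; scikit-learn's ROC utilities locate the threshold maximizing Youden's J. The distributions and resulting numbers are illustrative, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic thromboxane formation values (ng/mL): HTPR samples (label 1) tend
# to show higher residual TX formation than adequately inhibited samples (label 0).
tx_inhibited = rng.lognormal(mean=4.5, sigma=0.6, size=800)
tx_htpr = rng.lognormal(mean=5.8, sigma=0.5, size=200)
y = np.concatenate([np.zeros(800), np.ones(200)])
tx = np.concatenate([tx_inhibited, tx_htpr])

fpr, tpr, thresholds = roc_curve(y, tx)
youden_j = tpr - fpr                       # J = sensitivity + specificity - 1
best = np.argmax(youden_j)
print(f"AUC = {roc_auc_score(y, tx):.3f}")
print(f"optimal cut-off = {thresholds[best]:.1f} ng/mL "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```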
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
When the wavefront coding technique is applied to an infrared imaging system as a means of athermalization, the design of the phase plate is key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process, namely defining the evaluation function of the wavefront coding system based on the consistency of the modulation transfer function (MTF) and improving the speed of optimization by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software, in order to find the optimal parameters of the phase mask and to accelerate convergence through a genetic algorithm (GA); the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software is then used to realize high-speed data exchange. The optimization of the rotationally symmetric phase mask and the cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times by inserting the rotationally symmetric phase mask, while for the system with the cubic phase mask it can be increased 10 times; the MTF consistency measure decreases markedly, and the operating temperature range of the optimized systems is between -40° and 60°. Results show that, owing to the externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and faster to optimize optical systems with special properties, which is of particular significance for the optimization of unconventional optical systems.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters usually is not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
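A simplified sketch of budget-constrained allocation for an effects-only cluster randomized trial (ignoring the cost dimension and correlations handled in the paper): for each cluster size the budget fixes the number of clusters per arm, and the usual design-effect variance identifies the best allocation. Costs, ICC and budget are illustrative.

```python
def optimal_allocation(budget, c_cluster, c_person, icc, sigma2=1.0, n_max=200):
    # For each cluster size n, spend the whole budget on as many clusters per arm
    # as it allows, and evaluate the variance of the treatment-effect estimator:
    #   Var = 2 * sigma2 * (1 + (n - 1) * icc) / (k * n)
    best = None
    for n in range(1, n_max + 1):
        k = int(budget // (2 * (c_cluster + n * c_person)))   # clusters per arm
        if k < 2:
            continue
        var = 2 * sigma2 * (1 + (n - 1) * icc) / (k * n)
        if best is None or var < best[2]:
            best = (n, k, var)
    return best

n, k, var = optimal_allocation(budget=50000, c_cluster=500, c_person=25, icc=0.05)
print(f"optimal design: {k} clusters per arm of size {n}, variance {var:.4f}")
```

For this effects-only case the well-known closed form n* = sqrt(c_cluster (1 - ICC) / (c_person ICC)) gives a quick check on the enumerated optimum.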
NASA Astrophysics Data System (ADS)
Dambreville, Frédéric
2013-10-01
While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, there are far fewer works dealing with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and the sensor management problem as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: sensor handlers at the first level plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission and on the basis of an approximated model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated, with regard to the evaluation costs. There is a fundamental link here with the domain of experimental design. Jones, Schonlau and Welch proposed a general method, the Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of the EGO is proposed, based on a rare event simulation approach. It is applied to the aforementioned bi-level sensor planning.
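For reference, a compact sketch of the standard EGO loop of Jones, Schonlau and Welch on a one-dimensional toy function, using a Gaussian-process surrogate and the expected-improvement criterion; the bi-level, rare-event generalization proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                   # expensive black-box to maximize
    return np.sin(3 * x) - 0.5 * (x - 0.7)**2

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, 4).reshape(-1, 1)     # initial design
y = f(X).ravel()
grid = np.linspace(0, 2, 500).reshape(-1, 1)

for _ in range(10):                         # EGO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next)[0])

print("best point found:", X[np.argmax(y)].item(), "value:", y.max())
```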
Adé, Apolline; Chauchat, Laure; Frève, Johann-François Ouellette; Gagné, Sébastien; Caron, Nicolas; Bussières, Jean-François
2017-01-01
Background Several studies have compared cleaning procedures for decontaminating surfaces exposed to antineoplastic drugs. All of the cleaning products tested were successful in reducing most of the antineoplastic drug quantities spilled on surfaces, but none of them completely removed residual traces. Objective To assess the efficacy of various cleaning solutions for decontaminating a biological safety cabinet workbench exposed to a defined amount of cyclophosphamide. Methods In this pilot study, specific areas of 2 biological safety cabinets (class II, type B2) were deliberately contaminated with a defined quantity of cyclophosphamide (10 μg or 10^7 pg). Three cleaning solutions were tested: quaternary ammonium, sodium hypochlorite 0.02%, and sodium hypochlorite 2%. After cleaning, the cyclophosphamide remaining on the areas was quantified by wipe sampling. Each cleaning solution was tested 3 times, with cleaning and wipe sampling being performed 5 times for each test. Results A total of 57 wipe samples were collected and analyzed. The average recovery efficiency was 121.690% (standard deviation 5.058%). The decontamination efficacy increased with the number of successive cleaning sessions: from 98.710% after session 1 to 99.997% after session 5 for quaternary ammonium; from 97.027% to 99.997% for sodium hypochlorite 0.02%; and from 98.008% to 100% for sodium hypochlorite 2%. Five additional cleaning sessions performed after the main study (with detergent and sodium hypochlorite 2%) were effective to complete the decontamination, leaving no detectable traces of the drug. Conclusions All of the cleaning solutions reduced contamination of biological safety cabinet workbenches exposed to a defined amount of cyclophosphamide. Quaternary ammonium and sodium hypochlorite (0.02% and 2%) had mean efficacy greater than 97% for removal of the initial quantity of the drug (10^7 pg) after the first cleaning session. When sodium hypochlorite 2% was used, fewer cleaning sessions were required to complete decontamination. Further studies should be conducted to identify optimal cleaning strategies to fully eliminate traces of hazardous drugs. PMID:29298999
Ochi, Kento; Kamiura, Moto
2015-09-01
A multi-armed bandit problem is a search problem in which a learning agent must select the optimal arm among multiple slot machines generating random rewards. The UCB algorithm is one of the most popular methods for solving multi-armed bandit problems. It achieves logarithmic regret performance by coordinating the balance between exploration and exploitation. Since the introduction of UCB algorithms, researchers have empirically known that optimistic value functions exhibit good performance in multi-armed bandit problems. The terms optimistic or optimism might suggest that the value function is sufficiently larger than the sample mean of rewards. The original definition of the UCB algorithm is focused on the optimization of regret, and it is not directly based on the optimism of a value function. We therefore need to consider why optimism yields good performance in multi-armed bandit problems. In the present article, we propose a new method, called the Overtaking method, to solve multi-armed bandit problems. The value function of the proposed method is defined as an upper bound of a confidence interval with respect to an estimator of the expected value of reward: the value function asymptotically approaches the expected value of reward from the upper bound. If the value function is larger than the expected value under the asymptote, then the learning agent is almost sure to obtain the optimal arm. This structure is called the sand-sifter mechanism, which has no regrowth of the value function of suboptimal arms. It means that the learning agent can play only the current best arm in each time step. Consequently, the proposed method achieves a high accuracy rate and low regret, and some of its value functions can outperform UCB algorithms. This study suggests the advantage of the optimism of agents in uncertain environments using one of the simplest frameworks. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
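For context, a minimal UCB1 implementation on Bernoulli arms, i.e. the optimistic baseline the Overtaking method is compared against; the Overtaking value function itself is not reproduced here and the arm means are arbitrary.

```python
import numpy as np

def ucb1(means, horizon=5000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(means)
    counts, sums = np.zeros(k), np.zeros(k)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play each arm once
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))                     # optimistic value function
        reward = rng.random() < means[arm]                # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += max(means) - means[arm]
    return regret

print("cumulative regret:", ucb1([0.2, 0.5, 0.55, 0.8]))
```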
Valuing hydrological alteration in multi-objective water resources management
NASA Astrophysics Data System (ADS)
Bizzi, Simone; Pianosi, Francesca; Soncini-Sessa, Rodolfo
2012-11-01
The management of water through the impoundment of rivers by dams and reservoirs is necessary to support key human activities such as hydropower production, agriculture and flood risk mitigation. Advances in multi-objective optimization techniques and ever-growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between multiple interests. On the one hand, such optimization methods can enhance the performance of commonly targeted objectives (such as hydropower production or water supply); on the other hand, they risk strongly penalizing all the interests not directly (i.e. mathematically) included in the optimization algorithm. The alteration of the downstream hydrological regime is a well established cause of ecological degradation, and its evaluation and rehabilitation are commonly required by recent legislation (such as the Water Framework Directive in Europe). However, it is rarely embedded in reservoir optimization routines and, even when explicitly considered, the criteria adopted for its evaluation are doubted and not commonly trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alterations are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and finally how to aggregate the indicators in a single evaluation index (valuing) that can serve as an objective function in the optimization problem. This paper aims to address these issues by: (i) discussing the benefits and constraints of different approaches to referencing, measuring and valuing hydrological alteration; (ii) testing two alternative indices of hydrological alteration, one based on the established framework of Indicators of Hydrological Alteration (Richter et al., 1996), and one satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; (iii) demonstrating and discussing these indices by application to the River Ticino, in Italy; (iv) providing a framework to effectively include hydrological alteration within reservoir operation optimization.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Su, Gloria H; Emmert-Buck, Michael R
2005-02-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This article reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies, and give detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing, and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high quality, appropriately anatomically tagged scientific results. A lack of optimized protocols is a source of inefficiency in current life science research. Improvement in this area will significantly increase life science quality and productivity. The article is divided into introduction, materials, protocols, and notes sections. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. To get the greatest benefit from this article, readers are advised to read through the entire article first, identify protocols appropriate to their laboratory for each step in their workflow, and then reread entries in each section pertaining to each of these single protocols.
van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W
2014-12-22
Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF) and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC-curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism than classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
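The events-per-variable effect can be reproduced qualitatively on synthetic data: the sketch below fits logistic regression and a random forest on growing development sets and reports apparent AUC, validated AUC and their difference (optimism). Data and settings are illustrative, not the clinical cohorts.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=12000, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)
X_dev, y_dev, X_val, y_val = X[:8000], y[:8000], X[8000:], y[8000:]

for n in (200, 1000, 4000, 8000):                 # growing development sets
    epv = y_dev[:n].sum() / X.shape[1]            # events per variable
    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
        model.fit(X_dev[:n], y_dev[:n])
        apparent = roc_auc_score(y_dev[:n], model.predict_proba(X_dev[:n])[:, 1])
        validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"n={n:5d}  EPV={epv:5.0f}  {name}: apparent AUC={apparent:.3f}  "
              f"validated AUC={validated:.3f}  optimism={apparent - validated:.3f}")
```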
CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY
DOE Office of Scientific and Technical Information (OSTI.GOV)
BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.
2012-09-19
The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that result in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.
Meringer, Markus; Cleaves, H James
2017-12-13
The reverse tricarboxylic acid (rTCA) cycle has been explored from various standpoints as an idealized primordial metabolic cycle. Its simplicity and apparent ubiquity in diverse organisms across the tree of life have been used to argue for its antiquity and its optimality. In 2000 it was proposed that chemoinformatics approaches support some of these views. Specifically, defined queries of the Beilstein database showed that the molecules of the rTCA cycle are heavily represented in such compound databases. We explore here the chemical structure "space" of the rTCA cycle's intermediates, i.e., the set of organic compounds that possess some minimal set of defining characteristics, using an exhaustive structure generation method. The rTCA's chemical space as defined by the original criteria and explored by our method is some six to seven times larger than originally considered. Acknowledging that each assumption in the defining criteria that make the rTCA cycle special limits the possible generative outcomes, there are many unrealized compounds which fulfill these criteria. That these compounds are unrealized could be due to evolutionary frozen accidents or optimization, though this optimization may also be for systems-level reasons, e.g., the way the pathway and its elements interface with other aspects of metabolism.
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
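The asymmetric residual weighting described above can be illustrated with a short, hypothetical objective-function helper (not MODOPTIM code): positive and negative residuals receive different weights, which is how a one-sided target such as a maximum chloride concentration can be folded into a least-squares objective.

```python
import numpy as np

def weighted_sse(simulated, observed, w_pos=1.0, w_neg=1.0):
    # Weighted sum of squared residuals; w_pos applies where simulated > observed,
    # w_neg where simulated < observed. Setting w_neg = 0 turns an observation
    # into a one-sided (inequality) target such as a maximum concentration.
    residuals = np.asarray(simulated) - np.asarray(observed)
    weights = np.where(residuals > 0, w_pos, w_neg)
    return float(np.sum(weights * residuals**2))

# Example: penalize simulated chloride only when it exceeds a 250 mg/L limit.
simulated_cl = [180.0, 240.0, 265.0, 300.0]
limit = [250.0] * 4
print("penalty:", weighted_sse(simulated_cl, limit, w_pos=1.0, w_neg=0.0))
```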
Towards global optimization with adaptive simulated annealing
NASA Astrophysics Data System (ADS)
Forbes, Gregory W.; Jones, Andrew E.
1991-01-01
The structure of the simulated annealing algorithm is presented and its rationale is discussed. A unifying heuristic is then introduced which serves as a guide in the design of all of the sub-components of the algorithm. Simply put, this heuristic principle states that at every cycle in the algorithm the occupation density should be kept as close as possible to the equilibrium distribution. This heuristic has been used as a guide to develop novel step generation and temperature control methods intended to improve the efficiency of the simulated annealing algorithm. The resulting algorithm has been used in attempts to locate good solutions for one of the lens design problems associated with this conference, viz. the "monochromatic quartet", and a sample of the results is presented. 1. Global optimization in the context of lens design. Whatever the context, optimization algorithms relate to problems that take the following form: Given some configuration space with coordinates r = (x1, ..., xn) and a merit function written as f(r), find the point r where f(r) takes its lowest value. That is, find the global minimum. In many cases there is also a set of auxiliary constraints that must be met, so the problem statement becomes: Find the global minimum of the merit function within the region defined by equality constraints E_j(r) = 0, j = 1, 2, ..., p, and a set of inequality constraints indexed j = 1, 2, ..., q.
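A generic simulated-annealing loop on a one-dimensional multimodal test function is sketched below for orientation; it uses plain Gaussian steps and geometric cooling rather than the adaptive step-generation and temperature-control schemes developed in the paper, and the merit function is not a lens-design merit function.

```python
import math
import random

def merit(x):                                   # multimodal 1-D test function
    return x**2 + 10 * math.sin(3 * x)

def simulated_annealing(x0, t0=5.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, merit(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.5)         # step generation
        f_new = merit(x_new)
        # Metropolis acceptance: always accept downhill moves, accept uphill
        # moves with probability exp(-delta / T) so the occupation density can
        # track the equilibrium (Boltzmann) distribution at temperature T.
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                            # temperature control (geometric)
    return best_x, best_f

print(simulated_annealing(x0=4.0))
```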
Time-Extended Payoffs for Collectives of Autonomous Agents
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian K.
2002-01-01
A collective is a set of self-interested agents which try to maximize their own utilities, along with a well-defined, time-extended world utility function which rates the performance of the entire system. In this paper, we use the theory of collectives to design time-extended payoff utilities for agents that are both aligned with the world utility and "learnable", i.e., the agents can readily see how their behavior affects their utility. We show that in systems where each agent aims to optimize such payoff functions, coordination arises as a byproduct of the agents selfishly pursuing their own goals. A game theoretic analysis shows that such payoff functions have the net effect of aligning the Nash equilibrium, the Pareto optimal solution and the world utility optimum, thus eliminating undesirable behavior such as agents working at cross-purposes. We then apply collective-based payoff functions to token collection in a gridworld problem where agents need to optimize the aggregate value of tokens collected across an episode of finite duration (i.e., an abstracted version of rovers on Mars collecting scientifically interesting rock samples, subject to power limitations). We show that, regardless of the initial token distribution, reinforcement learning agents using collective-based payoff functions significantly outperform both natural extensions of single agent algorithms and global reinforcement learning solutions based on "team games".
Gao, Dashan; Vasconcelos, Nuno
2009-01-01
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
An improved reaction path optimization method using a chain of conformations
NASA Astrophysics Data System (ADS)
Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro
2018-05-01
The efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully utilized to define the minimum energy path (MEP) for the isomerization of the glycine molecule in water.
A Language for Specifying Compiler Optimizations for Generic Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcock, Jeremiah J.
2007-01-01
Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization — the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.
NASA Astrophysics Data System (ADS)
Stock, Karl; Diebolder, Rolf; Hausladen, Florian; Hibst, Raimund
2014-03-01
It is well known that flashlamp pumped Er:YAG lasers allow efficient bone ablation due to strong absorption at 3 μm by water. Preliminary experiments revealed also a newly developed diode pumped Er:YAG laser system (Pantec Engineering AG) to be an efficient tool for bone surgery. The aim of the present in vitro study is the investigation of a new power-increased version of the laser system with higher pulse energy and optimization of the treatment set-up to achieve high cutting quality, efficiency, and ablation depth. Optical simulations were performed to achieve various focus diameters and a homogeneous beam profile. An appropriate experimental set-up with two different focusing units, a computer controlled linear stage with sample holder, and a shutter unit was realized. By this we are able to move the sample (slices of pig bone) with a defined velocity during the irradiation. Cutting was performed under appropriate water spray by moving the sample back and forth. After each path the ablation depth was measured and the focal plane was tracked to the actual bottom of the groove. Finally, the cuts were analyzed by light microscopy regarding the ablation quality and geometry, and thermal effects. In summary, the results show that with carefully adapted irradiation parameters narrow and deep cuts (ablation depth > 6 mm, aspect ratio approx. 20) are possible without carbonization. In conclusion, these in vitro investigations demonstrate that highly efficient bone cutting is possible with the diode pumped Er:YAG laser system using an appropriate treatment set-up and parameters.
MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.
Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar
2013-10-15
High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.
Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.
Rogers, Emily; Murrugarra, David; Heitsch, Christine
2017-07-25
Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q^2/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)^2/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
Multivariate optimization of an analytical method for the analysis of dog and cat foods by ICP OES.
da Costa, Silvânio Silvério Lopes; Pereira, Ana Cristina Lima; Passos, Elisangela Andrade; Alves, José do Patrocínio Hora; Garcia, Carlos Alexandre Borges; Araujo, Rennan Geovanny Oliveira
2013-04-15
Experimental design methodology was used to optimize an analytical method for determination of the mineral element composition (Al, Ca, Cd, Cr, Cu, Ba, Fe, K, Mg, Mn, P, S, Sr and Zn) of dog and cat foods. Two-level full factorial design was applied to define the optimal proportions of the reagents used for microwave-assisted sample digestion (2.0 mol L(-1) HNO3 and 6% m/v H2O2). A three-level factorial design for two variables was used to optimize the operational conditions of the inductively coupled plasma optical emission spectrometer, employed for analysis of the extracts. A radiofrequency power of 1.2 kW and a nebulizer argon flow of 1.0 L min(-1) were selected. The limits of quantification (LOQ) were between 0.03 μg g(-1) (Cr, 267.716 nm) and 87 μg g(-1) (Ca, 373.690 nm). The trueness of the optimized method was evaluated by analysis of five certified reference materials (CRMs): wheat flour (NIST 1567a), bovine liver (NIST 1577), peach leaves (NIST 1547), oyster tissue (NIST 1566b), and fish protein (DORM-3). The recovery values obtained for the CRMs were between 80 ± 4% (Cr) and 117 ± 5% (Cd), with relative standard deviations (RSDs) better than 5%, demonstrating that the proposed method offered good trueness and precision. Ten samples of pet food (five each of cat and dog food) were acquired at supermarkets in Aracaju city (Sergipe State, Brazil). Concentrations in the dog food ranged between 7.1 mg kg(-1) (Ba) and 2.7 g kg(-1) (Ca), while for cat food the values were between 3.7 mg kg(-1) (Ba) and 3.0 g kg(-1) (Ca). The concentrations of Ca, K, Mg, P, Cu, Fe, Mn, and Zn in the food were compared with the guidelines of the United States' Association of American Feed Control Officials (AAFCO) and the Brazilian Ministry of Agriculture, Livestock, and Food Supply (Ministério da Agricultura, Pecuária e Abastecimento-MAPA). Copyright © 2013 Elsevier B.V. All rights reserved.
Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I
2017-09-08
In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
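As a rough illustration of the restricted-allocation idea, the sketch below holds the placebo share of each randomization block fixed and splits the remaining share across active doses in proportion to the posterior probability that each dose has the lowest rate of poor outcome, using independent Beta-Bernoulli posteriors estimated by Monte Carlo. The 50% placebo share, the flat Beta(1,1) priors, and the example counts are assumptions for illustration; they are not the trial's actual algorithm or data.

import numpy as np

rng = np.random.default_rng(1)

def rar_allocation(poor, not_poor, placebo_share=0.5, n_draws=4000):
    """Restricted RAR sketch: the placebo share is fixed to protect power for the
    final optimal-dose-vs-placebo test; the active share is split across doses
    in proportion to Pr(dose has the lowest poor-outcome rate | data)."""
    poor = np.asarray(poor, dtype=float)          # poor outcomes per dose
    not_poor = np.asarray(not_poor, dtype=float)  # good outcomes per dose
    draws = rng.beta(1.0 + poor, 1.0 + not_poor, size=(n_draws, len(poor)))
    p_best = np.bincount(draws.argmin(axis=1), minlength=len(poor)) / n_draws
    return placebo_share, (1.0 - placebo_share) * p_best

# Three hypothetical doses after an initial fixed-allocation period.
placebo, active = rar_allocation(poor=[12, 8, 10], not_poor=[8, 12, 10])
print(placebo, active.round(3))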
A concept analysis of optimality in perinatal health.
Kennedy, Holly Powell
2006-01-01
This analysis was conducted to describe the concept of optimality and its appropriateness for perinatal health care. The concept was identified in 24 scientific disciplines. Across all disciplines, the universal definition of optimality is the robust, efficient, and cost-effective achievement of best possible outcomes within a rule-governed framework. Optimality, specifically defined for perinatal health care, is the maximal perinatal outcome with minimal intervention placed against the context of the woman's social, medical, and obstetric history.
On the theory of singular optimal controls in dynamic systems with control delay
NASA Astrophysics Data System (ADS)
Mardanov, M. J.; Melikov, T. K.
2017-05-01
An optimal control problem with a control delay is considered, and a broader class of singular (in the classical sense) controls is investigated. Various sequences of necessary conditions for the optimality of singular controls in recurrent form are obtained. These optimality conditions include analogues of the Kelley, Kopp-Moyer, R. Gabasov, and equality-type conditions. In the proof of the main results, the variation of the control is defined using Legendre polynomials.
Simulation and optimization of faceted structure for illumination
NASA Astrophysics Data System (ADS)
Liu, Lihong; Engel, Thierry; Flury, Manuel
2016-04-01
The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A new photometric approach is adopted since the size of each facet is large in comparison with the wavelength. A reflective configuration is employed to avoid the dispersion problems of materials. The irradiance distribution of the reflected beam is determined by the angular position of each facet. In order to obtain a specific irradiance distribution, the angular position of each facet is optimized using Zemax OpticStudio 15 software. A detector is placed in the direction which is perpendicular to the reflected beam. According to the incoherent irradiance distribution on the detector, a merit function is defined to drive the optimization process. The two-dimensional angular position of each facet is defined as a variable which is optimized within a specified range. Because the merit function needs to be updated during the process, a macro program is used to update this function within Zemax. In order to reduce the complexity of the manual operation, an automatic optimization approach is established: Zemax performs the optimization task and sends the irradiance data back to Matlab for further analysis. Several simulation results are given for the verification of the optimization method. The simulation results are compared to those obtained with the LightTools software in order to verify our optimization method.
Muhammed, Musemma K; Kot, Witold; Neve, Horst; Mahony, Jennifer; Castro-Mejía, Josué L; Krych, Lukasz; Hansen, Lars H; Nielsen, Dennis S; Sørensen, Søren J; Heller, Knut J; van Sinderen, Douwe; Vogensen, Finn K
2017-10-01
Despite being potentially highly useful for characterizing the biodiversity of phages, metagenomic studies are currently not available for dairy bacteriophages, partly due to the lack of a standard procedure for phage extraction. We optimized an extraction method that allows the removal of the bulk protein from whey and milk samples with losses of less than 50% of spiked phages. The protocol was applied to extract phages from whey in order to test the notion that members of Lactococcus lactis 936 (now Sk1virus), P335, c2 (now C2virus) and Leuconostoc phage groups are the most frequently encountered in the dairy environment. The relative abundance and diversity of phages in eight and four whey mixtures from dairies using undefined mesophilic mixed-strain cultures containing Lactococcus lactis subsp. lactis biovar diacetylactis and Leuconostoc species (i.e., DL starter cultures) and defined cultures, respectively, were assessed. Results obtained from transmission electron microscopy and high-throughput sequence analyses revealed the dominance of Lc. lactis 936 phages (order Caudovirales, family Siphoviridae) in dairies using undefined DL starter cultures and Lc. lactis c2 phages (order Caudovirales, family Siphoviridae) in dairies using defined cultures. The 936 and Leuconostoc phages demonstrated limited diversity. Possible coinduction of temperate P335 prophages and satellite phages in one of the whey mixtures was also observed. IMPORTANCE The method optimized in this study could provide an important basis for understanding the dynamics of the phage community (abundance, development, diversity, evolution, etc.) in dairies with different sizes, locations, and production strategies. It may also enable the discovery of previously unknown phages, which is crucial for the development of rapid molecular biology-based methods for phage burden surveillance systems. The dominance of only a few phage groups in the dairy environment signifies the depth of knowledge gained over the past decades, which served as the basis for designing current phage control strategies. The presence of a correlation between phages and the type of starter cultures being used in dairies might help to improve the selection and/or design of suitable, custom, and cost-efficient phage control strategies. Copyright © 2017 American Society for Microbiology.
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
Bidmon, Nicole; Kind, Sonja; Welters, Marij J P; Joseph-Pietras, Deborah; Laske, Karoline; Maurer, Dominik; Hadrup, Sine Reker; Schreibelt, Gerty; Rae, Richard; Sahin, Ugur; Gouttefangeas, Cécile; Britten, Cedrik M; van der Burg, Sjoerd H
2018-07-01
Cell-based assays to monitor antigen-specific T-cell responses are characterized by their high complexity and should be conducted under controlled conditions to lower multiple possible sources of assay variation. However, the lack of standard reagents makes it difficult to directly compare results generated in one lab over time and across institutions. Therefore, TCR-engineered reference samples (TERS) that contain a defined number of antigen-specific T cells and continuously deliver stable results are urgently needed. We successfully established a simple and robust TERS technology that constitutes a useful tool to overcome this issue for commonly used T-cell immuno-assays. To enable users to generate large-scale TERS on-site using the most commonly used electroporation (EP) devices, an RNA-based kit approach providing stable TCR mRNA and an optimized manufacturing protocol were established. In preparation for the release of this immuno-control kit, we established optimal EP conditions on six devices and initiated an extended RNA stability study. Furthermore, we coordinated on-site production of TERS with 4 participants. Finally, a proficiency panel was organized to test the unsupervised production of TERS at different laboratories using the kit approach. The results obtained show the feasibility and robustness of the kit approach for versatile in-house production of cellular control samples. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
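The full functional-form search (genetic programming plus parallel tempering) is too large for a short example, but the inner validation idea used here, recovering Lennard-Jones parameters from energies of reference configurations by Metropolis Monte Carlo importance sampling, can be sketched compactly. The configuration sizes, proposal widths, and effective temperature below are arbitrary choices for illustration, not the paper's settings.

import math
import random

random.seed(0)

def lj_energy(config, eps, sigma):
    """Total Lennard-Jones pair energy of a small 3-D configuration."""
    e = 0.0
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            r2 = sum((a - b) ** 2 for a, b in zip(config[i], config[j]))
            sr6 = (sigma * sigma / r2) ** 3
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

# Reference data: random 4-atom configurations scored with the "true" parameters.
configs = [[(3.0 * random.random(), 3.0 * random.random(), 3.0 * random.random())
            for _ in range(4)] for _ in range(20)]
ref_energies = [lj_energy(c, eps=1.0, sigma=1.0) for c in configs]

def cost(eps, sigma):
    """Sum of squared errors between candidate-parameter and reference energies."""
    return sum((lj_energy(c, eps, sigma) - e) ** 2 for c, e in zip(configs, ref_energies))

# Metropolis random walk over (eps, sigma) at a fixed effective temperature.
eps, sigma = 0.5, 1.4
f = cost(eps, sigma)
for _ in range(5000):
    eps_c = abs(eps + random.gauss(0.0, 0.02))
    sigma_c = abs(sigma + random.gauss(0.0, 0.02))
    fc = cost(eps_c, sigma_c)
    if fc < f or random.random() < math.exp((f - fc) / 0.1):
        eps, sigma, f = eps_c, sigma_c, fc
print(eps, sigma, f)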
Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts, etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics, structure and syntax metrics, totally 19 dimensions. Then these metrics are input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead. PMID:29095934
Non-Contact Conductivity Measurement for Automated Sample Processing Systems
NASA Technical Reports Server (NTRS)
Beegle, Luther W.; Kirby, James P.
2012-01-01
A new method has been developed for monitoring and control of automated sample processing and preparation, especially focusing on desalting of samples before analytical analysis (described in more detail in Automated Desalting Apparatus, (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid phase sample preparation media, allows monitoring of the process, and acts as a trigger for the start of the next step in the sequence (see figure). At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of sample processing protocols, and greatly minimizes use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement will define three key conditions for the sample preparation process. First, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement for monitoring sample preparation will not only facilitate automation of the sample preparation and processing, but will also act as a way to optimize the operational time and use of consumables.
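A toy version of the triggering logic is sketched below: the outlet conductivity is compared against low/high thresholds and the protocol advances only when the expected condition (neutral flush, acid breakthrough, basic eluent) is seen. The step names, threshold values, and four-state sequence are hypothetical illustrations of the idea, not the actual flight protocol.

LOW, HIGH = 5.0, 200.0  # hypothetical outlet-conductivity thresholds (microsiemens/cm)

def next_step(step, outlet_us):
    """Advance the desalting sequence only when the outlet probe confirms the
    condition for the current step: low conductivity after the de-ionized water
    flush, then high conductivity for the acid wash and the basic elution."""
    if step == "flush" and outlet_us < LOW:
        return "acidify"        # baseline restored: start the acid wash
    if step == "acidify" and outlet_us > HIGH:
        return "flush_2"        # acid breakthrough detected: flush again
    if step == "flush_2" and outlet_us < LOW:
        return "elute_basic"    # baseline restored: start the basic elution
    if step == "elute_basic" and outlet_us > HIGH:
        return "done"           # high-pH eluent detected at the outlet
    return step                 # otherwise keep waiting on the current step

step = "flush"
for reading in [150.0, 3.0, 40.0, 250.0, 4.0, 260.0]:
    step = next_step(step, reading)
    print(reading, "->", step)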
High Resolution Manometry Correlates of Ineffective Esophageal Motility
Xiao, Yinglian; Kahrilas, Peter J.; Kwasny, Mary J.; Roman, Sabine; Lin, Zhiyue; Nicodème, Frédéric; Lu, Chang; Pandolfino, John E.
2013-01-01
Background There are currently no criteria for ineffective esophageal motility (IEM) and ineffective swallow (IES) in High Resolution Manometry (HRM) and Esophageal Pressure Topography (EPT). Our aims were to utilize HRM metrics to define IEM within the Chicago Classification and to determine the distal contractile integral (DCI) threshold for IES. Methods The EPT of 150 patients with either dysphagia or reflux symptoms were reviewed for breaks >2 cm in the proximal, middle and distal esophagus in the 20 mmHg isobaric contour (IBC). Peristaltic function in EPT was defined by the Chicago Classification; the corresponding conventional line tracings (CLT) were reviewed separately for IEM and IES. Generalized linear mixed models were used to find thresholds for DCI corresponding to traditionally determined IES and failed swallows. An external validation sample was used to confirm these thresholds. Results In terms of swallow subtypes, IES in CLT were a mixture of normal, weak and failed peristalsis in EPT. A DCI of 450 mmHg-s-cm was determined to be optimal in predicting IES. In the validation sample, the threshold of 450 mmHg-s-cm showed strong agreement with CLT determination of IES (positive percent agreement 83%, negative percent agreement 90%). Thirty-three of 42 IEM patients in CLT had large peristaltic breaks, small peristaltic breaks or 'frequent failed peristalsis' in EPT; 87.2% (34/39) of patients classified as normal in CLT had proximal IBC breaks in EPT. The patient-level diagnostic agreement between CLT and EPT was good (78.6% positive percent agreement, 63.9% negative percent agreement), with negative agreement increasing to 92.0% if proximal breaks were excluded. Conclusions The manometric correlate of IEM in EPT is a mixture of failed swallows and IBC breaks in the middle/distal troughs. A DCI value <450 mmHg-s-cm can be utilized to predict IES previously defined in CLT. IEM can be defined by >5 swallows with weak/failed peristalsis or with a DCI <450 mmHg-s-cm. PMID:22929758
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and C code. The process of rapidly prototyping the system using DSP-based C code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each part of the algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, moved into tasks that the FPGA can perform.
Implications of Measurement Assay Type in Design of HIV Experiments.
Cannon, LaMont; Jagarapu, Aditya; Vargas-Garcia, Cesar A; Piovoso, Michael J; Zurakowski, Ryan
2017-12-01
Time series measurements of circular viral episome (2-LTR) concentrations enable indirect quantification of persistent low-level Human Immunodeficiency Virus (HIV) replication in patients on Integrase-Inhibitor intensified Combined Antiretroviral Therapy (cART). In order to determine the magnitude of these low-level infection events, blood has to be drawn from patients at a frequency and volume that is strictly regulated by the Institutional Review Board (IRB). Once the blood is drawn, the 2-LTR concentration is determined by quantifying the amount of HIV DNA present in the sample via a PCR (Polymerase Chain Reaction) assay. Real-time quantitative Polymerase Chain Reaction (qPCR) is a widely used method of performing PCR; however, a newer droplet digital Polymerase Chain Reaction (ddPCR) method has been shown to provide more accurate quantification of DNA. Using a validated model of HIV viral replication, this paper demonstrates the importance of considering DNA quantification assay type when optimizing experiment design conditions. Experiments are optimized using a Genetic Algorithm (GA) to locate a family of suboptimal sample schedules which yield the highest fitness. Fitness is defined as the expected information gained in the experiment, measured by the Kullback-Leibler Divergence (KLD) between the prior and posterior distributions of the model parameters. We compare the information content of the optimized schedules to uniform schedules as well as two clinical schedules implemented by researchers at UCSF and the University of Melbourne. This work shows that there is a significantly greater gain in information in experiments using a ddPCR assay vs. a qPCR assay and that certain experiment design considerations should be taken into account when using either assay.
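When the prior and posterior over the model parameters are approximated as multivariate Gaussians, the KLD fitness used to score a candidate schedule has a closed form; the sketch below computes it and shows that a tighter posterior (e.g., from a lower-noise assay) yields a larger information gain. The two-parameter example, the covariance values, and the Gaussian approximation itself are assumptions for illustration, not the paper's actual model or GA.

import numpy as np

def gaussian_kld(mu_post, cov_post, mu_prior, cov_prior):
    """Closed-form KL divergence D(posterior || prior) for multivariate normals,
    used as the fitness (expected information gain) of a sample schedule."""
    k = len(mu_prior)
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_prior - mu_post
    return 0.5 * (np.trace(inv_prior @ cov_post)
                  + diff @ inv_prior @ diff
                  - k
                  + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))

mu_prior, cov_prior = np.zeros(2), np.eye(2)
mu_post = np.array([0.3, -0.1])
print(gaussian_kld(mu_post, 0.30 * np.eye(2), mu_prior, cov_prior))  # noisier assay
print(gaussian_kld(mu_post, 0.05 * np.eye(2), mu_prior, cov_prior))  # tighter posterior, larger KLD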
Variable-Field Analytical Ultracentrifugation: I. Time-Optimized Sedimentation Equilibrium
Ma, Jia; Metrick, Michael; Ghirlando, Rodolfo; Zhao, Huaying; Schuck, Peter
2015-01-01
Sedimentation equilibrium (SE) analytical ultracentrifugation (AUC) is a gold standard for the rigorous determination of macromolecular buoyant molar masses and the thermodynamic study of reversible interactions in solution. A significant experimental drawback is the long time required to attain SE, which is usually on the order of days. We have developed a method for time-optimized SE (toSE) with defined time-varying centrifugal fields that allow SE to be attained in a significantly (up to 10-fold) shorter time than is usually required. To achieve this, numerical Lamm equation solutions for sedimentation in time-varying fields are computed based on initial estimates of macromolecular transport properties. A parameterized rotor-speed schedule is optimized with the goal of achieving a minimal time to equilibrium while limiting transient sample preconcentration at the base of the solution column. The resulting rotor-speed schedule may include multiple over- and underspeeding phases, balancing the formation of gradients from strong sedimentation fluxes with periods of high diffusional transport. The computation is carried out in a new software program called TOSE, which also facilitates convenient experimental implementation. Further, we extend AUC data analysis to sedimentation processes in such time-varying centrifugal fields. Due to the initially high centrifugal fields in toSE and the resulting strong migration, it is possible to extract sedimentation coefficient distributions from the early data. This can provide better estimates of the size of macromolecular complexes and report on sample homogeneity early on, which may be used to further refine the prediction of the rotor-speed schedule. In this manner, the toSE experiment can be adapted in real time to the system under study, maximizing both the information content and the time efficiency of SE experiments. PMID:26287634
Dong, Min; McGann, Patrick T; Mizuno, Tomoyuki; Ware, Russell E; Vinks, Alexander A
2016-04-01
Hydroxyurea has emerged as the primary disease-modifying therapy for patients with sickle cell anaemia (SCA). The laboratory and clinical benefits of hydroxyurea are optimal at maximum tolerated dose (MTD), but the current empirical dose escalation process often takes up to 12 months. The purpose of this study was to develop a pharmacokinetic-guided dosing strategy to reduce the time required to reach hydroxyurea MTD in children with SCA. Pharmacokinetic (PK) data from the HUSTLE trial (NCT00305175) were used to develop a population PK model using non-linear mixed effects modelling (nonmem 7.2). A D-optimal sampling strategy was developed to estimate individual PK and hydroxyurea exposure (area under the concentration-time curve (AUC)). The initial AUC target was derived from HUSTLE clinical data and defined as the mean AUC at MTD. PK profiles were best described by a one compartment with Michaelis-Menten elimination and a transit absorption model. Body weight and cystatin C were identified as significant predictors of hydroxyurea clearance. The following clinically feasible sampling times are included in a new prospective protocol: pre-dose (baseline), 15-20 min, 50-60 min and 3 h after an initial 20 mg kg(-1) oral dose. The mean target AUC(0,∞) for initial dose titration was 115 mg l(-1) h. We developed a PK model-based individualized dosing strategy for the prospective Therapeutic Response Evaluation and Adherence Trial (TREAT, ClinicalTrials.gov NCT02286154). This approach has the potential to optimize the dose titration of hydroxyurea therapy for children with SCA, such that the clinical benefits at MTD are achieved more quickly. © 2015 The British Pharmacological Society.
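A stripped-down version of such a model is easy to simulate; the sketch below uses first-order absorption (a simplification of the transit-compartment absorption described above) feeding a one-compartment model with Michaelis-Menten elimination, then reads concentrations off at the sparse sampling times listed in the abstract and integrates the curve for an AUC estimate. All parameter values and the 30 kg body weight are hypothetical placeholders, not the HUSTLE or TREAT estimates.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Hypothetical parameters for illustration only (not the published PK estimates).
ka = 2.0            # first-order absorption rate constant (1/h)
v = 20.0            # apparent volume of distribution (L)
vmax = 2.0          # maximum elimination rate (mg/L/h)
km = 15.0           # Michaelis constant (mg/L)
dose = 20.0 * 30.0  # 20 mg/kg oral dose for a 30 kg child (mg)

def rhs(t, y):
    """Gut amount and central concentration: first-order absorption into a
    one-compartment model with Michaelis-Menten (saturable) elimination."""
    a_gut, c = y
    return [-ka * a_gut,
            ka * a_gut / v - vmax * c / (km + c)]

t = np.linspace(0.0, 24.0, 2001)
sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
conc = sol.y[1]
auc = trapezoid(conc, t)  # mg L^-1 h, the quantity compared against the AUC target
print(f"Cmax ~ {conc.max():.1f} mg/L, AUC(0-24 h) ~ {auc:.0f} mg/L*h")

# Sparse, clinically feasible sampling times similar to those in the abstract.
for ts in (0.25, 1.0, 3.0):
    print(f"t = {ts:g} h: C ~ {np.interp(ts, t, conc):.1f} mg/L")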
Diehl, Hanna C; Beine, Birte; Elm, Julian; Trede, Dennis; Ahrens, Maike; Eisenacher, Martin; Marcus, Katrin; Meyer, Helmut E; Henkel, Corinna
2015-03-01
Mass spectrometry imaging (MSI) has become a powerful and successful tool in the context of biomarker detection, especially in recent years. This emerging technique is based on the combination of histological information of a tissue and its corresponding spatially resolved mass spectrometric information. The identification of differentially expressed protein peaks between samples is still the method's bottleneck. Therefore, peptide MSI compared to protein MSI is closer to the final goal of identification, since peptides are easier to measure than proteins. Nevertheless, the processing of peptide imaging samples is challenging due to experimental complexity. To address this issue, a method development study for peptide MSI using cryoconserved and formalin-fixed paraffin-embedded (FFPE) rat brain tissue is provided. Different digestion times, matrices, and proteases were tested to define an optimal workflow for peptide MSI. All practical experiments were done in triplicate and analyzed by the SCiLS Lab software, using structures derived from myelin basic protein (MBP) peaks, principal component analysis (PCA) and probabilistic latent semantic analysis (pLSA) to rate the quality of the experiments. Blinded evaluation of the ability to define countable structures in the datasets was performed by three individuals. Such an extensive method development for peptide matrix-assisted laser desorption/ionization (MALDI) imaging experiments has not been performed so far, and the resulting problems and consequences were analyzed and discussed.
Orthorexia nervosa in a sample of Italian university population.
Dell'Osso, Liliana; Abelli, Marianna; Carpita, Barbara; Massimetti, Gabriele; Pini, Stefano; Rivetti, Luigi; Gorrasi, Federica; Tognetti, Rosalba; Ricca, Valdo; Carmassi, Claudia
2016-01-01
To investigate the frequency and characteristics of orthorexic behaviours in a large university population. A total of 2826 individuals volunteered to complete an on-line anonymous form of the ORTO-15 questionnaire, a self-administered questionnaire designed and validated to evaluate orthorexic symptomatology. As in previous studies, an ORTO-15 total score lower than 35 was used as an optimal threshold to detect a tendency to orthorexia nervosa. A specifically designed form was also used to collect socio-demographic variables. Overall, 2130 students and 696 university employees belonging to the University of Pisa (Italy) were assessed. Orthorexic features had a frequency of 32.7%. Females showed a significantly higher rate of over-threshold scores on the ORTO-15, a lower BMI, and a higher rate of underweight condition and of vegan/vegetarian nutrition style than males. Orthorexia nervosa, defined as a “fixation on healthy food”, is not formally present in DSM-5. The emergence of this condition as a new, possible prodrome of a psychological syndrome has been recently emphasized by an increasing number of scientific articles. In our sample of the university population, being vegetarian or vegan, underweight, female, a student, and being interested in the present study were significantly predictive of orthorexic tendency. Our data contribute to defining the new conceptualization of orthorexia nervosa. Further studies are warranted in order to explore the diagnostic boundaries of this syndrome, its course and outcome, and possible clinical implications.
Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.
Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W
2016-01-01
To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
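The qualitative result (missing high is cheaper than missing low) is easy to reproduce with a toy susceptible-infected-recovered model with a vaccinated class; the sketch below scans coverage levels, adds vaccine and disease costs, and compares the penalty for undershooting versus overshooting the cost-minimizing coverage. The basic reproduction number, cost values, and time horizon are illustrative assumptions, not the paper's calibrated inputs.

import numpy as np
from scipy.integrate import solve_ivp

def total_cost(coverage, r0=2.5, gamma=0.1, vaccine_cost=20.0, disease_cost=2000.0,
               population=1e6, horizon=730.0):
    """Total cost (vaccination + disease) when a fraction `coverage` of the
    population is vaccinated (moved to an immune class) before the outbreak."""
    beta = r0 * gamma
    i0 = 1.0
    s0 = (1.0 - coverage) * population - i0

    def rhs(t, y):
        s, i, r = y
        return [-beta * s * i / population,
                beta * s * i / population - gamma * i,
                gamma * i]

    sol = solve_ivp(rhs, (0.0, horizon), [max(s0, 0.0), i0, 0.0], rtol=1e-8)
    total_infected = sol.y[1, -1] + sol.y[2, -1]
    return coverage * population * vaccine_cost + total_infected * disease_cost

coverages = np.linspace(0.0, 0.9, 91)
costs = np.array([total_cost(c) for c in coverages])
best = coverages[costs.argmin()]
print(f"cost-minimizing coverage ~ {best:.2f}")
# Missing low costs more than missing high by the same margin (here 5 points).
print(total_cost(best - 0.05) - costs.min(), total_cost(best + 0.05) - costs.min())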
Determining optimal gestational weight gain in the Korean population: a retrospective cohort study.
Choi, Sae Kyung; Lee, Guisera; Kim, Yeon Hee; Park, In Yang; Ko, Hyun Sun; Shin, Jong Chul
2017-08-22
The World Health Organization (WHO) international body mass index (BMI) cut-off points defining pre-pregnancy BMI categories in the Institute of Medicine (IOM) guidelines are not directly applicable to Asians. We aimed to define the optimal gestational weight gain (GWG) for the Korean population based on Asia-specific BMI categories. Data from 2702 live singleton deliveries in three tertiary centers between 2010 and 2011 were analyzed retrospectively. A multivariable logistic regression analysis was conducted to determine the lowest aggregated risk of composite perinatal outcomes based on Asia-specific BMI categories. The perinatal outcomes included gestational hypertensive disorder, emergency cesarean section, and fetal size for gestational age. In each BMI category, the GWG value corresponding to the lowest aggregated risk was defined as the optimal GWG. Among the study population, 440 (16.3%) were underweight (BMI < 18.5), 1459 (54.0%) were normal weight (18.5 ≤ BMI < 23), 392 (14.5%) were overweight (23 ≤ BMI < 25) and 411 (15.2%) were obese (BMI ≥ 25). The optimal GWG by Asia-specific BMI category was 20.8 kg (range, 16.7 to 24.7) for underweight, 16.6 kg (11.5 to 21.5) for normal weight, 13.1 kg (8.0 to 17.7) for overweight, and 14.4 kg (7.5 to 21.9) for obese. The optimal GWG ranges found in our study for avoiding adverse perinatal outcomes are considerably higher and wider than those recommended by the IOM. Revised IOM recommendations for GWG could be considered for Korean women according to Asian BMI categories. Further prospective studies are needed in order to determine the optimal GWG for the Korean population.
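The "lowest aggregated risk" step can be illustrated with a quadratic logistic model of a composite adverse outcome as a function of GWG, minimized over a grid; the sketch below does this on simulated data with a U-shaped risk curve. The simulated data, the quadratic logit form, and the 1-percentage-point band used to report a range are all assumptions for illustration, not the study's fitted model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data: composite adverse outcome is most likely at the GWG extremes.
gwg = rng.uniform(0.0, 30.0, 3000)
true_risk = 0.08 + 0.0012 * (gwg - 16.0) ** 2
outcome = rng.random(3000) < true_risk

# Quadratic logistic model: logit(p) = b0 + b1*GWG + b2*GWG^2.
X = np.column_stack([gwg, gwg ** 2])
model = LogisticRegression(C=1e6).fit(X, outcome)  # large C ~ essentially unpenalized

grid = np.linspace(0.0, 30.0, 301)
risk = model.predict_proba(np.column_stack([grid, grid ** 2]))[:, 1]
optimal = grid[risk.argmin()]
band = grid[risk <= risk.min() + 0.01]  # GWG values within 1 point of the minimum risk
print(f"optimal GWG ~ {optimal:.1f} kg (range {band.min():.1f}-{band.max():.1f} kg)")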
Alshaikh, Nahla; Brunklaus, Andreas; Davis, Tracey; Robb, Stephanie A; Quinlivan, Ros; Munot, Pinki; Sarkozy, Anna; Muntoni, Francesco; Manzur, Adnan Y
2016-10-01
Assessment of the efficacy of vitamin D replenishment and maintenance doses required to attain optimal levels in boys with Duchenne muscular dystrophy (DMD). 25(OH)-vitamin D levels and concurrent vitamin D dosage were collected from retrospective case-note review of boys with DMD at the Dubowitz Neuromuscular Centre. Vitamin D levels were stratified as deficient at <25 nmol/L, insufficient at 25-49 nmol/L, adequate at 50-75 nmol/L and optimal at >75 nmol/L. 617 vitamin D samples were available from 197 boys (range 2-18 years)-69% from individuals on corticosteroids. Vitamin D-naïve boys (154 samples) showed deficiency in 28%, insufficiency in 42%, adequate levels in 24% and optimal levels in 6%. The vitamin D-supplemented group (463 samples) was tested while on different maintenance/replenishment doses. Three-month replenishment of daily 3000 IU (23 samples) or 6000 IU (37 samples) achieved optimal levels in 52% and 84%, respectively. 182 samples taken on 400 IU revealed deficiency in 19 (10%), insufficiency in 84 (47%), adequate levels in 67 (37%) and optimal levels in 11 (6%). 97 samples taken on 800 IU showed deficiency in 2 (2%), insufficiency in 17 (17%), adequate levels in 56 (58%) and optimal levels in 22 (23%). 81 samples were on 1000 IU and 14 samples on 1500 IU, with optimal levels in 35 (43%) and 9 (64%), respectively. No toxic level was seen (highest level 230 nmol/L). The prevalence of vitamin D deficiency and insufficiency in DMD is high. A 2-month replenishment regimen of 6000 IU and maintenance regimen of 1000-1500 IU/day was associated with optimal vitamin D levels. These data have important implications for optimising vitamin D dosing in DMD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Bode, F.; Reuschen, S.; Nowak, W.
2015-12-01
Drinking-water well catchments include many potential sources of contamination, such as gas stations or agriculture. Finding optimal positions of early-warning monitoring wells is challenging because there are various parameters (and their uncertainties) that influence the reliability and optimality of any suggested monitoring location or monitoring network. The overall goal of this project is to develop and establish a concept to assess, design and optimize early-warning systems within well catchments. Such optimal monitoring networks need to optimize three competing objectives: a high detection probability, which can be reached by maximizing the "field of vision" of the monitoring network; a long early-warning time, such that there is enough time left to install countermeasures after first detection; and the overall operating costs of the monitoring network, which should ideally be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media, coupled with geostatistics and Monte-Carlo analysis or, for real data, scenario analyses, wrapped up within the framework of formal multi-objective optimization using a genetic algorithm. In order to speed up the optimization process and to better explore the Pareto front, we developed a concept that forces the algorithm to search only in regions of the search space where promising solutions can be expected. We are going to show how to define these regions beforehand, using knowledge of the optimization problem, but also how to define them independently of problem attributes. With that, our method can be used with and/or without detailed knowledge of the objective functions. In summary, our study helps to improve optimization results in less optimization time by meaningful restrictions of the search space. These restrictions can be made independently of the optimization problem, but also in a problem-specific manner.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
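The sampling loop can be illustrated in a few lines: fit an RBF metamodel to the current samples, then add (i) the metamodel's extremum (here its minimum) and (ii) the point farthest from all existing samples, the latter standing in for the "minimum of the density function" exploration criterion. The one-dimensional test function, the grid search, and the max-min-distance surrogate for the density criterion are assumptions for illustration, not the authors' exact formulation.

import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(x):                                   # stand-in for a costly simulation
    return np.sin(3.0 * x) + 0.5 * x ** 2

rng = np.random.default_rng(3)
X = rng.uniform(-2.0, 2.0, (5, 1))                  # small initial design
y = expensive(X[:, 0])
grid = np.linspace(-2.0, 2.0, 401)[:, None]

for _ in range(6):
    rbf = RBFInterpolator(X, y)                     # current RBF metamodel
    x_exploit = grid[np.argmin(rbf(grid))]          # extremum of the metamodel
    gaps = np.abs(grid[:, 0][:, None] - X[:, 0][None, :]).min(axis=1)
    x_explore = grid[np.argmax(gaps)]               # farthest from existing samples
    for x_new in (x_exploit, x_explore):
        if np.min(np.abs(X[:, 0] - x_new[0])) > 1e-9:   # avoid duplicate sample sites
            X = np.vstack([X, x_new[None, :]])
            y = np.append(y, expensive(x_new[0]))

rbf = RBFInterpolator(X, y)
print("metamodel minimum ~", float(grid[np.argmin(rbf(grid))][0]))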
Fibigr, Jakub; Šatínský, Dalibor; Solich, Petr
2016-02-20
A new high-performance liquid chromatography method using a fused-core column for fast separation of resveratrol and polydatin has been developed and used for quality control of nutraceuticals with resveratrol and polydatin content. Retention characteristics (log k) were studied under different conditions on C-18, RP-Amide C-18, Phenyl-hexyl, Pentafluorophenyl (F5) and Cyano stationary phases for both compounds. The effect of the volume fraction of acetonitrile on the retention factors (log k) of resveratrol and polydatin was evaluated. The optimal separation conditions for resveratrol, polydatin and the internal standard p-nitrophenol were found on the fused-core column Ascentis Express ES-Cyano (100×3.0mm), particle size 2.7μm, with a mobile phase of acetonitrile/water solution with 0.5% acetic acid pH 3 (20:80, v/v) at a flow rate of 1.0mL/min and at 60°C. The detection wavelength was set at 305nm. Under the optimal chromatographic conditions, good linearity with regression coefficients in the range (r=0.9992-0.9998; n=10) for both compounds was achieved. Commercial samples of nutraceuticals were extracted with methanol using an ultrasound bath for 15min. A 5μL sample volume of the filtered solution was directly injected into the HPLC system. Accuracy of the method, defined as the mean recovery, was in the range 83.2-107.3% for both nutraceuticals. The intraday method precision was found satisfactory and relative standard deviations of sample analysis were in the range 0.8-4.7%. The developed method offers high sample throughput during sample preparation, a modern separation approach, and a short analysis time (3min). The results of the study showed that the declared content of resveratrol and polydatin varied widely among different nutraceuticals according to the producers (71.50-115.00% of declared content). Copyright © 2015 Elsevier B.V. All rights reserved.
Optimal glass-ceramic structures: Components of giant mirror telescopes
NASA Technical Reports Server (NTRS)
Eschenauer, Hans A.
1990-01-01
Detailed investigations are carried out on optimal glass-ceramic mirror structures of terrestrial space technology (optical telescopes). In order to find an optimum design, a nonlinear multi-criteria optimization problem is formulated. 'Minimum deformation' at 'minimum weight' are selected as contradictory objectives, and a set of further constraints (quilting effect, optical faults etc.) is defined and included. A special result of the investigations is described.
Theoretical Foundations of Wireless Networks
2015-07-22
"Optimal transmission over a fading channel with imperfect channel state information," in Global Telecommun. Conf., pp. 1–5, Houston TX, December 5-9... The goal of this project is to develop a formal theory of wireless networks providing a scientific basis to understand... randomness and optimality. Randomness, in the form of fading, is a defining characteristic of wireless networks. Optimality is a suitable design
PharmDock: a pharmacophore-based docking program
2014-01-01
Background Protein-based pharmacophore models are enriched with the information of potential interactions between ligands and the protein target. We have shown in a previous study that protein-based pharmacophore models can be applied for ligand pose prediction and pose ranking. In this publication, we present a new pharmacophore-based docking program PharmDock that combines pose sampling and ranking based on optimized protein-based pharmacophore models with local optimization using an empirical scoring function. Results Tests of PharmDock on ligand pose prediction, binding affinity estimation, compound ranking and virtual screening yielded comparable or better performance to existing and widely used docking programs. The docking program comes with an easy-to-use GUI within PyMOL. Two features have been incorporated in the program suite that allow for user-defined guidance of the docking process based on previous experimental data. Docking with those features demonstrated superior performance compared to unbiased docking. Conclusion A protein pharmacophore-based docking program, PharmDock, has been made available with a PyMOL plugin. PharmDock and the PyMOL plugin are freely available from http://people.pharmacy.purdue.edu/~mlill/software/pharmdock. PMID:24739488
How hot? Systematic convergence of the replica exchange method using multiple reservoirs.
Ruscio, Jory Z; Fawzi, Nicolas L; Head-Gordon, Teresa
2010-02-01
We have devised a systematic approach to converge a replica exchange molecular dynamics simulation by dividing the full temperature range into a series of higher temperature reservoirs and a finite number of lower temperature subreplicas. A defined highest temperature reservoir of equilibrium conformations is used to help converge a lower but still hot temperature subreplica, which in turn serves as the high-temperature reservoir for the next set of lower temperature subreplicas. The process is continued until an optimal temperature reservoir is reached to converge the simulation at the target temperature. This gradual convergence of subreplicas allows for better and faster convergence at the temperature of interest and all intermediate temperatures for thermodynamic analysis, as well as optimizing the use of multiple processors. We illustrate the overall effectiveness of our multiple reservoir replica exchange strategy by comparing sampling and computational efficiency with respect to replica exchange, as well as comparing methods when converging the structural ensemble of the disordered Abeta(21-30) peptide simulated with explicit water by comparing calculated Rotating Overhauser Effect Spectroscopy intensities to experimentally measured values. Copyright 2009 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Rolfe, P.
2006-03-01
Specialized sensing and measurement instruments are under development to aid the controlled culture of cells in bioreactors for the fabrication of biological tissues. Precisely defined physical and chemical conditions are needed for the correct culture of the many cell-tissue types now being studied, including chondrocytes (cartilage), vascular endothelial cells and smooth muscle cells (blood vessels), fibroblasts, hepatocytes (liver) and receptor neurones. Cell and tissue culture processes are dynamic and therefore, optimal control requires monitoring of the key process variables. Chemical and physical sensing is approached in this paper with the aim of enabling automatic optimal control, based on classical cell growth models, to be achieved. Non-invasive sensing is performed via the bioreactor wall, invasive sensing with probes placed inside the cell culture chamber and indirect monitoring using analysis within a shunt or a sampling chamber. Electroanalytical and photonics-based systems are described. Chemical sensing for gases, ions, metabolites, certain hormones and proteins, is under development. Spectroscopic analysis of the culture medium is used for measurement of glucose and for proteins that are markers of cell biosynthetic behaviour. Optical interrogation of cells and tissues is also investigated for structural analysis based on scatter.
Lefmann, Kim; Klenø, Kaspar H; Birk, Jonas Okkels; Hansen, Britt R; Holm, Sonja L; Knudsen, Erik; Lieutenant, Klaus; von Moos, Lars; Sales, Morten; Willendrup, Peter K; Andersen, Ken H
2013-05-01
Here we describe the results of simulations of 15 generic neutron instruments for the long-pulsed European Spallation Source. All instruments have been simulated for 20 different settings of the source time structure, corresponding to pulse lengths between 1 ms and 2 ms and repetition frequencies between 10 Hz and 25 Hz. The relative change in performance with time structure is given for each instrument, and an unweighted average is calculated. The performance of the instrument suite is proportional to (a) the peak flux and (b) the duty cycle to a power of approximately 0.3. This information is an important input to determining the best accelerator parameters. In addition, we find that in our simple guide systems, most neutrons reaching the sample originate from the central 3-5 cm of the moderator. This result can be used as an input in later optimization of the moderator design. We discuss the relevance and validity of defining a single figure-of-merit for a full facility and compare with evaluations of the individual instrument classes.
Wages, N A; Slingluff, C L; Petroni, G R
2017-04-01
In recent years, investigators have asserted that the 3 + 3 design lacks flexibility, making its use in modern early-phase trial settings, such as combinations and/or biological agents, inefficient. More innovative approaches are required to address contemporary research questions, such as those posed in trials involving immunotherapies. We describe the implementation of an adaptive design for identifying an optimal treatment regimen, defined by low toxicity and high immune response, in an early-phase trial of a melanoma helper peptide vaccine plus novel adjuvant combinations. Operating characteristics demonstrate the ability of the method to effectively recommend optimal regimens in a high percentage of trials with reasonable sample sizes. The proposed design is a practical, early-phase, adaptive method for use with combined immunotherapy regimens. This design can be applied more broadly to early-phase combination studies, as it was used in an ongoing study of two small molecule inhibitors in relapsed/refractory mantle cell lymphoma. © The Author 2016. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to investigate theoretically the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.
Active control of the spatial MRI phase distribution with optimal control theory
NASA Astrophysics Data System (ADS)
Lefebvre, Pauline M.; Van Reeth, Eric; Ratiney, Hélène; Beuf, Olivier; Brusseau, Elisabeth; Lambert, Simon A.; Glaser, Steffen J.; Sugny, Dominique; Grenier, Denis; Tse Ve Koon, Kevin
2017-08-01
This paper investigates the use of Optimal Control (OC) theory to design Radio-Frequency (RF) pulses that actively control the spatial distribution of the MRI magnetization phase. The RF pulses are generated through the application of the Pontryagin Maximum Principle and optimized so that the resulting transverse magnetization reproduces various non-trivial and spatial phase patterns. Two different phase patterns are defined and the resulting optimal pulses are tested both numerically with the ODIN MRI simulator and experimentally with an agar gel phantom on a 4.7 T small-animal MR scanner. Phase images obtained in simulations and experiments are both consistent with the defined phase patterns. A practical application of phase control with OC-designed pulses is also presented, with the generation of RF pulses adapted for a Magnetic Resonance Elastography experiment. This study demonstrates the possibility to use OC-designed RF pulses to encode information in the magnetization phase and could have applications in MRI sequences using phase images.
NASA Astrophysics Data System (ADS)
Ren, Danping; Wu, Shanshan; Zhang, Lijing
2016-09-01
In view of the global control and flexible monitoring capabilities of software-defined networking (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.
Gaussian content as a laser beam quality parameter.
Ruschin, Shlomo; Yaakobi, Elad; Shekel, Eyal
2011-08-01
We propose the Gaussian content (GC) as an optional quality parameter for the characterization of laser beams. It is defined as the overlap integral of a given field with an optimally defined Gaussian. The definition is especially suited for applications where coherence properties are targeted. Mathematical definitions and basic calculation procedures are given along with results for basic beam profiles. The coherent combination of an array of laser beams and the optimal coupling between a diode laser and a single-mode fiber are elaborated as application examples. The measurement of the GC and its conservation upon propagation are experimentally confirmed.
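One plausible way to write the overlap-integral definition is sketched below; the exact normalization and the set of Gaussian parameters being optimized (waist, centre, phase curvature) may differ from the authors' definition.

% Gaussian content (GC) as a normalized overlap integral between the measured
% field E(x,y) and a Gaussian G(x,y) whose parameters are chosen to maximize
% the overlap (the normalization shown here is an assumption):
\[
  \mathrm{GC} \;=\; \max_{G}\;
  \frac{\left|\displaystyle\int E(x,y)\, G^{*}(x,y)\,\mathrm{d}x\,\mathrm{d}y\right|^{2}}
       {\displaystyle\int |E|^{2}\,\mathrm{d}x\,\mathrm{d}y\;
        \int |G|^{2}\,\mathrm{d}x\,\mathrm{d}y}
\]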
Evaluating information content of SNPs for sample-tagging in re-sequencing projects.
Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F
2015-05-15
Sample-tagging is designed for the identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In simulated populations of 100 thousand individuals, the average Hamming distance generated by the optimized set of 30 SNPs is larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.
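A minimal Python sketch of the kind of check implied above, pairwise Hamming distances over a small SNP panel, is given below; the panel size, allele frequencies, and independence assumption are illustrative and not taken from the paper.

# Minimal sketch (hypothetical allele frequencies): check how well a small SNP
# panel separates individuals by the pairwise Hamming distance of genotype calls.
import numpy as np

rng = np.random.default_rng(1)
n_individuals, n_snps = 1000, 30
maf = rng.uniform(0.3, 0.5, n_snps)          # informative SNPs have high minor-allele frequency

# genotypes coded as 0/1/2 copies of the minor allele, SNPs assumed independent
genotypes = rng.binomial(2, maf, size=(n_individuals, n_snps))

# pairwise Hamming distance = number of SNPs at which two individuals differ
diff = genotypes[:, None, :] != genotypes[None, :, :]
hamming = diff.sum(axis=2)
iu = np.triu_indices(n_individuals, k=1)

print("mean pairwise Hamming distance:", hamming[iu].mean())
print("identical pairs (distance 0):", int((hamming[iu] == 0).sum()))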
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
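The sketch below illustrates the basic compressive-sensing setting with a random (not data-driven) Boolean sampling matrix and orthogonal matching pursuit for reconstruction; the signal length, measurement count, and sparsity level are arbitrary choices.

# Minimal sketch: compressive sampling of a sparse signal with a random Boolean
# matrix, followed by orthogonal matching pursuit (OMP) reconstruction.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
n, m, k = 256, 64, 8                                        # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)     # k-sparse signal

Phi = rng.integers(0, 2, size=(m, n)).astype(float)         # Boolean sampling matrix
y = Phi @ x                                                 # low-dimensional projection

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))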
A systematic conservation planning approach to fire risk management in Natura 2000 sites.
Foresta, Massimiliano; Carranza, Maria Laura; Garfì, Vittorio; Di Febbraro, Mirko; Marchetti, Marco; Loy, Anna
2016-10-01
A primary challenge in conservation biology is to preserve the most representative biodiversity while simultaneously optimizing the efforts associated with conservation. In Europe, the implementation of the Natura 2000 network requires protocols to recognize and map threats to biodiversity and to identify specific mitigation actions. We propose a systematic conservation planning approach to optimize management actions against specific threats based on two fundamental parameters: biodiversity values and threat pressure. We used the conservation planning software Marxan to optimize a fire management plan in a Natura 2000 coastal network in southern Italy. We address four primary questions: i) Which areas are at high fire risk? ii) Which areas are the most valuable for threatened biodiversity? iii) Which areas should receive priority risk-mitigation actions for the optimal effect? iv) Which fire-prevention actions are feasible in the management areas? The biodiversity values for the Natura 2000 spatial units were derived from the distribution maps of 18 habitats and 89 vertebrate species of concern in Europe (Habitat Directive 92/43/EEC). The threat pressure map, defined as fire probability, was obtained from digital layers of fire risk and of fire frequency. Marxan settings were defined as follows: a) planning units of 40 × 40 m, b) conservation features defined as all habitats and vertebrate species of European concern occurring in the study area, c) conservation targets defined according to the fire sensitivity and extinction risk of the conservation features, and d) costs determined as the complement of fire probabilities. We identified 23 management areas in which to concentrate efforts for the optimal reduction of fire-induced effects. Because traditional fire prevention is not feasible for most of the policy habitats included in the management areas, alternative prevention practices were identified that allow the conservation of the vegetation structure. The proposed approach has potential applications for multiple landscapes, threats and spatial scales and could be extended to other valuable natural areas, including protected areas. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng
2016-01-01
With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
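For illustration, the following Python sketch runs a generic simulated annealing loop (not the paper's improved SA algorithm) to select a subset of sample points, using the mean distance from dropped points to their nearest retained point as a simple proxy for prediction accuracy; the point cloud, subset size, and cooling schedule are made up.

# Minimal sketch: generic simulated annealing for selecting a space-filling
# subset of soil sample points.
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(0, 10_000, size=(300, 2))     # hypothetical dense sample grid (metres)
n_keep = 120

def cost(keep_mask):
    kept, dropped = points[keep_mask], points[~keep_mask]
    d = np.linalg.norm(dropped[:, None, :] - kept[None, :, :], axis=2)
    return d.min(axis=1).mean()                    # lower = dropped points well covered by kept ones

mask = np.zeros(len(points), dtype=bool)
mask[rng.choice(len(points), n_keep, replace=False)] = True
current = cost(mask)

T = 1.0
for step in range(3000):
    i = rng.choice(np.flatnonzero(mask))           # swap one kept point ...
    j = rng.choice(np.flatnonzero(~mask))          # ... for one dropped point
    mask[i], mask[j] = False, True
    new = cost(mask)
    if new < current or rng.random() < np.exp((current - new) / T):
        current = new                              # accept the move
    else:
        mask[i], mask[j] = True, False             # revert the swap
    T *= 0.998                                     # cooling schedule

print("final mean coverage distance (m):", round(current, 1))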
Camin, Federica; Pavone, Anita; Bontempo, Luana; Wehrens, Ron; Paolini, Mauro; Faberi, Angelo; Marianella, Rosa Maria; Capitani, Donatella; Vista, Silvia; Mannina, Luisa
2016-04-01
Isotope Ratio Mass Spectrometry (IRMS), (1)H Nuclear Magnetic Resonance ((1)H NMR), conventional chemical analysis and chemometric elaboration were used to assess quality and to define and confirm the geographical origin of 177 Italian PDO (Protected Denomination of Origin) olive oils and 86 samples imported from Tunisia. Italian olive oils were richer in squalene and unsaturated fatty acids, whereas Tunisian olive oils showed higher δ(18)O, δ(2)H, linoleic acid, saturated fatty acid, β-sitosterol, and sn-1 and sn-3 diglyceride values. Furthermore, all the imported Tunisian samples were of poor quality, with K232 and/or acidity values above the limits established for extra virgin olive oils. By combining isotopic composition with (1)H NMR data using a multivariate statistical approach, a statistical model able to discriminate olive oils from Italy and those imported from Tunisia was obtained, with an optimal differentiation ability of around 98%. Copyright © 2015 Elsevier Ltd. All rights reserved.
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
Femtoelectron-Based Terahertz Imaging of Hydration State in a Proton Exchange Membrane Fuel Cell
NASA Astrophysics Data System (ADS)
Buaphad, P.; Thamboon, P.; Kangrang, N.; Rhodes, M. W.; Thongbai, C.
2015-08-01
Imbalanced water management in a proton exchange membrane (PEM) fuel cell significantly reduces the cell performance and durability. Visualization of water distribution and transport can provide greater comprehension toward optimization of the PEM fuel cell. In this work, we are interested in water flooding issues that occur in the flow channels on the cathode side of the PEM fuel cell. The sample cell was fabricated with the addition of a transparent acrylic window allowing optical access, and the process of flooding formation was observed in situ via a CCD camera. We then explore the potential use of terahertz (THz) imaging, consisting of a femtoelectron-based THz source and off-angle reflective-mode imaging, to identify the presence of water in the sample cell. We present simulations of two hydration states (water and non-water areas), which are in agreement with the THz image results. A line-scan plot is utilized for quantitative analysis and for defining the spatial resolution of the image. Implementing metal mesh filtering can improve the spatial resolution of our THz imaging system.
SPICE: exploration and analysis of post-cytometric complex multivariate datasets.
Roederer, Mario; Nozzi, Joshua L; Nason, Martha C
2011-02-01
Polychromatic flow cytometry results in complex, multivariate datasets. To date, tools for the aggregate analysis of these datasets across multiple specimens grouped by different categorical variables, such as demographic information, have not been optimized. Often, the exploration of such datasets is accomplished by visualization of patterns with pie charts or bar charts, without easy access to statistical comparisons of measurements that comprise multiple components. Here we report on algorithms and a graphical interface we developed for these purposes. In particular, we discuss thresholding necessary for accurate representation of data in pie charts, the implications for display and comparison of normalized versus unnormalized data, and the effects of averaging when samples with significant background noise are present. Finally, we define a statistic for the nonparametric comparison of complex distributions to test for difference between groups of samples based on multi-component measurements. While originally developed to support the analysis of T cell functional profiles, these techniques are amenable to a broad range of datatypes. Published 2011 Wiley-Liss, Inc.
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
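The estimation-error effect can be illustrated with a toy simulation such as the Python sketch below, in which the "assets" stand for producers and the "returns" for a hypothetical risk yield; the true means, covariance, and window lengths are invented, and the comparison is only meant to show that weights estimated from a short window can underperform equal allocation out of sample.

# Toy simulation (illustrative only): estimation error can make "optimal"
# mean-variance weights perform worse out of sample than equal allocation.
import numpy as np

rng = np.random.default_rng(4)
n_assets, window, horizon = 8, 24, 240

mu_true = rng.uniform(0.01, 0.05, n_assets)            # hypothetical true mean "risk yield"
Sigma_true = 0.02 * (np.eye(n_assets) + 0.3)           # mildly correlated producers

def draw(n):
    return rng.multivariate_normal(mu_true, Sigma_true, size=n)

est = draw(window)                                     # short estimation window
mu_hat, Sigma_hat = est.mean(axis=0), np.cov(est, rowvar=False)

w_mv = np.linalg.solve(Sigma_hat, mu_hat)              # mean-variance (tangency-style) weights
w_mv /= w_mv.sum()
w_eq = np.full(n_assets, 1.0 / n_assets)               # simple heuristic: equal allocation

future = draw(horizon)
for name, w in [("mean-variance", w_mv), ("equal", w_eq)]:
    r = future @ w
    print(f"{name:13s} out-of-sample mean/std: {r.mean():.4f} / {r.std():.4f}")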
NASA Technical Reports Server (NTRS)
Jackson, J. K.; Yakut, M. M.
1976-01-01
An all-important first step in the development of the Spacelab Life Science Laboratory is the design of the Biological Specimen Holding Facility (BSHF) which will provide accommodation for living specimens for life science research in orbit. As a useful tool in the understanding of physiological and biomedical changes produced in the weightless environment, the BSHF will enable biomedical researchers to conduct in-orbit investigations utilizing techniques that may be impossible to perform on human subjects. The results of a comprehensive study for defining the BSHF, description of its experiment support capabilities, and the planning required for its development are presented. Conceptual designs of the facility, its subsystems and interfaces with the Orbiter and Spacelab are included. Environmental control, life support and data management systems are provided. Interface and support equipment required for specimen transfer, surgical research, and food, water and waste storage is defined. New and optimized concepts are presented for waste collection, feces and urine separation and sampling, environmental control, feeding and watering, lighting, data management and other support subsystems.
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient, lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A small mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer, thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis), and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator, and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes. This tool easily allows the user to manipulate the driving temperature regions, thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
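The two design-parameter schemes mentioned above can be pictured with a short Python sketch like the one below; the radiator width, station heights, and spline control points are hypothetical, and the enclosed-area value is only a crude mass proxy, not the paper's finite element objective.

# Minimal sketch (illustrative geometry only): the two radiator boundary
# parameterizations -- discrete heights along the width, and a smooth spline
# through a few control points -- evaluated on a common grid.
import numpy as np
from scipy.interpolate import CubicSpline

width = 2.0                                     # hypothetical radiator width (m)
x_grid = np.linspace(0.0, width, 101)

# Scheme 1: variable height at discrete stations (piecewise-linear boundary)
stations = np.linspace(0.0, width, 9)
heights = np.array([0.50, 0.48, 0.45, 0.40, 0.34, 0.28, 0.22, 0.18, 0.15])
boundary_heights = np.interp(x_grid, stations, heights)

# Scheme 2: spline through a handful of control points (smooth boundary)
ctrl_x = np.linspace(0.0, width, 5)
ctrl_y = np.array([0.50, 0.42, 0.32, 0.22, 0.15])
boundary_spline = CubicSpline(ctrl_x, ctrl_y)(x_grid)

# A genetic-algorithm individual would carry either `heights` or `ctrl_y` as
# its genes; the enclosed area below is a crude stand-in for panel mass.
dx = np.diff(x_grid)
for name, b in [("heights", boundary_heights), ("spline", boundary_spline)]:
    area = float(np.sum((b[:-1] + b[1:]) * dx) / 2.0)
    print(f"{name:8s} enclosed area: {area:.3f} m^2")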
Hennig, Stefanie; Waterhouse, Timothy H; Bell, Scott C; France, Megan; Wainwright, Claire E; Miller, Hugh; Charles, Bruce G; Duffull, Stephen B
2007-01-01
What is already known about this subject:
• Itraconazole is a triazole antifungal used in the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF).
• The pharmacokinetic (PK) properties of this drug and its active metabolite have been described before, mostly in healthy volunteers.
• However, only sparse information from case reports was available on the PK properties of this drug in CF patients at the start of our study.
What this study adds:
• This study reports for the first time the population pharmacokinetic properties of itraconazole and a known active metabolite, hydroxy-itraconazole, in adult patients with CF.
• As a result, this study offers new dosing approaches and their pharmacoeconomic impact, as well as a PK model for therapeutic drug monitoring of this drug in this patient group.
• Furthermore, it is an example of a successful D-optimal design application in a clinical setting.
Aim: The primary objective of the study was to estimate the population pharmacokinetic parameters for itraconazole and hydroxy-itraconazole, in particular the relative oral bioavailability of the capsule compared with the solution, in adult cystic fibrosis patients, in order to develop new dosing guidelines. A secondary objective was to evaluate the performance of a population optimal design.
Methods: The blood sampling times for the population study were optimized previously using POPT v.2.0. The design was based on the administration of solution and capsules to 30 patients in a cross-over study. Prior information suggested that itraconazole is generally well described by a two-compartment disposition model with either linear or saturable elimination. The pharmacokinetics of itraconazole and the metabolite were modelled simultaneously using NONMEM. Dosing schedules were simulated to assess their ability to achieve a trough target concentration of 0.5 mg l−1.
Results: Out of 241 blood samples, 94% were taken within the defined optimal sampling windows. A two-compartment model with first-order absorption and elimination best described itraconazole kinetics, with first-order metabolism to the hydroxy-metabolite. For itraconazole, the absorption rate constants (between-subject variability) for capsule and solution were 0.0315 h−1 (91.9%) and 0.125 h−1 (106.3%), respectively, and the relative bioavailability of the capsule was 0.82 (62.3%) (confidence interval 0.36, 1.97) compared with the solution. There was no evidence of nonlinearity. Simulations from the final model showed that a dosing schedule of 500 mg twice daily for both formulations provided the highest chance of target success.
Conclusion: The optimal design performed well and the pharmacokinetics of itraconazole and hydroxy-itraconazole were described adequately by the model. The relative bioavailability for itraconazole capsules was 82% compared with the solution. PMID:17073891
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
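A minimal Python sketch of the D-optimality idea for the Verhulst-Pearl logistic example is given below; the parameter values, noise level, and candidate schedules are assumptions, and the sensitivities are taken by finite differences rather than through the Prohorov-metric framework of the paper.

# Minimal sketch: compare two sampling schedules for the logistic model via the
# determinant of the Fisher information matrix (D-optimality).
import numpy as np

theta = np.array([0.7, 17.5, 0.1])         # r, K, x0 (illustrative values)
sigma = 0.5                                # assumed constant measurement error

def logistic(t, p):
    r, K, x0 = p
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def sensitivities(t, p, h=1e-6):
    cols = []
    for i in range(len(p)):
        dp = np.zeros_like(p); dp[i] = h * max(abs(p[i]), 1.0)
        cols.append((logistic(t, p + dp) - logistic(t, p - dp)) / (2 * dp[i]))
    return np.column_stack(cols)           # n_times x n_params

def d_criterion(times):
    S = sensitivities(np.asarray(times, float), theta)
    fim = S.T @ S / sigma ** 2
    return np.linalg.det(fim)

uniform = np.linspace(1, 25, 8)            # evenly spaced samples
early_only = [1, 2, 3, 4, 5, 6, 7, 8]      # all samples early, before the inflection

print("D-criterion, uniform schedule   :", d_criterion(uniform))
print("D-criterion, early-only schedule:", d_criterion(early_only))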
Advanced Structural Optimization Under Consideration of Cost Tracking
NASA Astrophysics Data System (ADS)
Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.
2014-06-01
In order to improve the design process of launcher configurations in the early development phase, the Multidisciplinary Optimization (MDO) software was developed. The tool combines efficient software tools such as Optimal Design Investigations (ODIN) for structural optimization and the Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs for strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and intended future improvements concerning cost optimization are also indicated.
Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D
2013-10-01
Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.
Computer-aided resource planning and scheduling for radiological services
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.
1996-05-01
There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.
Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz
2015-06-10
A versatile numerical model for spectral transmission/reflection and group delay characteristic analysis, and for the design of tapered fiber Bragg gratings (TFBGs), is presented. This approach ensures flexibility in defining both the distribution of the refractive index change of the gratings (including apodization) and the shape of the taper profile. Additionally, the sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, was also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width while simultaneously minimizing the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of model versatility and the possibility of defining other objective functions for the Pareto optimization creates a universal tool for TFBG analysis and design.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of each approach are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Salzar, Robert S.
1996-01-01
A user's guide for the computer program OPTCOMP2 is presented in this report. This program provides a capability to optimize the fabrication or service-induced residual stresses in unidirectional metal matrix composites subjected to combined thermomechanical axisymmetric loading by altering the processing history, as well as through the microstructural design of interfacial fiber coatings. The user specifies the initial architecture of the composite and the load history, with the constituent materials being elastic, plastic, viscoplastic, or as defined by the 'user-defined' constitutive model, in addition to the objective function and constraints, through a user-friendly data input interface. The optimization procedure is based on an efficient solution methodology for the inelastic response of a fiber/interface layer(s)/matrix concentric cylinder model where the interface layers can be either homogeneous or heterogeneous. The response of heterogeneous layers is modeled using Aboudi's three-dimensional method of cells micromechanics model. The commercial optimization package DOT is used for the nonlinear optimization problem. The solution methodology for the arbitrarily layered cylinder is based on the local-global stiffness matrix formulation and Mendelson's iterative technique of successive elastic solutions developed for elastoplastic boundary-value problems. The optimization algorithm employed in DOT is based on the method of feasible directions.
Taking Stock of Unrealistic Optimism.
Shepperd, James A; Klein, William M P; Waters, Erika A; Weinstein, Neil D
2013-07-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type: the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention.
Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk
NASA Astrophysics Data System (ADS)
Long, C. C.; Marsden, A. L.; Bazilevs, Y.
2014-10-01
In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas
2017-03-01
Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been achieved in the last years. Nevertheless, sample preparation is also of major importance and the optimal sample preparation strategy still has not been defined. While conventional histology knows a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, the potential use for elemental distribution studies is questionable due to a large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues as well as FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkaline metals. When analyzing transition metals, FFPE samples can give comparable results to snap-frozen tissues. Graphical abstract Sample preparation strategies for biological tissues are compared with regard to the elemental distributions and average trace element concentrations.
Storage Optimization of Educational System Data
ERIC Educational Resources Information Center
Boja, Catalin
2006-01-01
Methods used to minimize the size of data files are described. Indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…
Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…
Encapsulated Multifunction Corrosion Inhibitive Primer.
1983-11-01
Optimization of Microcapsule Preparation; Optimized Procedure for Polyurea Microencapsulation. ... microcapsules, which suggests that a nearly quantitative yield of microencapsulated inhibitor was achieved. The burst ratio is defined as the conductivity after... effectiveness of the microencapsulation approach in achieving sustained release. 4. Loading Determination of Polyurea Microcapsules: In studies relating
Dynamically Achieved Active Site Precision in Enzyme Catalysis
2015-01-01
Conspectus The grand challenge in enzymology is to define and understand all of the parameters that contribute to enzymes’ enormous rate accelerations. The property of hydrogen tunneling in enzyme reactions has moved the focus of research away from an exclusive focus on transition state stabilization toward the importance of the motions of the heavy atoms of the protein, a role for reduced barrier width in catalysis, and the sampling of a protein conformational landscape to achieve a family of protein substates that optimize enzyme–substrate interactions and beyond. This Account focuses on a thermophilic alcohol dehydrogenase for which the chemical step of hydride transfer is rate determining across a wide range of experimental conditions. The properties of the chemical coordinate have been probed using kinetic isotope effects, indicating a transition in behavior below 30 °C that distinguishes nonoptimal from optimal C–H activation. Further, the introduction of single site mutants has the impact of either enhancing or eliminating the temperature dependent transition in catalysis. Biophysical probes, which include time dependent hydrogen/deuterium exchange and fluorescent lifetimes and Stokes shifts, have also been pursued. These studies allow the correlation of spatially resolved transitions in protein motions with catalysis. It is now possible to define a long-range network of protein motions in ht-ADH that extends from a dimer interface to the substrate binding domain across to the cofactor binding domain, over a distance of ca. 30 Å. The ongoing challenge to obtaining spatial and temporal resolution of catalysis-linked protein motions is discussed. PMID:25539048
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.
Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried
2014-01-01
Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.
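For reference, a compact Python sketch of sample entropy with m = 2 and r = 0.2σ (the standard textbook formulation, not the authors' toolbox) is given below; the test series are synthetic.

# Minimal sketch: sample entropy of a series with template length m = 2 and
# threshold r = 0.2 * standard deviation.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= r))
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(5)
rr_regular = np.sin(np.linspace(0, 20 * np.pi, 1000))          # highly regular series
rr_noisy = rr_regular + 0.5 * rng.normal(size=1000)            # irregular series
print("SampEn regular:", round(sample_entropy(rr_regular), 3))
print("SampEn noisy  :", round(sample_entropy(rr_noisy), 3))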
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
NASA Astrophysics Data System (ADS)
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, which are low-cost, timely, and of moderate-to-low spatial resolution, were used over the North China Plain (NCP) study region. Mixed-pixel spectral decomposition was first carried out to extract a useful regionalized indicator parameter (RIP) from the initially selected indicators, namely the fraction (percentage) of winter wheat planting area in each pixel, which served as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing this a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, an adaptive analysis and decision strategy enabled the optimal local spatial prediction and the gridded extrapolation results to implement an adaptive reporting pattern of spatial sampling in accordance with the reporting units, in order to satisfy the practical needs of sampling surveys.
Processor design optimization methodology for synthetic vision systems
NASA Astrophysics Data System (ADS)
Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.
1997-06-01
Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
Forecasting Electricity Prices in an Optimization Hydrothermal Problem
NASA Astrophysics Data System (ADS)
Matías, J. M.; Bayón, L.; Suárez, P.; Argüelles, A.; Taboada, J.
2007-12-01
This paper presents an economic dispatch algorithm for a hydrothermal system within the framework of a competitive and deregulated electricity market. The optimization problem of a single firm is described, whose objective function is defined as profit maximization. Since next-day price forecasting is a crucial aspect, this paper proposes an efficient yet highly accurate new next-day price forecasting method based on a functional time series approach that exploits the daily seasonal structure of the price series. For the optimization problem, an optimal control technique is applied and Pontryagin's theorem is employed.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is used to set the internal parameters of TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-12-01
The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contributions were previously derived in collaboration with E. Aurell.
Integrated multidisciplinary optimization of rotorcraft: A plan for development
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Editor); Mantay, Wayne R. (Editor)
1989-01-01
This paper describes a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed, validation strategies are described, and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, significant progress has been made, principally in the areas of single discipline optimization. Accomplishments are described in areas of rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
Anand, T S; Sujatha, S
2017-08-01
Polycentric knees for transfemoral prostheses have a variety of geometries, but a survey of literature shows that there are few ways of comparing their performance. Our objective was to present a method for performance comparison of polycentric knee geometries and design a new geometry. In this work, we define parameters to compare various commercially available prosthetic knees in terms of their stability, toe clearance, maximum flexion, and so on and optimize the parameters to obtain a new knee design. We use the defined parameters and optimization to design a new knee geometry that provides the greater stability and toe clearance necessary to navigate uneven terrain which is typically encountered in developing countries. Several commercial knees were compared based on the defined parameters to determine their suitability for uneven terrain. A new knee was designed based on optimization of these parameters. Preliminary user testing indicates that the new knee is very stable and easy to use. The methodology can be used for better knee selection and design of more customized knee geometries. Clinical relevance The method provides a tool to aid in the selection and design of polycentric knees for transfemoral prostheses.
Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.
2016-01-01
The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation for the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that will enable the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.
A business planning model to identify new safety net clinic locations.
Langabeer, James; Helton, Jeffrey; DelliFraine, Jami; Dotson, Ebbin; Watts, Carolyn; Love, Karen
2014-01-01
Community health clinics serving the poor and underserved are geographically expanding due to changes in U.S. health care policy. This paper describes the experience of a collaborative alliance of health care providers in a large metropolitan area that developed a conceptual and mathematical decision model to guide decisions on expanding its network of community health clinics. Community stakeholders participated in a collaborative process that defined constructs they deemed important in guiding decisions on the location of community health clinics. This collaboration also defined key variables within each construct. Scores for variables within each construct were then totaled and weighted into a community-specific optimal space planning equation. The analysis relied entirely on secondary data available from published sources. The model built from this collaboration revolved around the constructs of demand, sustainability, and competition. It used publicly available data defining variables within each construct to arrive at an optimal location that maximized demand and sustainability and minimized competition. Safety net clinic planners and community stakeholders can use this model to analyze demographic and utilization data to optimize capacity expansion to serve uninsured and Medicaid populations. Communities can use this innovative model to develop a locally relevant clinic location-planning framework.
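The construct-weighting idea described above can be sketched as a simple scoring calculation. The candidate sites, variable scores, and weights below are entirely hypothetical; only the construct names (demand, sustainability, competition) come from the abstract.

# Hypothetical illustration of a weighted construct-scoring model for ranking
# candidate clinic locations; all numbers are invented for the example.
candidate_sites = {
    "site_A": {"demand": 0.8, "sustainability": 0.6, "competition": 0.3},
    "site_B": {"demand": 0.5, "sustainability": 0.9, "competition": 0.1},
}
weights = {"demand": 0.5, "sustainability": 0.3, "competition": 0.2}

def site_score(constructs, weights):
    # Higher demand and sustainability raise the score; competition lowers it.
    return (weights["demand"] * constructs["demand"]
            + weights["sustainability"] * constructs["sustainability"]
            - weights["competition"] * constructs["competition"])

best = max(candidate_sites, key=lambda s: site_score(candidate_sites[s], weights))
print(best)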
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which were produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how can these 'representation' problems be quantified, and how can this knowledge be incorporated into model building? The author has developed a generic function called 'spsample.prob' (in the Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average of a kernel density estimation (geographical spreading of points, analysed using the spatstat package in R) and a MaxEnt analysis (feature-space spreading of points, analysed using the MaxEnt software primarily used for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (survey costs are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature-space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), which are also available via the GSIF package.
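A rough Python analogue of the idea behind 'spsample.prob' is sketched below, using synthetic sample locations and covariates. Gaussian kernel density estimates stand in for both the spatstat-based geographic analysis and the MaxEnt feature-space analysis named above, so this is a conceptual sketch rather than a port of the R function.

import numpy as np
from scipy.stats import gaussian_kde

# Conceptual analogue of combining geographic spreading with feature-space
# spreading to score how well existing samples cover an area. The data are synthetic.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(2, 150))              # sample locations (clustered data would score lower)
covariates = rng.normal(size=(3, 150))               # e.g. elevation, NDVI, slope at those locations

geo_density = gaussian_kde(xy)(xy)                   # geographical spreading of the points
feat_density = gaussian_kde(covariates)(covariates)  # feature-space spreading of the points

def normalize(d):
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

# Average the two normalized densities; low values flag locations/features that
# the sampling plan covers poorly (cf. the 'iprob' map described above).
inclusion_score = 0.5 * (normalize(geo_density) + normalize(feat_density))
print(inclusion_score.min(), inclusion_score.max())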
Propensity Scores in Pharmacoepidemiology: Beyond the Horizon.
Jackson, John W; Schmid, Ian; Stuart, Elizabeth A
2017-12-01
Propensity score methods have become commonplace in pharmacoepidemiology over the past decade. Their adoption has confronted formidable obstacles that arise from pharmacoepidemiology's reliance on large healthcare databases of considerable heterogeneity and complexity. These include identifying clinically meaningful samples, defining treatment comparisons, and measuring covariates in ways that respect sound epidemiologic study design. Additional complexities involve correctly modeling treatment decisions in the face of variation in healthcare practice, and dealing with missing information and unmeasured confounding. In this review, we examine the application of propensity score methods in pharmacoepidemiology with particular attention to these and other issues, with an eye towards standards of practice, recent methodological advances, and opportunities for future progress. Propensity score methods have matured in ways that can advance comparative effectiveness and safety research in pharmacoepidemiology. These include natural extensions for categorical treatments, matching algorithms that can optimize sample size given design constraints, weighting estimators that asymptotically target matched and overlap samples, and the incorporation of machine learning to aid in covariate selection and model building. These recent and encouraging advances should be further evaluated through simulation and empirical studies, but nonetheless represent a bright path ahead for the observational study of treatment benefits and harms.
3D Material Response Analysis of PICA Pyrolysis Experiments
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
The PICA decomposition experiments of Bessire and Minton are investigated using 3D material response analysis. The steady thermoelectric equations have been added to the CHAR code to enable analysis of the Joule-heated experiments and the DAKOTA optimization code is used to define the voltage boundary condition that yields the experimentally observed temperature response. This analysis has identified a potential spatial non-uniformity in the PICA sample temperature driven by the cooled copper electrodes and thermal radiation from the surface of the test article (Figure 1). The non-uniformity leads to a variable heating rate throughout the sample volume that has an effect on the quantitative results of the experiment. Averaging the results of integrating a kinetic reaction mechanism with the heating rates seen across the sample volume yield a shift of peak species production to lower temperatures that is more significant for higher heating rates (Figure 2) when compared to integrating the same mechanism at the reported heating rate. The analysis supporting these conclusions will be presented along with a proposed analysis procedure that permits quantitative use of the existing data. Time permitting, a status on the in-development kinetic decomposition mechanism based on this data will be presented as well.
Vidal, J L Martínez; Vega, A Belmonte; López, F J Sánchez; Frenich, A Garrido
2004-10-01
A method has been developed for the simultaneous determination of paraquat (PQ), deiquat (DQ), chlormequat (CQ) and mepiquat (MQ) in water samples by liquid chromatography (LC) coupled with electrospray ionization mass spectrometry (MS). The LC separations of the target compounds, as well as their MS parameters, were optimized in order to improve selectivity and sensitivity. Separation was carried out in a Xterra C8 column, using as mobile phase methanol-heptafluorobutyric acid (HFBA) in isocratic mode. The molecular ion was selected for the quantitation in selected ion monitoring (SIM) mode. Off-line solid-phase extraction (SPE) was applied with silica cartridges in order to preconcentrate the compounds from waters. Detection limits were in the range 0.02-0.40 microg l(-1). Recovery range varied between 89 and 99.5% with precision values lower than 6%. The method has been applied successfully to the analysis of both surface and groundwater samples from agricultural areas of Andalusia (Spain), using well defined internal quality control (IQC) criteria. The results revealed the presence of deiquat and paraquat in some samples.
Young, William F; Stanson, Anthony W
2009-01-01
Adrenal venous sampling (AVS) is the criterion standard to distinguish between unilateral and bilateral adrenal disease in patients with primary aldosteronism. The keys to successful AVS include appropriate patient selection, careful patient preparation, focused technical expertise, defined protocol, and accurate data interpretation. The use of AVS should be based on patient preferences, patient age, clinical comorbidities, and the clinical probability of finding an aldosterone-producing adenoma. AVS is optimally performed in the fasting state in the morning. AVS is an intricate procedure because the right adrenal vein is small and may be difficult to locate - the success rate depends on the proficiency of the angiographer. The key factors that determine the successful catheterization of both adrenal veins are experience, dedication and repetition. With experience, and focusing the expertise to 1 or 2 radiologists at a referral centre, the AVS success rate can be as high as 96%. A centre-specific, written protocol is mandatory. The protocol should be developed by an interested group of endocrinologists, radiologists and laboratory personnel. Safeguards should be in place to prevent mislabelling of the blood tubes in the radiology suite and to prevent sample mix-up in the laboratory.
Spatial averaging for small molecule diffusion in condensed phase environments
NASA Astrophysics Data System (ADS)
Plattner, Nuria; Doll, J. D.; Meuwly, Markus
2010-07-01
Spatial averaging is a new approach for sampling rare-event problems. The approach modifies the importance function which improves the sampling efficiency while keeping a defined relation to the original statistical distribution. In this work, spatial averaging is applied to multidimensional systems for typical problems arising in physical chemistry. They include (I) a CO molecule diffusing on an amorphous ice surface, (II) a hydrogen molecule probing favorable positions in amorphous ice, and (III) CO migration in myoglobin. The systems encompass a wide range of energy barriers and for all of them spatial averaging is found to outperform conventional Metropolis Monte Carlo. It is also found that optimal simulation parameters are surprisingly similar for the different systems studied, in particular, the radius of the point cloud over which the potential energy function is averaged. For H2 diffusing in amorphous ice it is found that facile migration is possible which is in agreement with previous suggestions from experiment. The free energy barriers involved are typically lower than 1 kcal/mol. Spatial averaging simulations for CO in myoglobin are able to locate all currently characterized metastable states. Overall, it is found that spatial averaging considerably improves the sampling of configurational space.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
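A minimal sketch of the voxel-sampling idea is given below: at each iteration the objective and gradient are estimated from a random subset of voxels only. The quadratic dose objective, dose-influence matrix, and step size are synthetic stand-ins, not the paper's clinical model or its delta-bar-delta variant.

import numpy as np

# Minimal sketch of voxel-sampled gradient descent for a quadratic IMRT-style
# objective sum_v (D[v, :] @ w - d_target[v])^2. D, d_target, and the step size
# are synthetic stand-ins for a clinical dose model.
rng = np.random.default_rng(1)
n_voxels, n_beamlets = 5000, 200
D = rng.random((n_voxels, n_beamlets)) * 0.01      # dose-influence matrix
d_target = rng.uniform(0.5, 1.0, size=n_voxels)    # prescribed voxel doses

w = np.zeros(n_beamlets)                           # beamlet weights
sample_frac, step = 0.05, 0.5                      # 5% of voxels per iteration

for it in range(2000):
    idx = rng.choice(n_voxels, size=int(sample_frac * n_voxels), replace=False)
    resid = D[idx] @ w - d_target[idx]             # residuals on the sampled voxels only
    grad = 2.0 * D[idx].T @ resid / len(idx)       # gradient estimate from the sample
    w = np.maximum(w - step * grad, 0.0)           # keep beamlet weights non-negative

full_obj = np.mean((D @ w - d_target) ** 2)
print(f"mean squared dose error after sampling-based descent: {full_obj:.4f}")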
Taking Stock of Unrealistic Optimism
Shepperd, James A.; Klein, William M. P.; Waters, Erika A.; Weinstein, Neil D.
2015-01-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type—the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention. PMID:26045714
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with flat bottom and 10 ml volume is the best structure to achieve the highest signal out of samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Arnold, Benjamin F; Galiani, Sebastian; Ram, Pavani K; Hubbard, Alan E; Briceño, Bertha; Gertler, Paul J; Colford, John M
2013-02-15
Many community-based studies of acute child illness rely on cases reported by caregivers. In prior investigations, researchers noted a reporting bias when longer illness recall periods were used. The use of recall periods longer than 2-3 days has been discouraged to minimize this reporting bias. In the present study, we sought to determine the optimal recall period for illness measurement when accounting for both bias and variance. Using data from 12,191 children less than 24 months of age collected in 2008-2009 from Himachal Pradesh in India, Madhya Pradesh in India, Indonesia, Peru, and Senegal, we calculated bias, variance, and mean squared error for estimates of the prevalence ratio between groups defined by anemia, stunting, and underweight status to identify optimal recall periods for caregiver-reported diarrhea, cough, and fever. There was little bias in the prevalence ratio when a 7-day recall period was used (<10% in 35 of 45 scenarios), and the mean squared error was usually minimized with recall periods of 6 or more days. Shortening the recall period from 7 days to 2 days required sample-size increases of 52%-92% for diarrhea, 47%-61% for cough, and 102%-206% for fever. In contrast to the current practice of using 2-day recall periods, this work suggests that studies should measure caregiver-reported illness with a 7-day recall period.
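The bias-variance-MSE trade-off underlying the choice of recall period can be illustrated with a small simulation, sketched below. The daily illness rates, the per-day recall decay, and the group sizes are invented; only the logic of comparing bias, variance, and mean squared error of a prevalence ratio across recall windows follows the abstract.

import numpy as np

# Simulated illustration: longer recall windows add reporting bias (a simple
# per-day decay in recall accuracy) but average over more person-days, which
# reduces the variance of the prevalence ratio between two groups.
rng = np.random.default_rng(2)

def true_period_ratio(recall_days, p_exp=0.10, p_unexp=0.05):
    # Period-prevalence ratio with perfect reporting, used as the reference.
    return (1 - (1 - p_exp) ** recall_days) / (1 - (1 - p_unexp) ** recall_days)

def estimate_prevalence_ratio(recall_days, n_per_group=500):
    true_rates = {"exposed": 0.10, "unexposed": 0.05}   # daily illness probability
    recall_prob = 0.98 ** np.arange(recall_days)        # farther-back days under-reported
    prevalences = {}
    for group, rate in true_rates.items():
        ill_days = rng.random((n_per_group, recall_days)) < rate
        reported = ill_days & (rng.random((n_per_group, recall_days)) < recall_prob)
        prevalences[group] = reported.any(axis=1).mean()
    return prevalences["exposed"] / prevalences["unexposed"]

for days in (2, 7, 14):
    estimates = np.array([estimate_prevalence_ratio(days) for _ in range(300)])
    bias = estimates.mean() - true_period_ratio(days)
    mse = bias ** 2 + estimates.var()
    print(f"{days:2d}-day recall: bias={bias:+.3f}, var={estimates.var():.4f}, MSE={mse:.4f}")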
Merli, Marco; Galli, Laura; Castagna, Antonella; Salpietro, Stefania; Gianotti, Nicola; Messina, Emanuela; Poli, Andrea; Morsica, Giulia; Bagaglio, Sabrina; Cernuschi, Massimo; Bigoloni, Alba; Uberti-Foppa, Caterina; Lazzarin, Adriano; Hasson, Hamid
2016-04-01
We determined the diagnostic accuracy and optimal cut-offs of three indirect fibrosis biomarkers (APRI, FIB-4, Forns) compared with liver stiffness (LS) for the detection of liver cirrhosis in HIV/HCV-coinfected patients. An observational retrospective study of HIV/HCV-coinfected patients with a concomitant LS measurement and APRI, FIB-4 and Forns was performed. The presence of liver cirrhosis was defined as LS ≥13 kPa. The diagnostic accuracy and optimal cut-off values, compared with the LS categorization (<13 vs ≥13 kPa), were determined by receiver operating characteristic (ROC) curves. The study sample included 646 patients. The areas under the ROC curve (95% confidence intervals) for the detection of liver cirrhosis were 0.84 (0.81-0.88), 0.87 (0.84-0.91) and 0.87 (0.84-0.90) for APRI, FIB-4 and Forns, respectively. According to the optimal cut-off values for liver cirrhosis (≥0.97 for APRI, ≥2.02 for FIB-4 and ≥7.8 for Forns), 80%, 80% and 82% of subjects were correctly classified by the three indirect fibrosis biomarkers, respectively. Misclassifications were mostly due to false-positive cases. The study suggests that indirect fibrosis biomarkers can help clinicians exclude liver cirrhosis in the management of HIV/HCV-coinfected patients, reducing the frequency of more expensive or invasive assessments.
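The ROC-based selection of an optimal cut-off described above can be sketched as follows, using a synthetic biomarker in place of APRI, FIB-4 or Forns and a simulated cirrhosis label in place of LS ≥13 kPa. The cut-off is chosen here by Youden's index, which is one common criterion and only an assumption about how the study's cut-points were derived.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic biomarker with higher values in "cirrhosis" cases.
rng = np.random.default_rng(3)
cirrhosis = rng.random(646) < 0.25
biomarker = np.where(cirrhosis,
                     rng.lognormal(mean=0.4, sigma=0.5, size=646),
                     rng.lognormal(mean=-0.3, sigma=0.5, size=646))

fpr, tpr, thresholds = roc_curve(cirrhosis, biomarker)
auc = roc_auc_score(cirrhosis, biomarker)

# Youden's J picks the threshold maximizing sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC={auc:.2f}, optimal cut-off={thresholds[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")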
Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O.P.; Singh, Bhupinder
2016-01-01
The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett–Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50–800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. PMID:26912808
NASA Astrophysics Data System (ADS)
Pedersen, N. L.
2015-06-01
The strength of a gear is typically defined relative to durability (pitting) and load capacity (tooth-breakage). Tooth-breakage is controlled by the root shape and this gear part can be designed because there is no contact between gear pairs here. The shape of gears is generally defined by different standards, with the ISO standard probably being the most common one. Gears are manufactured using two principally different tools: rack tools and gear tools. In this work, the bending stress of involute teeth is minimized by shape optimization made directly on the final gear. This optimized shape is then used to find the cutting tool (the gear envelope) that can create this optimized gear shape. A simple but sufficiently flexible root parameterization is applied and emphasis is put on the importance of separating the shape parameterization from the finite element analysis of stresses. Large improvements in the stress level are found.
Defining defect specifications to optimize photomask production and requalification
NASA Astrophysics Data System (ADS)
Fiekowsky, Peter
2006-10-01
Reducing defect repairs and accelerating defect analysis is becoming more important as the total cost of defect repairs on advanced masks increases. Photomask defect specs based on printability, as measured on AIMS microscopes has been used for years, but the fundamental defect spec is still the defect size, as measured on the photomask, requiring the repair of many unprintable defects. ADAS, the Automated Defect Analysis System from AVI is now available in most advanced mask shops. It makes the use of pure printability specs, or "Optimal Defect Specs" practical. This software uses advanced algorithms to eliminate false defects caused by approximations in the inspection algorithm, classify each defect, simulate each defect and disposition each defect based on its printability and location. This paper defines "optimal defect specs", explains why they are now practical and economic, gives a method of determining them and provides accuracy data.
Design of planar microcoil-based NMR probe ensuring high SNR
NASA Astrophysics Data System (ADS)
Ali, Zishan; Poenar, D. P.; Aditya, Sheel
2017-09-01
A microNMR probe for ex vivo applications may consist of at least one microcoil, which can be used as the oscillating magnetic field (MF) generator as well as receiver coil, and a sample holder, with a volume in the range of nanoliters to micro-liters, placed near the microcoil. The Signal-to-Noise ratio (SNR) of such a probe is, however, dependent not only on its design but also on the measurement setup, and the measured sample. This paper introduces a performance factor P independent of both the proton spin density in the sample and the external DC magnetic field, and which can thus assess the performance of the probe alone. First, two of the components of the P factor (inhomogeneity factor K and filling factor η ) are defined and an approach to calculate their values for different probe variants from electromagnetic simulations is devised. A criterion based on dominant component of the magnetic field is then formulated to help designers optimize the sample volume which also affects the performance of the probe, in order to obtain the best SNR for a given planar microcoil. Finally, the P factor values are compared between different planar microcoils with different number of turns and conductor aspect ratios, and planar microcoils are also compared with conventional solenoids. These comparisons highlight which microcoil geometry-sample volume combination will ensure a high SNR under any external setup.
Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O
2009-06-01
Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of more informative and/or less laborious study design of the insulin modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin modified IVGTT were evaluated; (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples/subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvement can be made to the design of the insulin modified IVGTT and that the most important design factor was the placement of sample times followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
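As a toy illustration of the optimal-design idea applied above (choosing sample times to maximize parameter information), the sketch below compares candidate sampling schedules by a D-optimality criterion, the log-determinant of the Fisher information matrix, for a much simpler one-compartment model. The model, parameter values, and noise assumption are invented and bear no relation to the glucose-insulin model or to PopED itself.

import numpy as np

# Toy D-optimal design sketch for C(t) = (dose/V) * exp(-k*t) with additive noise.
dose, V, k, sigma = 100.0, 10.0, 0.2, 0.5

def fisher_logdet(times):
    t = np.asarray(times, dtype=float)
    conc = (dose / V) * np.exp(-k * t)
    # Sensitivities of C(t) with respect to the parameters (V, k).
    dC_dV = -conc / V
    dC_dk = -t * conc
    J = np.column_stack([dC_dV, dC_dk])       # n_samples x n_parameters
    fim = J.T @ J / sigma**2                  # Fisher information matrix
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else -np.inf

schedules = {
    "early only": [0.5, 1.0, 1.5, 2.0],
    "spread out": [0.5, 2.0, 6.0, 12.0],
    "late only":  [8.0, 10.0, 12.0, 14.0],
}
for name, times in schedules.items():
    print(f"{name:11s} log|FIM| = {fisher_logdet(times):.2f}")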
Integrated vision-based GNC for autonomous rendezvous and capture around Mars
NASA Astrophysics Data System (ADS)
Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.
2015-06-01
Integrated GNC (iGNC) is an activity aimed at designing, developing and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars Sample Return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench and, finally, in a dynamic hardware-in-the-loop test bench for assessing the feasibility, performance and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators and orbit baseline proposed in MSRO. The whole rendezvous is based on optical navigation, aided by RF Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or to MAV failures resulting in elliptical target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil the IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities and performance, compatibly with the computational load. At short range, an optimized IP algorithm based on a convex hull has been developed in order to guarantee line-of-sight (LoS) and range measurements from hundreds of metres to capture.
The relationship between creativity and mood disorders
Andreasen, Nancy C.
2008-01-01
Research designed to examine the relationship between creativity and mental illnesses must confront multiple challenges. What is the optimal sample to study? How should creativity be defined? What is the most appropriate comparison group? Only a limited number of studies have examined highly creative individuals using personal interviews and a noncreative comparison group. The majority of these have examined writers. The preponderance of the evidence suggests that in these creative individuals the rate of mood disorder is high, and that both bipolar disorder and unipolar depression are quite common. Clinicians who treat creative individuals with mood disorders must also confront a variety of challenges, including the fear that treatment may diminish creativity. In the case of bipolar disorder, however, it is likely that reducing severe manic episodes may actually enhance creativity in many individuals. PMID:18689294
Kim, Sung Bong; Zhang, Yi; Won, Sang Min; Bandodkar, Amay J; Sekine, Yurina; Xue, Yeguang; Koo, Jahyun; Harshman, Sean W; Martin, Jennifer A; Park, Jeong Min; Ray, Tyler R; Crawford, Kaitlyn E; Lee, Kyu-Tae; Choi, Jungil; Pitsch, Rhonda L; Grigsby, Claude C; Strang, Adam J; Chen, Yu-Yu; Xu, Shuai; Kim, Jeonghyun; Koh, Ahyeon; Ha, Jeong Sook; Huang, Yonggang; Kim, Seung Wook; Rogers, John A
2018-03-01
This paper introduces super absorbent polymer valves and colorimetric sensing reagents as enabling components of soft, skin-mounted microfluidic devices designed to capture, store, and chemically analyze sweat released from eccrine glands. The valving technology enables robust means for guiding the flow of sweat from an inlet location into a collection of isolated reservoirs, in a well-defined sequence. Analysis in these reservoirs involves a color responsive indicator of chloride concentration with a formulation tailored to offer stable operation with sensitivity optimized for the relevant physiological range. Evaluations on human subjects with comparisons against ex situ analysis illustrate the practical utility of these advances. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Online 3D Ear Recognition by Combining Global and Local Features.
Liu, Yahui; Zhang, Bob; Lu, Guangming; Zhang, David
2016-01-01
The three-dimensional shape of the ear has been proven to be a stable candidate for biometric authentication because of its desirable properties such as universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition was described. Based on the dataset collected by our scanner, two novel feature classes were defined from a three-dimensional ear image: the global feature class (empty centers and angles) and local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. Using a large dataset consisting of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, obtaining an equal error rate of 2.2%.
High precision AlGaAsSb ridge-waveguide etching by in situ reflectance monitored ICP-RIE
NASA Astrophysics Data System (ADS)
Tran, N. T.; Breivik, Magnus; Patra, S. K.; Fimland, Bjørn-Ove
2014-05-01
GaSb-based semiconductor diode lasers are promising candidates for light sources working in the mid-infrared wavelength region of 2-5 μm. Using edge emitting lasers with ridge-waveguide structure, light emission with good beam quality can be achieved. Fabrication of the ridge waveguide requires precise etch stop control for optimal laser performance. Simulation results are presented that show the effect of increased confinement in the waveguide when the etch depth is well-defined. In situ reflectance monitoring with a 675 nm-wavelength laser was used to determine the etch stop with high accuracy. Based on the simulations of laser reflectance from a proposed sample, the etching process can be controlled to provide an endpoint depth precision within +/- 10 nm.
Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de
2010-06-30
In this work, the combination of carbon nuclear magnetic resonance ((13)C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, which were selected by Hierarchical Cluster Analysis (HCA) over several months at retail gas stations, into previously quality-defined classes. Following the optimized (13)C NMR-SIMCA algorithm, sensitivity values were obtained in the training set (99.0%), with leave-one-out cross-validation, and in the external prediction set (92.0%). Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices. Copyright 2010 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
McNamara, Luke W.; Braun, Robert D.
2014-01-01
One of the key design objectives of NASA's Orion Exploration Mission 1 (EM-1) is to execute a guided entry trajectory demonstrating GN&C capability. The focus of this paper is defining the flyable entry corridor for EM-1, taking into account multiple subsystem constraints such as complex aerothermal heating constraints and objectives, landing accuracy constraints, structural load limits, Human-System Integration Requirements, Service Module debris disposal limits, and other flight test objectives. During EM-1 Design Analysis Cycle 1, design challenges arose that made defining the flyable entry corridor critical to mission success. This document details the optimization techniques explored for use with the 6-DOF ANTARES simulation to assist in defining the design entry interface state and the entry corridor with respect to key flight test constraints and objectives.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Hyperglycosylated hCG and Placenta Accreta Spectrum.
Einerson, Brett D; Straubhar, Alli; Soisson, Sean; Szczotka, Kathryn; Dodson, Mark K; Silver, Robert M; Soisson, Andrew P
2018-02-28
We aimed to evaluate the relationship between hyperglycosylated human chorionic gonadotropin (hCG-H) and placenta accreta spectrum (PAS) in the second and third trimesters of pregnancy. This was a case-control study of PAS and controls. hCG-H was measured in the second and third trimesters of pregnancy in women with pathologically confirmed cases of PAS and in gestational age-matched controls without PAS. We compared serum hCG-H levels in cases and controls, calculated summary statistics for diagnostic accuracy, and used receiver operating characteristic (ROC) curves to define an optimal cut-point for diagnosis of PAS using hCG-H. Thirty case samples and 30 control samples were evaluated for hCG-H. Mean hCG-H was lower in the case group compared with the control group (7.8 ± 5.9 μg/L vs. 11.8 ± 8.8 μg/L, p = 0.03). At an optimal cut-point for hCG-H of ≤7.6 μg/L, the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and area under the ROC curve were 66.7%, 69.7%, 2.20, 0.48, and 0.68, respectively. Hyperglycosylated hCG levels in the second and third trimesters of pregnancy were lower in patients with PAS than in controls, but hCG-H showed only modest capability as a diagnostic test for PAS. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a single performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance is evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
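The global output-error idea above can be loosely illustrated with a least-squares fit of an airspeed-error model against a GPS-derived reference. This sketch ignores wind and ambient-air corrections, fits airspeed error rather than pressure error directly, and uses entirely synthetic flight data, so it is only a conceptual analogue of the system described in the abstract.

import numpy as np

# Hedged sketch: fit a quadratic airspeed-error model by least squares so that
# corrected airspeed matches a GPS-based reference. All data are synthetic.
rng = np.random.default_rng(4)
v_indicated = np.linspace(60, 180, 120)                       # knots, sweep over the airspeed range
true_error = 0.002 * (v_indicated - 120) ** 2 / 60 + 1.5      # "unknown" position error, knots
v_gps = v_indicated - true_error + rng.normal(0, 0.3, v_indicated.size)  # GPS-derived reference

# Error model e(v) = c0 + c1*v + c2*v^2 fitted to the residual v_indicated - v_gps.
A = np.vander(v_indicated, 3, increasing=True)                # design matrix [1, v, v^2]
coeffs, *_ = np.linalg.lstsq(A, v_indicated - v_gps, rcond=None)
print("estimated error-model coefficients:", np.round(coeffs, 4))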
Nonlinear stability in reaction-diffusion systems via optimal Lyapunov functions
NASA Astrophysics Data System (ADS)
Lombardo, S.; Mulone, G.; Trovato, M.
2008-06-01
We define optimal Lyapunov functions to study nonlinear stability of constant solutions to reaction-diffusion systems. A computable and finite radius of attraction for the initial data is obtained. Applications are given to the well-known Brusselator model and a three-species model for the spatial spread of rabies among foxes.
Episodic and Individual Effects of Elementary Students' Optimal Experience: An HLM Study
ERIC Educational Resources Information Center
Cheng, Chao-Yang; Chen, Sherry Y.; Lin, Sunny S. J.
2017-01-01
The authors defined optimal experience as a functional state of a relatively high level of concentration, time distortion, satisfaction, and enjoyment (Csikszentmihalyi, 1992) and collected data through the Day Reconstruction Method. In three random days, 147 fifth-grade students answered questionnaires for each school event in the previous day…
DOT National Transportation Integrated Search
2000-02-01
This training manual describes the fuzzy logic ramp metering algorithm in detail, as implemented system-wide in the greater Seattle area. The method of defining the inputs to the controller and optimizing the performance of the algorithm is explained...
ERIC Educational Resources Information Center
Ansari, Fazel; Seidenberg, Ulrich
2016-01-01
This paper discusses the complementarity of human and cyber physical production systems (CPPS). The discourse of complementarity is elaborated by defining five criteria for comparing the characteristics of human and CPPS. Finally, a management portfolio matrix is proposed for examining the feasibility of optimal collaboration between them. The…
Comparison of four methods to assess colostral IgG concentration in dairy cows.
Chigerwe, Munashe; Tyler, Jeff W; Middleton, John R; Spain, James N; Dill, Jeffrey S; Steevens, Barry J
2008-09-01
To determine sensitivity and specificity of 4 methods to assess colostral IgG concentration in dairy cows and determine the optimal cutpoint for each method. Cross-sectional study. 160 Holstein dairy cows. 171 composite colostrum samples collected within 2 hours after parturition were used in the study. Test methods used to estimate colostral IgG concentration consisted of weight of the first milking, 2 hydrometers, and an electronic refractometer. Results of the test methods were compared with colostral IgG concentration determined by means of radial immunodiffusion. For each method, sensitivity and specificity for detecting colostral IgG concentration < 50 g/L were calculated across a range of potential cutpoints, and the optimal cutpoint for each test was selected to maximize sensitivity and specificity. At the optimal cutpoint for each method, sensitivity for weight of the first milking (0.42) was significantly lower than sensitivity for each of the other 3 methods (hydrometer 1, 0.75; hydrometer 2, 0.76; refractometer, 0.75), but no significant differences were identified among the other 3 methods with regard to sensitivity. Specificities at the optimal cutpoint were similar for all 4 methods. Results suggested that use of either hydrometer or the electronic refractometer was an acceptable method of screening colostrum for low IgG concentration; however, the manufacturer-defined scale for both hydrometers overestimated colostral IgG concentration. Use of weight of the first milking as a screening test to identify bovine colostrum with inadequate IgG concentration could not be justified because of the low sensitivity.
NASA Astrophysics Data System (ADS)
Fefer, M.; Dogan, M. S.; Herman, J. D.
2017-12-01
Long-term shifts in the timing and magnitude of reservoir inflows will potentially have significant impacts on water supply reliability in California, though projections remain uncertain. Here we assess the vulnerability of the statewide system to changes in total annual runoff (a function of precipitation) and the fraction of runoff occurring during the winter months (primarily a function of temperature). An ensemble of scenarios is sampled using a bottom-up approach and compared to the most recent available streamflow projections from the state's 4th Climate Assessment. We evaluate these scenarios using a new open-source version of the CALVIN model, a network flow optimization model encompassing roughly 90% of the urban and agricultural water demands in California, which is capable of running scenario ensembles on a high-performance computing cluster. The economic representation of water demand in the model yields several advantages for this type of analysis: optimized reservoir operating policies to minimize shortage cost and the marginal value of adaptation opportunities, defined by shadow prices on infrastructure and regulatory constraints. Results indicate a shift in optimal reservoir operations and high marginal value of additional reservoir storage in the winter months. The collaborative management of reservoirs in CALVIN yields increased storage in downstream reservoirs to store the increased winter runoff. This study contributes an ensemble evaluation of a large-scale network model to investigate uncertain climate projections, and an approach to interpret the results of economic optimization through the lens of long-term adaptation strategies.
The primer vector in linear, relative-motion equations [spacecraft trajectory optimization]
NASA Technical Reports Server (NTRS)
1980-01-01
Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
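A minimal numerical sketch of this parameterization, assuming unit-volume center and surround Gaussians so that the radial frequency response is R(w) = exp(-(sc*w)^2/2) - b*exp(-(ss*w)^2/2); the snippet classifies a filter as low-pass or bandpass and reports the optimal frequency and optimal gain. The parameter values are illustrative only and the closed-form stationary point simply follows from setting dR/dw = 0.

```python
# Minimal sketch (unit-volume Gaussians assumed; not the paper's reformulation).
# R(w) = exp(-(sc*w)**2 / 2) - b * exp(-(ss*w)**2 / 2); bandpass when b * ss**2 > sc**2.
import numpy as np

def dog_response(w, sc, ss, b):
    return np.exp(-(sc * w) ** 2 / 2) - b * np.exp(-(ss * w) ** 2 / 2)

def classify(sc, ss, b):
    if b * ss ** 2 <= sc ** 2:
        return "low-pass", 0.0, 1.0
    # stationary point of R(w) for w > 0, obtained by setting dR/dw = 0
    w_opt = np.sqrt(2 * np.log(b * ss ** 2 / sc ** 2) / (ss ** 2 - sc ** 2))
    gain = dog_response(w_opt, sc, ss, b) / dog_response(0.0, sc, ss, b)
    return "bandpass", w_opt, gain

for sc, ss, b in [(1.0, 3.0, 0.9), (1.0, 3.0, 0.05)]:
    kind, w_opt, gain = classify(sc, ss, b)
    print(f"sc={sc}, ss={ss}, b={b}: {kind}, optimal frequency {w_opt:.3f}, optimal gain {gain:.2f}")
```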
Bot, Maarten; Schuurman, P Richard; Odekerken, Vincent J J; Verhagen, Rens; Contarino, Fiorella Maria; De Bie, Rob M A; van den Munckhof, Pepijn
2018-05-01
Individual motor improvement after deep brain stimulation (DBS) of the subthalamic nucleus (STN) for Parkinson's disease (PD) varies considerably. Stereotactic targeting of the dorsolateral sensorimotor part of the STN is considered paramount for maximising effectiveness, but studies employing the midcommissural point (MCP) as anatomical reference failed to show correlation between DBS location and motor improvement. The medial border of the STN as reference may provide better insight in the relationship between DBS location and clinical outcome. Motor improvement after 12 months of 65 STN DBS electrodes was categorised into non-responding, responding and optimally responding body-sides. Stereotactic coordinates of optimal electrode contacts relative to both medial STN border and MCP served to define theoretic DBS 'hotspots'. Using the medial STN border as reference, significant negative correlation (Pearson's correlation -0.52, P<0.01) was found between the Euclidean distance from the centre of stimulation to this DBS hotspot and motor improvement. This hotspot was located at 2.8 mm lateral, 1.7 mm anterior and 2.5 mm superior relative to the medial STN border. Using MCP as reference, no correlation was found. The medial STN border proved superior compared with MCP as anatomical reference for correlation of DBS location and motor improvement, and enabled defining an optimal DBS location within the nucleus. We therefore propose the medial STN border as a better individual reference point than the currently used MCP on preoperative stereotactic imaging, in order to obtain optimal and thus less variable motor improvement for individual patients with PD following STN DBS. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Computer-aided osteotomy design for harvesting autologous bone grafts in reconstructive surgery
NASA Astrophysics Data System (ADS)
Krol, Zdzislaw; Zerfass, Peter; von Rymon-Lipinski, Bartosz; Jansen, Thomas; Hauck, Wolfgang; Zeilhofer, Hans-Florian U.; Sader, Robert; Keeve, Erwin
2001-05-01
Autologous grafts serve as the standard grafting material in the treatment of maxillofacial bone tumors, traumatic defects or congenital malformations. The pre-selection of a donor site depends primarily on the morphological fit of the available bone mass and the shape of the part that has to be transplanted. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention based on 3D CT studies is required. This paper presents a method to identify an optimal donor site by performing an optimization of appropriate similarity measures between donor region and a given transplant. At the initial stage the surgeon has to delineate the osteotomy border lines in the template CT data set and to define a set of constraints for the optimization task in the donor site CT data set. The following fully automatic optimization stage delivers a set of sub-optimal and optimal donor sites for a given template. All generated solutions can be explored interactively on the computer display using an efficient graphical interface. Reconstructive operations supported by our system were performed on 28 patients. We found that the operation time can be considerably shortened by this approach.
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir
2016-07-15
Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins and ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions of the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
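The response-surface workflow described above can be sketched as follows; the design points, the response values, and the "true" surface in this snippet are invented for illustration and are not the study's measurements.

```python
# Illustrative sketch only (invented data): fit the second-order polynomial used in
# response-surface methodology to a Box-Behnken-style design in coded variables and
# locate the optimum within the experimental bounds.
import numpy as np
from scipy.optimize import minimize

def quad_terms(x):
    """Second-order model terms: 1, x1, x2, x3, squares, and two-factor interactions."""
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

# coded levels (-1, 0, +1) for temperature, pressure, time: 12 edge midpoints + 3 center runs
rng = np.random.default_rng(0)
X = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
             [[a, 0, b] for a in (-1, 1) for b in (-1, 1)] +
             [[0, a, b] for a in (-1, 1) for b in (-1, 1)] + [[0, 0, 0]] * 3, float)
true = lambda x: 50 - 4*x[0]**2 - 3*x[1]**2 - 2*x[2]**2 + 2*x[0] - 3*x[2]   # made-up response
y = np.array([true(x) for x in X]) + rng.normal(0, 0.5, len(X))

A = np.vstack([quad_terms(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # regression coefficients

predict = lambda x: float(quad_terms(x) @ beta)
res = minimize(lambda x: -predict(x), x0=[0, 0, 0], bounds=[(-1, 1)] * 3)

# decode coded factors back to natural units: T 48-78 degC, p 30-330 mbar, t 8-16 h
decode = lambda c, lo, hi: lo + (c + 1) * (hi - lo) / 2
T, p, t = (decode(c, lo, hi) for c, (lo, hi) in zip(res.x, [(48, 78), (30, 330), (8, 16)]))
print(f"predicted optimum: T={T:.1f} degC, p={p:.0f} mbar, t={t:.1f} h, response={predict(res.x):.1f}")
```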
On algorithmic optimization of histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Poźniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article concerns optimization methods for data analysis in the X-ray GEM detector system. The offline analysis of collected samples was optimized for MATLAB computations. Compiled C-language functions were used via the MEX library. A significant speedup was achieved both for the ordering/preprocessing and for the histogramming of samples. The techniques used and the results obtained are presented.
Optimal timber harvest scheduling with spatially defined sediment objectives
Jon Hof; Michael Bevers
2000-01-01
This note presents a simple model formulation that focuses on the spatial relationships over time between timber harvesting and sediment levels in water runoff courses throughout the watershed being managed. A hypothetical example is developed to demonstrate the formulation and show how sediment objectives can be spatially defined anywhere in the watershed. Spatial...
Goudeau, V; Daniel, B; Dubot, D
2017-04-21
During the operation and the decommissioning of a nuclear site, the operator must assure the protection of the workers and the environment. It must furthermore identify and classify the various wastes, while optimizing the associated costs. At all stages of the decommissioning, radiological measurements are performed to determine the initial situation, to monitor the demolition and clean-up, and to verify the final situation. Radiochemical analysis is crucial for the radiological evaluation process to optimize the clean-up operations and to respect the limits defined with the authorities. Even though these types of analysis are omnipresent in activities such as the exploitation, the monitoring, and the cleaning up of nuclear plants, some nuclear sites do not have their own radiochemical analysis laboratory. Mobile facilities can overcome this lack when nuclear facilities are dismantled, when contaminated sites are cleaned up, or in a post-accident situation. The current operations for the radiological characterization of soils at CEA nuclear facilities lead to a large increase in radiochemical analyses. To manage this high throughput of samples in a timely manner, the CEA has developed a new mobile laboratory for the clean-up of its soils, called SMaRT (Shelter for Monitoring and nucleAR chemisTry). This laboratory is dedicated to the preparation and the radiochemical analysis (alpha, beta, and gamma) of potentially contaminated samples. In this framework, CEA and Eichrom laboratories have signed a partnership agreement to extend the analytical capacities and bring optimized and validated methods on site for different problematics. Gamma-emitting radionuclides can usually be measured in situ as little or no sample preparation is required. Alpha- and beta-emitting radionuclides are a different matter: analytical chemistry laboratory facilities are required. Mobile and transportable laboratories equipped with the necessary tools can provide all that is needed. The main advantage of a mobile laboratory is its portability; the shelter can be placed in the vicinity of nuclear facilities under decommissioning, or of contaminated sites with infrastructures unsuitable for the reception and treatment of radioactive samples. Radiological analysis can then be performed without the disadvantages of radioactive material transport. This paper describes how this solution allows a fast response and control of costs, with a high analytical capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reproducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop a chemically defined medium supporting high-cell-density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining experimental techniques with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single-omission technique and single-addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by a statistical method. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed that supported considerable biomass production for at least five B. coagulans strains, including the two model strains B. coagulans 36D1 and ATCC 7050.
Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.
Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø
2003-10-07
The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases: first, optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second, human red blood cell metabolism under various physiological, non-optimal conditions.
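The alpha-spectrum computation described here reduces to a pair of linear programs per extreme pathway. The sketch below uses a toy two-reaction, three-pathway network (not a genome-scale model) to illustrate the idea of bounding each weighting subject to v = P alpha with alpha >= 0.

```python
# Minimal sketch of an alpha-spectrum via linear programming (toy network).
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 0.0, 1.0],      # extreme pathway matrix: rows = fluxes,
              [0.0, 1.0, 1.0]])     # columns = extreme pathways (toy example)
v = np.array([2.0, 1.0])            # observed steady-state flux distribution

n = P.shape[1]
alpha_spectrum = []
for i in range(n):
    c = np.zeros(n); c[i] = 1.0
    lo = linprog(c, A_eq=P, b_eq=v, bounds=[(0, None)] * n, method="highs").fun
    hi = -linprog(-c, A_eq=P, b_eq=v, bounds=[(0, None)] * n, method="highs").fun
    alpha_spectrum.append((lo, hi))

for i, (lo, hi) in enumerate(alpha_spectrum):
    print(f"pathway {i + 1}: alpha in [{lo:.2f}, {hi:.2f}]")
# pathway 1: [1.00, 2.00], pathway 2: [0.00, 1.00], pathway 3: [0.00, 1.00]
```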
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS) that refers to the use of genome-wide information for predicting breeding values of selection candidates need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing this MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim, and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. MET defined with OptiMET was on average more efficient than random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit MET and the phenotyping tools that are currently developed.
Xiang, Wei; Li, Chong
2015-01-01
The Operating Room (OR) is the core sector of hospital expenditure; its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve such a combinatorial optimization problem. The 10-day manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. Comparison results show the advantage of using the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, considering real-life operation constraints such as the difference between the first and following cases, surgery priority, and fixed nurses in the pre/post-operative stage, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive overall operation cost.
A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments
Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.
2009-01-01
Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132
Thriveni, T; Rajesh Kumar, J; Sujatha, D; Sreedhar, N Y
2007-05-01
The cyclic voltammograms of terbacil and lenacil at the hanging mercury drop electrode showed a single, well-defined, four-electron irreversible peak in universal buffer of pH 4.0 for both compounds. The peak potentials were shifted to more negative values with increasing pH of the medium, implying the involvement of protons in the electrode reaction and that the proton transfer reaction precedes the proper electrode process. The four-electron single peak may be attributed to the simultaneous reduction of the carbonyl groups at positions 2 and 4 of the pyrimidine ring of terbacil and lenacil to the corresponding hydroxy derivatives. Based on the interfacial adsorptive character of terbacil and lenacil onto the mercury electrode surface, a simple, sensitive, and low-cost differential pulse adsorptive stripping voltammetric procedure was optimized for the analysis of terbacil and lenacil. The optimal operational conditions of the proposed procedure were accumulation potential E(acc) = -0.4 V, accumulation time t(acc) = 80 s, scan rate = 40 mV s^-1, and pulse amplitude = 25 mV, using a universal buffer of pH 4.0 as the supporting electrolyte. The linear concentration ranges were found to be 1.5 x 10^-5 to 1.2 x 10^-9 mol/l and 1.5 x 10^-5 to 2.5 x 10^-8 mol/l, with lower detection limits of 1.22 x 10^-9 and 2.0 x 10^-8 mol/l. The correlation coefficient and relative standard deviation values were found to be 0.942, 0.996, 1.64% and 1.23%, respectively, for 10 replicates. The procedure was successfully applied for the determination of terbacil and lenacil in formulations, mixed formulations, and environmental samples such as fruit samples and spiked water samples.
Carotid-femoral pulse wave velocity in a healthy adult sample: The ELSA-Brasil study.
Baldo, Marcelo Perim; Cunha, Roberto S; Molina, Maria Del Carmen B; Chór, Dora; Griep, Rosane H; Duncan, Bruce B; Schmidt, Maria Inês; Ribeiro, Antonio L P; Barreto, Sandhi M; Lotufo, Paulo A; Bensenor, Isabela M; Pereira, Alexandre C; Mill, José Geraldo
2018-01-15
Aging causes a decline in essential physiological functions, and the vascular system is strongly affected by artery stiffening. We intended to define age- and sex-specific reference values for carotid-to-femoral pulse wave velocity (cf-PWV) in a sample free of major risk factors. The ELSA-Brasil study enrolled 15,105 participants aged 35-74 years. The healthy sample was obtained by excluding diabetics, those above the optimal and normal blood pressure levels, those with a body mass index ≤18.5 or ≥25 kg/m², current and former smokers, and those with self-reported previous cardiovascular disease. After exclusions, the sample consisted of 2158 healthy adults (1412 women). Although cf-PWV predictors were similar between sexes (age, mean arterial pressure (MAP) and heart rate), cf-PWV was higher in men (8.74±1.15 vs. 8.31±1.13 m/s; adjusted for age and MAP, P<0.001) for all age intervals. When divided by MAP categories, cf-PWV was significantly higher in those with MAP ≥85 mmHg, regardless of sex, and for all age intervals. In the entire ELSA-Brasil population (n=15,105), the presence of risk factors for arterial stiffening roughly doubled the age-related slope of cf-PWV growth, regardless of sex (0.0919±0.182 vs. 0.0504±0.153 m/s per year for men, 0.0960±0.173 vs. 0.0606±0.139 m/s per year for women). cf-PWV differs between men and women, and even within the optimal and normal range of MAP and in the absence of other classical risk factors for arterial stiffness, reference values for cf-PWV should take MAP levels into account. Also, the presence of major risk factors in the general population doubles the age-related rise in cf-PWV. Copyright © 2017 Elsevier B.V. All rights reserved.
Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2017-06-23
Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
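A loose illustration of the multi-scale structure follows; linear interpolation between roughly 2^κ node values stands in for the B-spline/subdivision machinery, a random perturbation search stands in for pattern search, and the chromatographic objective is replaced by a placeholder, so the snippet only shows the coarse-to-fine refinement, not the paper's actual method.

```python
# Illustrative sketch of a multi-scale gradient search (not the authors' code).
import numpy as np

T_MAX = 30.0  # maximal analysis time in minutes (hypothetical)
M = 41        # number of polygonal points describing the gradient

def build_gradient(nodes):
    """Non-decreasing % organic-solvent profile from free node parameters."""
    levels = np.cumsum(np.abs(nodes))               # enforce a flat-or-increasing gradient
    levels = 5.0 + 90.0 * levels / levels[-1]       # map into 5-95 % solvent
    t_nodes = np.linspace(0.0, T_MAX, len(levels))
    t = np.linspace(0.0, T_MAX, M)
    return t, np.interp(t, t_nodes, levels)

def objective(nodes):
    """Placeholder for the chromatographic criterion (e.g. worst-pair resolution)."""
    _, phi = build_gradient(nodes)
    return -np.var(np.diff(phi))                    # dummy score: prefer smooth profiles

rng = np.random.default_rng(0)
best = None
for kappa in range(2, 5):                           # scales with 4, 8, 16 node parameters
    n = 2 ** kappa
    if best is None:
        best = np.ones(n)
    else:                                           # refine: interpolate the coarse optimum
        best = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(best)), best)
    best_val = objective(best)
    for _ in range(300):                            # crude global search at this scale
        trial = np.abs(best + 0.2 * rng.standard_normal(n))
        val = objective(trial)
        if val > best_val:
            best, best_val = trial, val
    print(f"scale {kappa}: {n} parameters, score {best_val:.5f}")
```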
Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra
2013-09-01
This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, elution solvent, and sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good precision (intra- and inter-day) with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
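For readers unfamiliar with the validation figures quoted above, the following sketch shows the generic arithmetic behind them (calibration slope, correlation, an LOD from the common 3.3·s/slope convention, and recovery). The calibration data are hypothetical, not the paper's measurements.

```python
# Generic method-validation arithmetic with invented calibration data.
import numpy as np

conc = np.array([2.4, 10, 25, 50, 100, 200])          # spiked concentrations, ug/L
area = np.array([1.1, 4.8, 12.3, 24.6, 49.9, 99.2])   # detector response (made up)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r = np.corrcoef(conc, area)[0, 1]                     # correlation coefficient
s_res = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * s_res / slope                             # one common LOD convention

measured_spike = np.array([49.1, 50.8, 48.7])         # back-calculated spiked samples, ug/L
recovery = measured_spike.mean() / 50.0 * 100

print(f"slope={slope:.3f}, r={r:.4f}, LOD={lod:.2f} ug/L, recovery={recovery:.1f}%")
```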
Apparatus and methods for manipulation and optimization of biological systems
NASA Technical Reports Server (NTRS)
Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)
2012-01-01
The invention provides systems and methods for manipulating, e.g., optimizing and controlling, biological systems, e.g., for eliciting a more desired biological response of a biological sample, such as a tissue, organ, and/or a cell. In one aspect, systems and methods of the invention operate by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and optimize the response of biological samples sustained in the system, e.g., a bioreactor. In alternative aspects, systems include a device for sustaining cells or tissue samples, one or more actuators for stimulating the samples via biochemical, electromagnetic, thermal, mechanical, and/or optical stimulation, and one or more sensors for measuring a biological response signal of the samples resulting from the stimulation of the sample. In one aspect, the systems and methods of the invention use at least one optimization algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The compositions and methods of the invention can be used, e.g., for systems optimization of any biological manufacturing or experimental system, e.g., bioreactors for proteins, e.g., therapeutic proteins, polypeptides or peptides for vaccines, and the like, small molecules (e.g., antibiotics), polysaccharides, lipids, and the like. Another use of the apparatus and methods includes combination drug therapy (e.g., optimal drug cocktails), directed cell proliferation and differentiation (e.g., in tissue engineering, such as neural progenitor cell differentiation), and the discovery of key parameters in complex biological systems.
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task of constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
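A simplified sketch of such an adaptive design loop is given below; a distance-plus-local-variation score stands in for the Taylor-expansion-based metric, the test function is hypothetical, and the stopping rule checks the surrogate error at each newly added point, so this shows the general idea rather than the authors' exact method.

```python
# Simplified adaptive-sampling loop for a surrogate model (hypothetical test function).
import numpy as np
from scipy.interpolate import RBFInterpolator

def model(x):                                   # the "expensive" original model (toy)
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(8, 2))              # small initial design
y = model(X)
tol = 0.02

for it in range(60):
    surrogate = RBFInterpolator(X, y)
    cand = rng.uniform(0, 1, size=(200, 2))     # candidate pool
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1)
    step = 1e-2 * rng.standard_normal(cand.shape)
    variation = np.abs(surrogate(cand + step) - surrogate(cand))   # local-change proxy
    x_new = cand[np.argmax(dist * (1.0 + variation))][None, :]     # exploration + refinement
    err = abs(surrogate(x_new)[0] - model(x_new)[0])               # accuracy at the new point
    X, y = np.vstack([X, x_new]), np.append(y, model(x_new))
    if err < tol:                                                  # stopping criterion
        print(f"stopped after {len(y)} model evaluations (error {err:.4f})")
        break
else:
    print(f"budget exhausted after {len(y)} model evaluations")
```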
Distributed Energy Resources Customer Adoption Model - Graphical User Interface, Version 2.1.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewald, Friedrich; Stadler, Michael; Cardoso, Goncalo F
The DER-CAM Graphical User Interface has been redesigned to consist of a dynamic tree structure on the left side of the application window to allow users to quickly navigate between different data categories and views. Views can either be tables with model parameters and input data, the optimization results, or a graphical interface to draw circuit topology and visualize investment results. The model parameters and input data consist of tables where values are assigned to specific keys. The aggregation of all model parameters and input data amounts to the data required to build a DER-CAM model, and is passed to the GAMS solver when users initiate the DER-CAM optimization process. Passing data to the GAMS solver relies on the use of a Java server that handles DER-CAM requests, queuing, and results delivery. This component of the DER-CAM GUI can be deployed either locally or remotely, and constitutes an intermediate step between the user data input and manipulation, and the execution of a DER-CAM optimization in the GAMS engine. The results view shows the results of the DER-CAM optimization and distinguishes between a single and a multi-objective process. The single optimization runs the DER-CAM optimization once and presents the results as a combination of summary charts and hourly dispatch profiles. The multi-objective optimization process consists of a sequence of runs initiated by the GUI, including: 1) CO2 minimization, 2) cost minimization, 3) a user defined number of points in-between objectives 1) and 2). The multi-objective results view includes both access to the detailed results of each point generated by the process as well as the generation of a Pareto Frontier graph to illustrate the trade-off between objectives. DER-CAM GUI 2.1.8 also introduces the ability to graphically generate circuit topologies, enabling support to DER-CAM 5.0.0. This feature consists of: 1) The drawing area, where users can manually create nodes and define their properties (e.g. point of common coupling, slack bus, load) and connect them through edges representing either power lines, transformers, or heat pipes, all with user defined characteristics (e.g., length, ampacity, inductance, or heat loss); 2) The tables, which display the user-defined topology in the final numerical form that will be passed to the DER-CAM optimization. Finally, the DER-CAM GUI is also deployed with a database schema that allows users to provide different energy load profiles, solar irradiance profiles, and tariff data, that can be stored locally and later used in any DER-CAM model. However, no real data will be delivered with this version.
Verhagen, Simone J. W.; Simons, Claudia J. P.; van Zelst, Catherine; Delespaul, Philippe A. E. G.
2017-01-01
Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a 'behavior setting') with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low-, average- and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant in clinical implementation, where often only small datasets are available. PMID:29163294
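One plausible operationalization of such a statistic, shown for illustration only (it is not necessarily the authors' exact estimator): frequency-weighted mean positive affect per behavior setting, with a bootstrap over beeps as a simple Monte Carlo reliability check on simulated ESM data.

```python
# Illustrative rQoL-style statistic on simulated ESM beeps (hypothetical data and weights).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
settings = ["alone-rest", "family-leisure", "colleagues-work", "friends-social"]
n_beeps = 120                                   # simulated ESM observations for one subject
df = pd.DataFrame({
    "setting": rng.choice(settings, size=n_beeps, p=[0.4, 0.3, 0.2, 0.1]),
    "positive_affect": rng.normal(4.5, 1.0, size=n_beeps).clip(1, 7),   # 1-7 scale
})

def rqol(sample: pd.DataFrame) -> float:
    grouped = sample.groupby("setting")["positive_affect"]
    freq = sample["setting"].value_counts(normalize=True)
    return float((grouped.mean() * freq).sum())    # frequency-weighted positive affect

boot = [rqol(df.sample(frac=1.0, replace=True, random_state=i)) for i in range(500)]
print(f"rQoL = {rqol(df):.2f}, bootstrap 95% CI = "
      f"[{np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f}]")
```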
Analysis of Ballast Water Sampling Port Designs Using Computational Fluid Dynamics
2008-02-01
... straight, vertical, upward-flowing pipe having a sample port diameter between 1.5 and 2.0 times the basic isokinetic diameter as defined in this report. Sample ports should use ball valves for isolation purposes and diaphragm or ... Keywords: ballast water, flow modeling, sample port, sample pipe, particle trajectory, isokinetic sampling.
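The report's own definition of the basic isokinetic diameter is not reproduced in the excerpt above, so the sketch below assumes the conventional isokinetic matching condition (sample velocity equal to main-pipe velocity), d_iso = D·sqrt(q/Q), and then applies the quoted 1.5-2.0 multiplier; all numbers are hypothetical.

```python
# Hedged illustration of isokinetic port sizing (assumed condition, invented numbers).
import math

D = 0.20      # main ballast pipe inner diameter, m
Q = 0.10      # main line flow rate, m^3/s
q = 0.0005    # desired sample flow rate, m^3/s

d_iso = D * math.sqrt(q / Q)                   # diameter matching sample and pipe velocity
d_port_range = (1.5 * d_iso, 2.0 * d_iso)      # recommended sizing band from the report
print(f"isokinetic diameter = {d_iso * 1000:.1f} mm, "
      f"recommended port diameter = {d_port_range[0] * 1000:.1f}-{d_port_range[1] * 1000:.1f} mm")
```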
2008-03-01
... multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the ... correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space ...
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
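The two-point design rule can be made concrete with a small sketch; it assumes a hypothetical one-compartment oral-absorption plasma model and a hypothetical proportionality between early BAL-fluid and plasma concentrations, so the values are illustrative only and are not taken from the study.

```python
# Illustrative two-point BAL sampling times under assumed (hypothetical) PK parameters.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

ka, ke, A = 1.5, 0.2, 10.0           # absorption and elimination rates (1/h), scale (mg/L)
LOQ_BAL = 0.5                        # mg/L, limit of quantification of the BAL assay
bal_over_plasma = 0.3                # assumed early BAL/plasma concentration ratio

def plasma(t):
    return A * (np.exp(-ke * t) - np.exp(-ka * t))

t_max = minimize_scalar(lambda t: -plasma(t), bounds=(1e-3, 24), method="bounded").x

# earliest time at which the predicted BAL concentration reaches the LOQ
t_early = brentq(lambda t: bal_over_plasma * plasma(t) - LOQ_BAL, 1e-6, t_max)

# late sample: same plasma concentration as at t_early, but on the declining limb
t_late = brentq(lambda t: plasma(t) - plasma(t_early), t_max, 72.0)

print(f"early BAL sample at {t_early:.2f} h, late BAL sample at {t_late:.2f} h")
```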
NASA Astrophysics Data System (ADS)
Bieler, Noah S.; Hünenberger, Philippe H.
2015-04-01
Estimating the relative stabilities of different conformational states of a (bio-)molecule using molecular dynamics simulations involves two challenging problems: the conceptual problem of how to define the states of interest and the technical problem of how to properly sample these states, along with achieving a sufficient number of interconversion transitions. In this study, the two issues are addressed in the context of a decaalanine peptide in water, by considering the 3₁₀-, α-, and π-helical states. The simulations rely on the ball-and-stick local-elevation umbrella-sampling (B&S-LEUS) method. In this scheme, the states are defined as hyperspheres (balls) in a (possibly high dimensional) collective-coordinate space and connected by hypercylinders (sticks) to ensure transitions. A new object, the pipe, is also introduced here to handle curvilinear pathways. Optimal sampling within the so-defined space is ensured by confinement and (one-dimensional) memory-based biasing potentials associated with the three different kinds of objects. The simulation results are then analysed in terms of free energies using reweighting, possibly relying on two distinct sets of collective coordinates for the state definition and analysis. The four possible choices considered for these sets are Cartesian coordinates, hydrogen-bond distances, backbone dihedral angles, or pairwise sums of successive backbone dihedral angles. The results concerning decaalanine underline that the concept of conformational state may be extremely ambiguous, and that its tentative absolute definition as a free-energy basin remains subordinated to the choice of a specific analysis space. For example, within the force-field employed and depending on the analysis coordinates selected, the 3₁₀-helical state may refer to weakly overlapping collections of conformations, differing by as much as 25 kJ mol⁻¹ in terms of free energy. As another example, the π-helical state appears to correspond to a free-energy basin for three choices of analysis coordinates, but to be unstable with the fourth one. The problem of conformational-state definition may become even more intricate when comparison with experiment is involved, where the state definition relies on spectroscopic or functional observables.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search, which combines the principles of simulated annealing and the genetic algorithm. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the generic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
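As a rough illustration of this kind of stochastic route search (not the paper's algorithm), the sketch below anneals routes on a tiny hypothetical connection network, using a genetic-style mutation that regrows the tail of the route from a random cut point.

```python
# Minimal simulated-annealing route search on an invented connection graph.
import math, random

COST = {("SYD", "MEL"): 3, ("SYD", "BNE"): 4, ("MEL", "ADL"): 2, ("BNE", "ADL"): 5,
        ("ADL", "PER"): 6, ("MEL", "PER"): 9, ("BNE", "PER"): 11}   # leg costs (hypothetical)
NEXT = {}
for (a, b) in COST:
    NEXT.setdefault(a, []).append(b)

def random_route(src, dst, rng, max_len=6):
    """Randomized walk until dst is reached (or give up and retry)."""
    for _ in range(200):
        route, node = [src], src
        while node != dst and len(route) < max_len and node in NEXT:
            node = rng.choice(NEXT[node])
            route.append(node)
        if node == dst:
            return route
    raise RuntimeError("no route found")

def cost(route):
    return sum(COST[(a, b)] for a, b in zip(route, route[1:]))

def anneal(src, dst, seed=0, t0=5.0, cooling=0.95, iters=400):
    rng = random.Random(seed)
    best = cur = random_route(src, dst, rng)
    t = t0
    for _ in range(iters):
        cut = rng.randrange(len(cur) - 1)                 # mutate: regrow the tail
        cand = cur[:cut + 1] + random_route(cur[cut], dst, rng)[1:]
        dc = cost(cand) - cost(cur)
        if dc <= 0 or math.exp(-dc / t) > rng.random():   # Metropolis acceptance
            cur = cand
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best, cost(best)

print(anneal("SYD", "PER"))   # e.g. (['SYD', 'MEL', 'ADL', 'PER'], 11)
```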
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
Method of optimization onboard communication network
NASA Astrophysics Data System (ADS)
Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.
2018-02-01
In this article, optimization levels of an onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for possible modeling of the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure. This technique is based on the principles and ideas of binary programming. It is shown that the binary programming technique allows an inherently optimal solution to be obtained for avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
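A toy illustration of casting device assignment as binary programming follows; the devices, buses, bandwidths, and costs are invented, and exhaustive enumeration of the binary variables replaces a proper solver because the instance is tiny.

```python
# Toy device-to-bus assignment as binary programming: x[d][b] = 1 if device d is on bus b,
# each device gets exactly one bus, bus bandwidth is limited, total cabling cost is minimized.
from itertools import product

devices = ["IMU", "radar", "display"]
buses = ["A", "B"]
bandwidth = {"IMU": 2, "radar": 5, "display": 3}       # Mbit/s needed (hypothetical)
capacity = {"A": 6, "B": 6}                            # Mbit/s per bus (hypothetical)
cost = {("IMU", "A"): 1, ("IMU", "B"): 3, ("radar", "A"): 4,
        ("radar", "B"): 2, ("display", "A"): 2, ("display", "B"): 2}

best, best_cost = None, float("inf")
for assign in product(buses, repeat=len(devices)):      # one bus per device
    load = {b: 0 for b in buses}
    for d, b in zip(devices, assign):
        load[b] += bandwidth[d]
    if any(load[b] > capacity[b] for b in buses):       # capacity constraint
        continue
    c = sum(cost[(d, b)] for d, b in zip(devices, assign))
    if c < best_cost:
        best, best_cost = dict(zip(devices, assign)), c
print(best, best_cost)   # {'IMU': 'A', 'radar': 'B', 'display': 'A'} with cost 5
```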
Optimal parameters uncoupling vibration modes of oscillators
NASA Astrophysics Data System (ADS)
Le, K. C.; Pieper, A.
2017-07-01
This paper proposes a novel optimization concept for an oscillator with two degrees of freedom. By using specially defined motion ratios, we control the action of springs to each degree of freedom of the oscillator. We aim at showing that, if the potential action of the springs in one period of vibration, used as the payoff function for the conservative oscillator, is maximized among all admissible parameters and motions satisfying Lagrange's equations, then the optimal motion ratios uncouple vibration modes. A similar result holds true for the dissipative oscillator having dampers. The application to optimal design of vehicle suspension is discussed.
Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.
Adesso, Gerardo; Illuminati, Fabrizio
2005-10-07
We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions; therefore, it cannot be applied to the DDP, which has many constraint conditions. To address these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and it also defines a new reference solution named the "provisional-ideal point" to search for the solution preferred by a decision maker. In this way, the preliminary calculations are eliminated and the limited application scope is overcome. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP: the delivery path obtained by combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
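As a rough illustration of the penalty idea described above (not the paper's exact definition), the sketch below aggregates the constraint violations of a candidate delivery plan into a single penalty value and adds it to each objective, so infeasible candidates are pushed away from the front during selection; the objective values and violation vector are made up.

```python
import numpy as np

def penalty(violations):
    """Aggregate constraint violations (values <= 0 mean satisfied) into one penalty value."""
    return float(np.sum(np.maximum(violations, 0.0)))

def penalized_objectives(objectives, violations, weight=1e3):
    """Inflate every objective of an infeasible candidate so feasible ones dominate it."""
    return np.asarray(objectives, dtype=float) + weight * penalty(violations)

# Hypothetical candidate: objectives = (distance in km, time in h), one violated constraint.
print(penalized_objectives([120.0, 3.5], [0.0, 0.2, -1.0]))  # -> [320. , 203.5]
```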
Paoloni, Melissa C.; Mazcko, Christina; Fox, Elizabeth; Fan, Timothy; Lana, Susan; Kisseberth, William; Vail, David M.; Nuckolls, Kaylee; Osborne, Tanasa; Yalkowsy, Samuel; Gustafson, Daniel; Yu, Yunkai; Cao, Liang; Khanna, Chand
2010-01-01
Background: Signaling through the mTOR pathway contributes to growth, progression and chemoresistance of several cancers. Accordingly, inhibitors have been developed as potentially valuable therapeutics. Their optimal development requires consideration of dose, regimen, biomarkers and a rationale for their use in combination with other agents. Using the infrastructure of the Comparative Oncology Trials Consortium, many of these complex questions were asked within a relevant population of dogs with osteosarcoma to inform the development of mTOR inhibitors for future use in pediatric osteosarcoma patients. Methodology/Principal Findings: This prospective dose escalation study of a parenteral formulation of rapamycin sought to define a safe, pharmacokinetically relevant, and pharmacodynamically active dose of rapamycin in dogs with appendicular osteosarcoma. Dogs were entered into dose cohorts consisting of 3 dogs/cohort. Dogs underwent a pre-treatment tumor biopsy and collection of baseline PBMC. Dogs received a single intramuscular dose of rapamycin and underwent 48-hour whole blood pharmacokinetic sampling. Additionally, daily intramuscular doses of rapamycin were administered for 7 days, with blood rapamycin trough levels collected on Days 8, 9 and 15. On Day 8, post-treatment collections of tumor and PBMC were obtained. No maximally tolerated dose of rapamycin was attained through escalation to the maximal planned dose of 0.08 mg/kg (2.5 mg/30 kg dog). Pharmacokinetic analysis revealed a dose-dependent exposure. In all cohorts, modulation of the mTOR pathway in tumor and PBMC (pS6RP/S6RP) was demonstrated. No change in pAKT/AKT was seen in tumor samples following rapamycin therapy. Conclusions/Significance: Rapamycin may be safely administered to dogs and can yield therapeutic exposures. Modulation of pS6RP/S6RP in tumor tissue and PBMCs was not dependent on dose. Results from this study confirm that the dog may be included in the translational development of rapamycin and potentially other mTOR inhibitors. Ongoing studies of rapamycin in dogs will define optimal schedules for their use in cancer and evaluate the role of rapamycin use in the setting of minimal residual disease. PMID:20543980
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II
2016-09-01
of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined
The Basic Organizing/Optimizing Training Scheduler (BOOTS): User's Guide. Technical Report 151.
ERIC Educational Resources Information Center
Church, Richard L.; Keeler, F. Laurence
This report provides the step-by-step instructions required for using the Navy's Basic Organizing/Optimizing Training Scheduler (BOOTS) system. BOOTS is a computerized tool designed to aid in the creation of master training schedules for each Navy recruit training command. The system is defined in terms of three major functions: (1) data file…
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
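The sketch below shows the kind of particle swarm optimization loop that could be used for such parameter fitting; it is a generic PSO minimizing a stand-in mean-squared forecasting error for a placeholder two-parameter model, not the NGBM(1,1) model or the settings used in the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(x) over box bounds [(lo, hi), ...] with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in objective: mean squared forecasting error of a placeholder two-parameter model.
obs = np.array([7.8, 8.1, 8.4, 8.2, 8.6])               # hypothetical DO observations
model = lambda p: p[0] + p[1] * np.arange(len(obs))     # placeholder, not NGBM(1,1)
best, err = pso(lambda p: np.mean((model(p) - obs) ** 2), bounds=[(0, 10), (-1, 1)])
print(best, err)
```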
Optimism, Social Support, and Adjustment in African American Women with Breast Cancer
Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.
2013-01-01
Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10 months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
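For illustration only, the following sketch implements the simplest of the compared ideas, stratified sampling of a training set across population-structure clusters; the genotype principal components are synthetic, and the cluster count and proportional allocation rule are assumptions made here, not the CDmean or PEVmean criteria evaluated in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_trs(genotype_pcs, n_train, n_strata=3, seed=0):
    """Pick a training set by sampling proportionally from population-structure clusters."""
    rng = np.random.default_rng(seed)
    strata = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(genotype_pcs)
    chosen = []
    for s in range(n_strata):
        members = np.flatnonzero(strata == s)
        # Proportional allocation; rounding means the total may differ slightly from n_train.
        k = max(1, round(n_train * len(members) / len(genotype_pcs)))
        chosen.extend(rng.choice(members, size=min(k, len(members)), replace=False))
    return np.array(chosen)

# Hypothetical genotype data summarized by two principal components.
pcs = np.random.default_rng(1).normal(size=(200, 2))
print(stratified_trs(pcs, n_train=50))
```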
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
YAHA: fast and flexible long-read alignment with optimal breakpoint detection.
Faust, Gregory G; Hall, Ira M
2012-10-01
With improved short-read assembly algorithms and the recent development of long-read sequencers, split mapping will soon be the preferred method for structural variant (SV) detection. Yet, current alignment tools are not well suited for this. We present YAHA, a fast and flexible hash-based aligner. YAHA is as fast and accurate as BWA-SW at finding the single best alignment per query and is dramatically faster and more sensitive than both SSAHA2 and MegaBLAST at finding all possible alignments. Unlike other aligners that report all, or one, alignment per query, or that use simple heuristics to select alignments, YAHA uses a directed acyclic graph to find the optimal set of alignments that cover a query using a biologically relevant breakpoint penalty. YAHA can also report multiple mappings per defined segment of the query. We show that YAHA detects more breakpoints in less time than BWA-SW across all SV classes, and especially excels at complex SVs comprising multiple breakpoints. YAHA is currently supported on 64-bit Linux systems. Binaries and sample data are freely available for download from http://faculty.virginia.edu/irahall/YAHA. imh4y@virginia.edu.
Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J
2010-02-01
Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.
Zhu, Feifei; Zhang, Qinglin; Qiu, Jiang
2013-01-01
Creativity can be defined as the capacity of an individual to produce something original and useful. An important measurable component of creativity is divergent thinking. Despite existing studies on the cerebral structural basis of creativity, no study has used a large sample to investigate the relationship between individual verbal creativity and regional gray matter volumes (GMVs) and white matter volumes (WMVs). In the present work, optimal voxel-based morphometry (VBM) was employed to identify the structures that correlate with verbal creativity (measured by the verbal form of the Torrance Tests of Creative Thinking) across the brain in young healthy subjects. Verbal creativity was found to be significantly positively correlated with regional GMV in the left inferior frontal gyrus (IFG), which is believed to be responsible for language production and comprehension, new semantic representation, and memory retrieval, and in the right IFG, which may be involved in inhibitory control and attention switching. A relationship between verbal creativity and regional WMV in the left and right IFG was also observed. Overall, a highly verbally creative individual with superior verbal skills may demonstrate greater computational efficiency in the brain areas involved in high-level cognitive processes, including language production, semantic representation and cognitive control. PMID:24223921
[Effect of increased protein content on nutritional and sensory quality of cookies].
Pérez, Santiago Rafael; Osella, Carlos Alberto; Torre, Maria Adela de la; Sánchez, Hugo Diego
2008-12-01
The objective of this work was to study the effect of soy flour and whey protein concentrate (WPC) on cookie quality. Taking into account the results obtained, an optimal recipe showing improved protein quality and content as well as acceptable sensory quality was defined. A rotary-moulded cookie formulation adaptable to lamination and cutting in a pilot plant was used. Wheat flour in this formulation was partially replaced by whey protein concentrate and full-fat soy flour. Second-order models were employed to generate response surfaces for: total protein, lysine per 16 grams of total nitrogen, lysine per 100 grams of sample, loss of lysine during processing and sensory evaluation of cookies. An effect on the available lysine value was obtained when the water content of the formulation was increased, because of a delay in the Maillard reaction. The optimal formulation contains 13% full-fat soy flour, 3% whey protein concentrate and 23% water. The results demonstrated that the protein content and protein quality of the supplemented flours were increased when soy flour was added to the cookie formulation. On the other hand, protein content was increased but protein quality was decreased when WPC was used, because of available lysine loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee. The plan describes the technical approach that is implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 that provide the most useful hydrologic and groundwater quality monitoring data. The technical approach is based on the GWPP status designation for each well. Under this approach, wells granted “active” status are used by the GWPP for hydrologic monitoring and/or groundwater quality sampling, whereas wells granted “inactive” status are not used for either purpose. The status designation also defines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and the extent of any maintenance actions initiated by the GWPP. Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans.
Plate-impact loading of cellular structures formed by selective laser melting
NASA Astrophysics Data System (ADS)
Winter, R. E.; Cotton, M.; Harris, E. J.; Maw, J. R.; Chapman, D. J.; Eakins, D. E.; McShane, G.
2014-03-01
Porous materials are of great interest because of improved energy absorption over their solid counterparts. Their properties, however, have been difficult to optimize. Additive manufacturing has emerged as a potential technique to closely define the structure and properties of porous components, i.e. density, strut width and pore size; however, the behaviour of these materials at very high impact energies remains largely unexplored. We describe an initial study of the dynamic compression response of lattice materials fabricated through additive manufacturing. Lattices consisting of an array of intersecting stainless steel rods were fabricated into discs using selective laser melting. The resulting discs were impacted against solid stainless steel targets at velocities ranging from 300 to 700 m s⁻¹ using a gas gun. Continuum CTH simulations were performed to identify key features in the measured wave profiles, while 3D simulations, in which the individual cells were modelled, revealed details of microscale deformation during collapse of the lattice structure. The validated computer models have been used to provide an understanding of the deformation processes in the cellular samples. The study supports the optimization of cellular structures for application as energy absorbers.
Magyar, Zsuzsanna; Mester, Anita; Nadubinszky, Gabor; Varga, Gabor; Ghanem, Souleiman; Somogyi, Viktoria; Tanczos, Bence; Deak, Adam; Bidiga, Laszlo; Oltean, Mihai; Peto, Katalin; Nemeth, Norbert
2018-04-14
Remote ischemic preconditioning (RIPC) can be protective against ischemia-reperfusion damage. However, there is no consensus on the optimal amount of tissue, the number and duration of the ischemic cycles, and the timing of the preconditioning. The hemorheological background of the process is also unknown. Our aim was to investigate the effects of remote organ ischemic preconditioning on micro-rheological parameters during liver ischemia-reperfusion in rats. In anesthetized rats, 60-minute partial liver ischemia was induced with 120-minute reperfusion (Control, n = 7). In the preconditioned groups a tourniquet was applied on the left thigh for 3×10 minutes, 1 hour (RIPC-1, n = 7) or 24 hours (RIPC-24, n = 7) prior to the liver ischemia. Blood samples were taken before the operation and during the reperfusion. Acid-base and hematological parameters, erythrocyte aggregation and deformability were tested. Lactate concentration significantly increased by the end of the reperfusion. Erythrocyte deformability was improved in the RIPC-1 group, and erythrocyte aggregation increased during the reperfusion, particularly in the RIPC-24 group. RIPC alleviated several hemorheological changes caused by the liver I/R. However, the optimal timing of the RIPC cannot be defined based on these results.
Kandadai, Venk; Yang, Haodong; Jiang, Ling; Yang, Christopher C; Fleisher, Linda; Winston, Flaura Koplin
2016-05-05
Little is known about the ability of individual stakeholder groups to achieve health information dissemination goals through Twitter. This study aimed to develop and apply methods for the systematic evaluation and optimization of health information dissemination by stakeholders through Twitter. Tweet content from 1790 followers of @SafetyMD (July-November 2012) was examined. User emphasis, a new indicator of Twitter information dissemination, was defined and applied to retweets across two levels of retweeters originating from @SafetyMD. User interest clusters were identified based on principal component analysis (PCA) and hierarchical cluster analysis (HCA) of a random sample of 170 followers. User emphasis of keywords remained across levels but decreased by 9.5 percentage points. PCA and HCA identified 12 statistically unique clusters of followers within the @SafetyMD Twitter network. This study is one of the first to develop methods for use by stakeholders to evaluate and optimize their use of Twitter to disseminate health information. Our new methods provide preliminary evidence that individual stakeholders can evaluate the effectiveness of health information dissemination and create content-specific clusters for more specific targeted messaging.
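A minimal sketch of the PCA-plus-hierarchical-clustering step described above follows; the follower-by-keyword count matrix is synthetic, and the choice of five components, Ward linkage and a cap of 12 clusters are assumptions made for illustration rather than the study's actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical follower-by-keyword matrix (rows: followers, columns: keyword counts).
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(170, 40)).astype(float)

# Reduce to the leading principal components, then cluster hierarchically.
scores = PCA(n_components=5).fit_transform(counts)
tree = linkage(scores, method="ward")
clusters = fcluster(tree, t=12, criterion="maxclust")   # cap at 12 interest clusters
print(np.bincount(clusters)[1:])                        # followers per cluster
```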
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from the statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
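The "optimal allocation" referred to in conclusion (3) is classically computed with the Neyman rule, which allocates samples to strata in proportion to stratum size times stratum standard deviation; the sketch below shows that rule applied to hypothetical stratum areas and moisture standard deviations, not the paper's data.

```python
import numpy as np

def neyman_allocation(stratum_sizes, stratum_sds, n_total):
    """Split n_total samples across strata in proportion to N_h * S_h (Neyman allocation)."""
    sizes = np.asarray(stratum_sizes, dtype=float)
    sds = np.asarray(stratum_sds, dtype=float)
    weights = sizes * sds
    return np.round(n_total * weights / weights.sum()).astype(int)

# Hypothetical strata (e.g., three sub-areas of one cell): areas in acres and
# observed standard deviations of soil moisture content.
print(neyman_allocation(stratum_sizes=[10, 20, 10], stratum_sds=[4.0, 2.5, 1.0], n_total=30))
# -> [12 15  3]: more samples go to the larger and more variable strata
```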
Wendell R. Haag
2013-01-01
Selection is expected to optimize reproductive investment resulting in characteristic trade-offs among traits such as brood size, offspring size, somatic maintenance, and lifespan; relative patterns of energy allocation to these functions are important in defining life-history strategies. Freshwater mussels are a diverse and imperiled component of aquatic ecosystems,...
General shape optimization capability
NASA Technical Reports Server (NTRS)
Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson
1991-01-01
A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.
Contribution to the optimal shape design of two-dimensional internal flows with embedded shocks
NASA Technical Reports Server (NTRS)
Iollo, Angelo; Salas, Manuel D.
1995-01-01
We explore the practicability of optimal shape design for flows modeled by the Euler equations. We define a functional whose minimum represents the optimality condition. The gradient of the functional with respect to the geometry is calculated with the Lagrange multipliers, which are determined by solving a co-state equation. The optimization problem is then examined by comparing the performance of several gradient-based optimization algorithms. In this formulation, the flow field can be computed to an arbitrary order of accuracy. Finally, some results for internal flows with embedded shocks are presented, including a case for which the solution to the inverse problem does not belong to the design space.
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, it is necessary to apply multi-composition gradient elution for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of a multi-composition mobile phase with the shorter analysis time of gradient elution. In practical separations, the separation selectivity of samples can be effectively adjusted by using a ternary mobile phase. For the optimization of these parameters, the retention equation of the samples must first be obtained. Traditionally, several isocratic experiments are used to get the retention equation of each solute. However, this is time consuming, especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of the solute in the column. First, the coefficients of the retention equation of each solute are obtained by running several linear gradient experiments; then the optimal separation conditions are searched according to the hierarchical chromatography response function, which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of the retention equation of each solute. For a ternary mobile phase, only four linear gradient runs are needed to get the coefficients of the retention equation. The retention times of the solutes under an arbitrary mobile phase composition can then be predicted. The initial optimal mobile phase composition is obtained by resolution mapping for all of the solutes. A hierarchical chromatography response function is used to evaluate the separation efficiencies and search for the optimal elution conditions. In subsequent optimization, the migrating distance of the solute in the column is considered to decide the mobile phase composition and sustaining time of the later steps until all the solutes are eluted out. Thus the first set of stepwise gradient elution conditions is predicted. If the resolution of the samples under the predicted optimal separation conditions is satisfactory, the optimization procedure is stopped; otherwise, the coefficients of the retention equation are adjusted according to the experimental results under the previously predicted elution conditions, and new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, satisfactory separation conditions can be found after only six experiments using the proposed method. In comparison with the traditional optimization method, the time needed to finish the optimization procedure can be greatly reduced. The method has been validated by its application to the separation of several samples, such as amino acid derivatives and aromatic amines, for which satisfactory separations were obtained with the predicted resolution.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a new trial can become optimal even where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing a spatially inhomogeneous edge-preserving image smoothing, called fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory-and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a comparable runtime to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results as the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
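To illustrate the 1D building block described above, here is a sketch that solves a single three-point Laplacian subsystem (I + λL)u = f with a linear-time banded solver; it uses uniform weights and simple boundary rows, so it is the plain weighted-least-squares smoother rather than the authors' edge-aware, separable implementation.

```python
import numpy as np
from scipy.linalg import solve_banded

def smooth_1d(f, lam=5.0):
    """Solve (I + lam * L) u = f for a 1D signal, where L is the three-point Laplacian."""
    n = len(f)
    ab = np.zeros((3, n))                # banded storage: super-, main and sub-diagonal
    ab[0, 1:] = -lam                     # superdiagonal entries
    ab[2, :-1] = -lam                    # subdiagonal entries
    ab[1, :] = 1.0 + 2.0 * lam           # interior diagonal
    ab[1, 0] = ab[1, -1] = 1.0 + lam     # boundary rows (Neumann-style)
    return solve_banded((1, 1), ab, f)   # tridiagonal solve, O(n)

noisy = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.3 * np.random.default_rng(0).normal(size=200)
smoothed = smooth_1d(noisy, lam=10.0)
print(noisy.std(), smoothed.std())       # the smoothed signal has reduced variance
```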
Edwards, Meghan; Ley, Eric; Mirocha, James; Hadjibashi, Anoushiravan Amini; Margulies, Daniel R; Salim, Ali
2010-10-01
Hypotension, defined as systolic blood pressure less than 90 mm Hg, is recognized as a sign of hemorrhagic shock and is a validated prognostic indicator. The definition of hypotension, particularly in the elderly population, deserves attention. We hypothesized that the systolic blood pressure associated with increased mortality resulting from hemorrhagic shock increases with increasing age. The Los Angeles County Trauma Database was queried for all moderate to severely injured patients without major head injuries admitted between 1998 and 2005. Several fit statistic analyses were performed for each systolic blood pressure from 50 to 180 mm Hg to identify the model that most accurately defined hypotension for three age groups. The optimal definition of hypotension for each group was determined from the best fit model. A total of 24,438 patients were analyzed. The optimal definition of hypotension was systolic blood pressure of 100 mm Hg for patients 20 to 49 years, 120 mm Hg for patients 50 to 69 years, and 140 mm Hg for patients 70 years and older. The optimal systolic blood pressure for improved mortality in hemorrhagic shock increases significantly with increasing age. Elderly trauma patients without major head injuries should be considered hypotensive for systolic blood pressure less than 140 mm Hg.
Beiske, K; Burchill, S A; Cheung, I Y; Hiyama, E; Seeger, R C; Cohn, S L; Pearson, A D J; Matthay, K K
2009-01-01
Disseminating disease is a predictive and prognostic indicator of poor outcome in children with neuroblastoma. Its accurate and sensitive assessment can facilitate optimal treatment decisions. The International Neuroblastoma Risk Group (INRG) Task Force has defined standardised methods for the determination of minimal disease (MD) by immunocytology (IC) and quantitative reverse transcriptase-polymerase chain reaction (QRT-PCR) using disialoganglioside GD2 and tyrosine hydroxylase mRNA respectively. The INRG standard operating procedures (SOPs) define methods for collecting, processing and evaluating bone marrow (BM), peripheral blood (PB) and peripheral blood stem cell harvest by IC and QRT-PCR. Sampling PB and BM is recommended at diagnosis, before and after myeloablative therapy and at the end of treatment. Peripheral blood stem cell products should be analysed at the time of harvest. Performing MD detection according to INRG SOPs will enable laboratories throughout the world to compare their results and thus facilitate quality-controlled multi-centre prospective trials to assess the clinical significance of MD and minimal residual disease in heterogeneous patient groups. PMID:19401690
A systematic analysis of commonly used antibodies in cancer diagnostics.
Gremel, Gabriela; Bergman, Julia; Djureinovic, Dijana; Edqvist, Per-Henrik; Maindad, Vikas; Bharambe, Bhavana M; Khan, Wasif Ali Z A; Navani, Sanjay; Elebro, Jacob; Jirström, Karin; Hellberg, Dan; Uhlén, Mathias; Micke, Patrick; Pontén, Fredrik
2014-01-01
Immunohistochemistry plays a pivotal role in cancer differential diagnostics. To identify the primary tumour from a metastasis specimen remains a significant challenge, despite the availability of an increasing number of antibodies. The aim of the present study was to provide evidence-based data on the diagnostic power of antibodies used frequently for clinical differential diagnostics. A tissue microarray cohort comprising 940 tumour samples, of which 502 were metastatic lesions, representing tumours from 18 different organs and four non-localized cancer types, was analysed using immunohistochemistry with 27 well-established antibodies used in clinical differential diagnostics. Few antibodies, e.g. prostate-specific antigen and thyroglobulin, showed a cancer type-related sensitivity and specificity of more than 95%. A majority of the antibodies showed a low degree of sensitivity and specificity for defined cancer types. Combinations of antibodies provided limited added value for differential diagnostics of cancer types. The results from analysing 27 diagnostic antibodies on consecutive sections of 940 defined tumours provide a unique repository of data that can empower a more optimal use of clinical immunohistochemistry. Our results highlight the benefit of immunohistochemistry and the unmet need for novel markers to improve differential diagnostics of cancer. © 2013 John Wiley & Sons Ltd.
Natural Environmental Service Support to NASA Vehicle, Technology, and Sensor Development Programs
NASA Technical Reports Server (NTRS)
1993-01-01
The research performed under this contract involved definition of the natural environmental parameters affecting the design, development, and operation of space and launch vehicles. The Universities Space Research Association (USRA) provided the manpower and resources to accomplish the following tasks: defining environmental parameters critical for design, development, and operation of launch vehicles; defining environmental forecasts required to assure optimal utilization of launch vehicles; and defining orbital environments of operation and developing models on environmental parameters affecting launch vehicle operations.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim of filling this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS² experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10⁵ in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
Data-driven sensor placement from coherent fluid structures
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
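The sketch below illustrates the pivoted-QR sensor selection on a toy rank-3 "flow" built from three spatial modes: a POD basis is computed by SVD, QR column pivoting on the transposed basis picks the sensor locations, and a snapshot is then reconstructed from those point samples alone; the synthetic data and mode count are assumptions for illustration, not the authors' flow data.

```python
import numpy as np
from scipy.linalg import qr

# Synthetic "flow" snapshots: 500 spatial points, 200 time samples, built from 3 modes.
x = np.linspace(0, 2 * np.pi, 500)
t = np.linspace(0, 10, 200)
snapshots = (np.outer(np.sin(x), np.cos(t)) +
             0.5 * np.outer(np.sin(2 * x), np.sin(2 * t)) +
             0.2 * np.outer(np.sin(3 * x), np.cos(3 * t)))

# POD basis via SVD; keep the r dominant coherent structures.
r = 3
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Psi = U[:, :r]

# Pivoted QR on Psi^T ranks spatial points; the first r pivots are the sensor locations.
_, _, pivots = qr(Psi.T, pivoting=True)
sensors = pivots[:r]

# Reconstruct one snapshot from its r point samples alone.
y = snapshots[sensors, 0]                  # sparse measurements at the chosen sensors
a = np.linalg.solve(Psi[sensors, :], y)    # modal coefficients from point samples
reconstruction = Psi @ a
err = np.linalg.norm(reconstruction - snapshots[:, 0]) / np.linalg.norm(snapshots[:, 0])
print(sensors, err)                        # relative error is near machine precision here
```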
Karimi, Mehdi; Dadfarnia, Shayessteh; Shabani, Ali Mohammad Haji; Tamaddon, Fatemeh; Azadi, Davood
2015-11-01
A deep eutectic liquid organic salt was used as the solvent, and a liquid phase microextraction (DES-LPME) combined with electrothermal atomic absorption spectrometry (ETAAS) was developed for the separation, preconcentration and determination of lead and cadmium in edible oils. A 4:1 mixture of deep eutectic solvent and 2% nitric acid (200 µL) was added to an oil sample. The mixture was vortexed, transferred into a water bath at 50 °C and stirred for 5 minutes. After the extraction was completed, the phases were separated by centrifugation, and the enriched analytes in the deep eutectic solvent phase were determined by ETAAS. Under optimized extraction conditions and for an oil sample of 28 g, enhancement factors of 198 and 195 and limits of detection (defined as 3 Sb/m) of 8 and 0.2 ng kg⁻¹ were achieved for lead and cadmium, respectively. The method was successfully applied to the determination of lead and cadmium in various edible oils. Copyright © 2015. Published by Elsevier B.V.